diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/CyberLink PowerDVD Ultra 19.0.2512.63 Crack __HOT__ Crack __HOT__.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/CyberLink PowerDVD Ultra 19.0.2512.63 Crack __HOT__ Crack __HOT__.md deleted file mode 100644 index f9280fa46909e191ad307de6f1253980e437d60f..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/CyberLink PowerDVD Ultra 19.0.2512.63 Crack __HOT__ Crack __HOT__.md +++ /dev/null @@ -1,149 +0,0 @@ - -

CyberLink PowerDVD Ultra 19.0.2512.63 Crack: The Ultimate Media Player for Windows

-

Introduction

-

If you are looking for a powerful and versatile media player that can handle any type of media file, you should check out CyberLink PowerDVD Ultra 19.0.2512.63 Crack. This is a cracked version of the original software that allows you to enjoy all the premium features without paying a dime.

-

CyberLink PowerDVD Ultra 19.0.2512.63 Crack


Download: https://byltly.com/2uKw36



-

What is CyberLink PowerDVD Ultra 19.0.2512.63 Crack?

-

CyberLink PowerDVD Ultra 19.0.2512.63 Crack is a program that lets you play, stream, download, and organize your media files on your Windows PC. It supports a wide range of formats, including DVD, Blu-ray, CD, MP4, MKV, AVI, WMV, FLV, MP3, AAC, WAV, FLAC, and more.

-

Why do you need CyberLink PowerDVD Ultra 19.0.2512.63 Crack?

-

You need CyberLink PowerDVD Ultra 19.0.2512.63 Crack because it offers many benefits that other media players don't have; we go over the most important ones in the features section below.

- -

Features of CyberLink PowerDVD Ultra 19.0.2512.63 Crack

-

In this section, we will go over some of the most impressive features of CyberLink PowerDVD Ultra 19.0.2512.63 Crack in more detail.

-

Playback of any media format

-

CyberLink PowerDVD Ultra 19.0.2512.63 Crack can play any media format you throw at it without hassle or compatibility issues. Whether it's a DVD, a Blu-ray disc, a CD, or a digital file on your hard drive or cloud storage, it can handle it with ease.

-

You can also play ISO files directly without mounting them or extracting them first.

-

Support for 4K, HDR, and 360-degree videos

-

CyberLink PowerDVD Ultra 19.0.2512.63 Crack supports the latest video technologies that deliver stunning visuals and immersive experiences.

-

You can watch 4K videos that have four times more pixels than Full HD videos for sharper and clearer images.
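That "four times" figure is straightforward to verify: a 3840 × 2160 4K frame contains 3840 × 2160 = 8,294,400 pixels, a 1920 × 1080 Full HD frame contains 2,073,600, and 8,294,400 / 2,073,600 = 4.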

-

You can watch HDR videos that have a wider range of colors and contrast for more realistic and lifelike scenes.

-

You can watch 360-degree videos that let you explore every angle of the video with your mouse or keyboard.

-

Enhanced audio quality with TrueTheater Sound

-

CyberLink PowerDVD Ultra 19.0.2512.63 Crack enhances your audio quality with TrueTheater Sound that applies various sound effects to your media files.

-

You can boost the volume level without distortion or clipping with Volume Booster.

-

You can improve the clarity and detail of dialogues and vocals with Dialogue Enhancer.

-

You can enhance the bass and depth of low-frequency sounds with Bass Enhancer.

-

You can create a surround sound effect with Virtual Surround that simulates a multi-channel speaker system.

-

Stream and cast media to any device

-

CyberLink PowerDVD Ultra 19.0.2512.63 Crack lets you stream and cast your media files to any device on your network or online.

-

You can stream your media files to smart TVs, game consoles, Chromecast devices, Apple TV devices, Roku devices, and more using DLNA or Miracast protocols.
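For the curious, device discovery on DLNA networks is done with SSDP, and you can watch it happen with a few lines of Python. This is only an illustrative sketch of the protocol, not PowerDVD's own code; the search target URN simply asks for UPnP media renderers.

```python
# Minimal SSDP discovery sketch: multicast an M-SEARCH and print responders.
import socket

MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: urn:schemas-upnp-org:device:MediaRenderer:1\r\n"
    "\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.settimeout(3)
sock.sendto(MSEARCH.encode(), ("239.255.255.250", 1900))
try:
    while True:
        data, addr = sock.recvfrom(65507)
        # Each answer starts with a status line such as "HTTP/1.1 200 OK".
        print(addr, data.split(b"\r\n")[0])
except socket.timeout:
    pass
```

If a renderer such as a smart TV or game console is on the network, each response line identifies a device that media could be streamed to.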

-

You can also cast your media files to any device using CyberLink's cloud service that lets you access your files from anywhere.

-

Download and watch videos offline

-

CyberLink PowerDVD Ultra 19.0.2512.63 Crack allows you to download and watch online videos offline with its built-in YouTube downloader.

-

You can download videos from YouTube in various resolutions and formats such as MP4, WebM, FLV, and more.

-

You can also download entire playlists or channels with one click.

-

You can then watch your downloaded videos offline using CyberLink PowerDVD Ultra 19 or transfer them to other devices for later viewing.

-

Organize and manage your media library

-

CyberLink PowerDVD Ultra 19 helps you organize and manage your media library with its intuitive interface and smart features.

-

You can browse your media files by folders, albums, artists, genres, or ratings.

-

You can also use face recognition to sort your photos by people, scene search to find specific moments in your videos, and auto-tagging to add metadata to your files automatically.

-

How to install and activate CyberLink PowerDVD Ultra 19 Crack

-

In this section, we will show you how to install and activate CyberLink PowerDVD Ultra 19 Crack on your Windows PC.

-

Download the setup file and crack file from the link below

-

The first step is to download the setup file and crack file from the link below:

-
Setup File: https://www.cyberlink.com/downloads/trials/powerdvd-ultra/download_en_US.html
Crack File: https://cracksway.com/cyberlink-powerdvd-crack/
-

Save them in a folder on your PC where you can easily find them later.

-

Install the setup file and run the program

-

The next step is to install the setup file by following these steps:

-
    -
  1. Double-click on the setup file to launch the installation wizard.
  2. Accept the license agreement and click Next.
  3. Select the destination folder where you want to install the program and click Next.
  4. Select the components you want to install and click Next.
  5. Select whether you want to create shortcuts on your desktop or start menu and click Next.
  6. Click Install to begin the installation process.
  7. Wait for the installation to complete and click Finish.
  8. Run the program from the shortcut on your desktop or start menu.
-

Copy the crack file and paste it into the installation folder

-

The final step is to copy the crack file and paste it into the installation folder by following these steps:

-
    -
  1. Right-click on the crack file and select Copy.
  2. Go to the installation folder where you installed the program. The default location is C:\Program Files (x86)\CyberLink\PowerDVD19.
  3. Right-click on an empty space and select Paste.
  4. Click Yes to replace the existing file.
  5. Close the folder and run the program again.
-

Congratulations! You have successfully installed and activated CyberLink PowerDVD Ultra 19.0.2512.63 Crack on your PC.

-

Conclusion

-

In this article, we have shown you what CyberLink PowerDVD Ultra 19.0.2512.63 Crack is, why you need it, what features it offers, and how to install and activate it on your PC.

-

We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.

-

If you want to download and try CyberLink PowerDVD Ultra 19.0.2512.63 Crack for yourself, you can use the link below to get it for free.

-

Thank you for reading and happy watching!

-

FAQs

-

Here are some frequently asked questions about CyberLink PowerDVD Ultra 19.0.2512.63 Crack:

- -

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/((FREE)) Free Download Marc Mentat Software.md b/spaces/1gistliPinn/ChatGPT4/Examples/((FREE)) Free Download Marc Mentat Software.md deleted file mode 100644 index dc7224f3400fd75cece5919d1213d4ae4dfad73c..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/((FREE)) Free Download Marc Mentat Software.md +++ /dev/null @@ -1,6 +0,0 @@ - -

The analysis and simulation of structural behavior are complicated and expensive. The finite element method is an effective approach because it is easier to specify boundary conditions and loads than to describe the complex mechanical behavior of an entire component directly. In a finite element analysis, the component is modeled as a solid divided into small elements, and a mathematical function called the shape function approximates the behavior within each element. The shape functions define the geometry of the component and provide the basis for the analysis. Finite element analysis is integral to many disciplines of engineering because it allows structural behavior to be analyzed more accurately, and its importance has led many companies to develop software that performs it.
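To make the shape-function idea concrete, here is a minimal sketch of a 1D bar discretized with linear elements. It is plain Python/NumPy rather than anything Marc- or Mentat-specific, and the material values and end load are illustrative placeholders.

```python
import numpy as np

def element_stiffness(E, A, L):
    # Linear shape functions N1(x) = 1 - x/L and N2(x) = x/L lead to the
    # classic 2x2 stiffness matrix of a 1D bar element.
    return (E * A / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])

def assemble_bar(n_elements, E=210e9, A=1e-4, length=1.0):
    # Scatter each element's 2x2 matrix into the global (n+1) x (n+1) matrix.
    Le = length / n_elements
    ke = element_stiffness(E, A, Le)
    K = np.zeros((n_elements + 1, n_elements + 1))
    for e in range(n_elements):
        K[e:e + 2, e:e + 2] += ke
    return K

K = assemble_bar(4)                    # steel-like bar split into 4 elements
f = np.zeros(5)
f[-1] = 1e3                            # 1 kN axial load at the free end
u = np.linalg.solve(K[1:, 1:], f[1:])  # node 0 is fixed, so drop its row/column
print(u)                               # displacements grow linearly toward the load
```

The same ingredients, shape functions per element, assembly of a global system, and boundary conditions, scale up to the 2D and 3D solid elements that commercial packages solve.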

-

A general-purpose finite element program is much easier to use than specialty structural analysis software. The most basic finite element programs are easy to use: the user interacts directly with the program and does not need to learn special commands and symbols, which makes it possible to work quickly and effectively. Some finite element programs can simulate simple mechanical behavior; the user can specify loads, boundary conditions, and other aspects of the analysis, and can also import other data, such as geometric and material data. These programs often have very simple, usually non-graphical, user interfaces, but some of them can still be used to perform a limited amount of analysis.

-

free download marc mentat software


Download Zip === https://imgfil.com/2uxYF3



-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Accelrys Materials Studio V6 Windows-CLoNY ISO Download Pc 2021.md b/spaces/1gistliPinn/ChatGPT4/Examples/Accelrys Materials Studio V6 Windows-CLoNY ISO Download Pc 2021.md deleted file mode 100644 index d2bf3b1f1b9ebf6e9f641850d16b305a14bdf616..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Accelrys Materials Studio V6 Windows-CLoNY ISO Download Pc 2021.md +++ /dev/null @@ -1,6 +0,0 @@ -

Accelrys Materials Studio v6 Windows-CLoNY ISO download pc


DOWNLOAD: https://imgfil.com/2uxXdu



-
-
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download One Man Band 10 Full Version.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download One Man Band 10 Full Version.md deleted file mode 100644 index 1bd6c1acf1e3ad92c17a4e8bb69c03517556e1a9..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download One Man Band 10 Full Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

download one man band 10 full version


Download ✵✵✵ https://imgfil.com/2uxZ4c



-
-"Werner Hirzel One Man Band", World-Record Holder for 51 Piece One ... welkom bike freaks and bicycle lovers on nthis blog full of nice bicycles, cool bike ... and since then three more versions have evolved from the original, each one ... to download showing a picture of a street musician, a one-man band. ... Amanda 10. 4d29de3e1b
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cute Animal Match A Free and Fun APK Game for Android Devices.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cute Animal Match A Free and Fun APK Game for Android Devices.md deleted file mode 100644 index 4919488f6e357550bcf42fa949104f375c55f9ec..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cute Animal Match A Free and Fun APK Game for Android Devices.md +++ /dev/null @@ -1,25 +0,0 @@ - - - -' % label - images.append(image_numpy.transpose([2, 0, 1])) - idx += 1 - if idx % ncols == 0: - label_html += '%s' % label_html_row - label_html_row = '' - white_image = np.ones_like(image_numpy.transpose([2, 0, 1])) * 255 - while idx % ncols != 0: - images.append(white_image) - label_html_row += '' - idx += 1 - if label_html_row != '': - label_html += '%s' % label_html_row - try: - self.vis.images(images, nrow=ncols, win=self.display_id + 1, - padding=2, opts=dict(title=title + ' images')) - label_html = '
-
Cute Animal Match APK: A Fun and Educational Game for Kids

Do you love animals and puzzles? Do you want to play a game that is both fun and educational for your kids? If yes, then you should try Cute Animal Match APK, a free and safe game that will keep you and your kids entertained for hours. In this article, we will tell you everything you need to know about this game: what it is, how to download and install it, how to play it, its features and benefits, and some tips and tricks for playing it. Let's get started!

What is Cute Animal Match APK?

- Cute Animal Match APK is a game that lets you connect cute animals and solve puzzles. It is developed by Nice2Meet, a company that specializes in creating educational games for kids. The game is suitable for all ages, but especially for preschoolers who want to learn about animals, numbers, colors, shapes, and more. The game has over 100 levels of varying difficulty, each with a different animal theme and puzzle. You can play the game offline or online, and you can also share your progress and achievements with your friends on social media.

How to download and install Cute Animal Match APK?

- Downloading and installing Cute Animal Match APK is very easy. You can follow these simple steps:
- Go to [Cute Animal Match APK for Android Download - APKPure.com] in your browser.
- Click on the green "Download APK" button.
- Wait for the file to download on your device.
- Open the file and follow the instructions to install the game.
- Enjoy playing Cute Animal Match APK!

How to play Cute Animal Match APK?

- Playing Cute Animal Match APK is very simple. You just need to swipe your finger on the screen to connect two or more animals of the same kind. The more animals you connect, the more points you get. You also need to complete the objectives of each level, such as collecting a certain number of animals, clearing a certain number of tiles, or reaching a certain score. You can use power-ups to help you in your gameplay, such as bombs, magnets, or shuffles. You can also earn coins by completing levels or watching ads, which you can use to buy more power-ups or unlock new animals.

Connect the animals

- To connect the animals, you need to swipe your finger on the screen in any direction. You can connect animals horizontally, vertically, or diagonally. You can also make loops or zigzags to connect more animals. The more animals you connect, the higher your score will be. You can also create combos by connecting multiple groups of animals in a row.
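To see what those rules amount to, here is a tiny illustrative sketch, not the game's actual code, of how a swiped chain could be validated and scored: every step must move to an adjacent tile (diagonals included) holding the same animal, and longer chains are worth more.

```python
def chain_score(board, path):
    # board: 2D list of animal ids; path: list of (row, col) tiles in swipe order.
    if len(path) < 2:
        return 0
    kind = board[path[0][0]][path[0][1]]
    for (r1, c1), (r2, c2) in zip(path, path[1:]):
        adjacent = max(abs(r1 - r2), abs(c1 - c2)) == 1
        if not adjacent or board[r2][c2] != kind:
            return 0  # broken chain: non-adjacent step or a different animal
    return 10 * len(path)  # a simple length-based score

board = [[1, 1, 2],
         [2, 1, 2],
         [1, 1, 1]]
print(chain_score(board, [(0, 0), (0, 1), (1, 1), (2, 0)]))  # prints 40
```

Real implementations add animations and power-up effects on top, but the core check is just adjacency plus matching kinds.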

Use the power-ups

- Power-ups are special items that can help you in your gameplay. You can use them by tapping on them on the screen. There are three types of power-ups in Cute Animal Match APK:
- The bomb: It destroys all the cute animals within a radius of where it goes off.
- The magnet: It attracts all the animals of the same kind as the one you tap on.
- The shuffle: It shuffles all the animals on the board.
You can get power-ups by connecting five or more animals of the same kind, or by buying them with coins.

Complete the levels

- To complete a level, you need to fulfill the objectives that are shown at the top of the screen. The objectives can vary depending on the level, such as:
- Collect a certain number of animals, such as 10 cats, 15 dogs, or 20 rabbits.
- Clear a certain number of tiles, such as 30 grass tiles, 40 sand tiles, or 50 water tiles.
- Reach a certain score, such as 1000 points, 2000 points, or 3000 points.
You have a limited number of moves to complete each level, so use them wisely. You can see how many moves you have left at the bottom of the screen. If you run out of moves before completing the objectives, you will lose the level and have to try again. If you complete the objectives before running out of moves, you will win the level and get bonus points for the remaining moves.

What are the features and benefits of Cute Animal Match APK?

- Cute Animal Match APK is not just a fun game, but also a beneficial one. Here are some of the features and benefits of playing this game:

Cute and colorful graphics

- The game has cute and colorful graphics that will appeal to kids and adults alike. The animals are adorable and animated, and the backgrounds are bright and cheerful. The game also has smooth and easy controls that make it enjoyable to play.

Various animals and puzzles

- The game has over 100 levels of different animals and puzzles. You can meet various animals from different habitats, such as cats, dogs, rabbits, pandas, lions, elephants, penguins, dolphins, and more. You can also solve different puzzles that challenge your logic and creativity, such as matching animals by color, shape, or number.

Educational and entertaining gameplay

- The game is not only entertaining, but also educational for kids. It helps them learn about animals, numbers, colors, shapes, and more. It also improves their memory, concentration, hand-eye coordination, and problem-solving skills. The game is suitable for all ages, but especially for preschoolers who want to have fun while learning.

Free and safe to use

- The game is free and safe to use. You don't need to pay anything to download or play it. You also don't need to worry about any viruses or malware that might harm your device. The game is tested and verified by APKPure.com, a trusted source for downloading Android apps.

What are some tips and tricks for playing Cute Animal Match APK?

- If you want to play Cute Animal Match APK like a pro, here are some tips and tricks that you can use:

Plan your moves ahead

- Before you swipe your finger on the screen, take a moment to look at the board and plan your moves ahead. Try to connect as many animals as possible in one swipe, and avoid leaving isolated animals that are hard to match. Also, try to match the animals that are related to the objectives first, such as the ones that have a number or a color on them.

Save your power-ups for later

- Power-ups can be very helpful in your gameplay, but they are also limited in number. You can get them by connecting five or more animals of the same kind, or by buying them with coins. However, you should save them for later when you really need them, such as when you are stuck or running out of moves. Don't waste them on easy levels or unnecessary matches.

Watch ads for extra rewards

- If you want to get more coins or power-ups without spending real money, you can watch ads for extra rewards. You can watch ads after completing a level or when you run out of moves. You can also watch ads to get more lives when you lose all of them. Watching ads is optional and voluntary, but it can help you in your gameplay.

Conclusion

- Cute Animal Match APK is a fun and educational game that lets you connect cute animals and solve puzzles. It is suitable for all ages, but especially for preschoolers who want to learn about animals, numbers, colors, shapes, and more. The game has over 100 levels of varying difficulty, each with a different animal theme and puzzle. You can play the game offline or online, and you can also share your progress and achievements with your friends on social media. The game has cute and colorful graphics, various animals and puzzles, and educational and entertaining gameplay. The game is free and safe to use, and you can download it from APKPure.com. If you want to play Cute Animal Match APK like a pro, you can use some tips and tricks, such as planning your moves ahead, saving your power-ups for later, and watching ads for extra rewards. Cute Animal Match APK is a game that you and your kids will love, so download it today and have fun!

FAQs

- Here are some frequently asked questions about Cute Animal Match APK:
- Q: Is Cute Animal Match APK compatible with my device?
- A: Cute Animal Match APK is compatible with most Android devices that have Android 4.4 or higher.
- Q: How can I update Cute Animal Match APK to the latest version?
- A: You can update Cute Animal Match APK by visiting [Cute Animal Match APK for Android Download - APKPure.com] and downloading the latest version of the game.
- Q: How can I contact the developer of Cute Animal Match APK?
- A: You can contact the developer by visiting their website at [Nice2Meet] or by sending them an email at nice2meet@gmail.com.
- Q: How can I rate and review Cute Animal Match APK?
- A: You can rate and review Cute Animal Match APK by visiting [Cute Animal Match APK for Android Download - APKPure.com] and clicking on the "Rate" or "Review" button.
- Q: How can I share Cute Animal Match APK with my friends?
- A: You can share Cute Animal Match APK by clicking on the "Share" button on the game screen. You can choose to share the game via Facebook, Twitter, WhatsApp, or other social media platforms.

-

cute animal match apk


Download File ->->->-> https://urlin.us/2uSZI7



-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Final Bricks Breaker Mod APK v1.0.54 for Android.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Final Bricks Breaker Mod APK v1.0.54 for Android.md deleted file mode 100644 index 0ef80cd4fe43de99a5020facd0421e54e58889f6..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Final Bricks Breaker Mod APK v1.0.54 for Android.md +++ /dev/null @@ -1,100 +0,0 @@ - -

Final Bricks Breaker Mod APK: A Fun and Challenging Arcade Game

-

If you are looking for a simple yet addictive arcade game to kill some time, you should try Final Bricks Breaker. This game is a classic brick-breaking game with a modern twist. You can enjoy breaking bricks with different shapes, colors, and effects, and use various power-ups to enhance your gameplay. In this article, we will tell you more about Final Bricks Breaker and why you should download its mod apk version.

-

final bricks breaker mod apk


DOWNLOAD >> https://urlin.us/2uT0Fz



-

What is Final Bricks Breaker?

-

Final Bricks Breaker is an arcade game developed by mobirix, a popular developer of casual games. The game has over 10 million downloads on the Google Play Store and a 4.4-star rating from more than 100,000 users. The game is suitable for all ages and can be played offline or online.

-

The gameplay of Final Bricks Breaker

-

The gameplay of Final Bricks Breaker is simple and intuitive. You just need to swipe your finger on the screen to control a paddle at the bottom and bounce a ball to hit the bricks at the top. Your goal is to break all the bricks in each level and clear the stage. The game has hundreds of levels with different layouts, themes, and difficulties. Some bricks have special effects, such as moving, rotating, exploding, or changing colors. You can also collect coins and gems by breaking bricks or completing missions. You can use these currencies to buy new balls, paddles, or power-ups.

-

The features of Final Bricks Breaker

-

Final Bricks Breaker has many features that make it fun and enjoyable to play. Some of these features are:

-
    -
  • Various modes: You can choose from different modes, such as Classic, Stage, Multiplayer, or Challenge mode. Each mode has its own rules and objectives.
  • Power-ups: You can use power-ups to help you break bricks faster or easier. Power-ups include fireball, laser, magnet, bomb, and extra life.
  • Achievements and leaderboards: You can unlock achievements by completing certain tasks or reaching milestones. You can also compete with other players around the world on the leaderboards.
  • Customization: You can customize your ball and paddle with different colors, shapes, and designs. You can also change the background and sound effects of the game.
-

Why download Final Bricks Breaker Mod APK?

-

While Final Bricks Breaker is free to play, it has some limitations and drawbacks that may affect your gaming experience. For example, you may encounter ads that pop up randomly or interrupt your gameplay. You may also run out of coins or gems quickly and have to wait for them to regenerate or buy them with real money. Moreover, some power-ups and items may be locked or require a certain level to unlock.

-

That's why we recommend you download Final Bricks Breaker Mod APK from our website. This mod apk version will give you unlimited coins and gems, so you can buy anything you want without worrying about the cost. You will also get unlimited lives, so you can play as long as you want without losing progress. Additionally, you will get all the power-ups and items unlocked from the start, so you can enjoy the game to the fullest. And best of all, you will get rid of all the annoying ads that ruin your fun.

-

The benefits of Final Bricks Breaker Mod APK

-

Here are some of the benefits of downloading Final Bricks Breaker Mod APK: unlimited coins and gems, unlimited lives, every power-up and item unlocked from the start, and a completely ad-free experience.

-
%s
%s
' % label_html - self.vis.text(table_css + label_html, win=self.display_id + 2, - opts=dict(title=title + ' labels')) - except VisdomExceptionBase: - self.create_visdom_connections() - - else: # show each image in a separate visdom panel; - idx = 1 - try: - for label, image in visuals.items(): - image_numpy = util.tensor2im(image) - self.vis.image(image_numpy.transpose([2, 0, 1]), opts=dict(title=label), - win=self.display_id + idx) - idx += 1 - except VisdomExceptionBase: - self.create_visdom_connections() - - if self.use_wandb: - columns = [key for key, _ in visuals.items()] - columns.insert(0, 'epoch') - result_table = wandb.Table(columns=columns) - table_row = [epoch] - ims_dict = {} - for label, image in visuals.items(): - image_numpy = util.tensor2im(image) - wandb_image = wandb.Image(image_numpy) - table_row.append(wandb_image) - ims_dict[label] = wandb_image - self.wandb_run.log(ims_dict) - if epoch != self.current_epoch: - self.current_epoch = epoch - result_table.add_data(*table_row) - self.wandb_run.log({"Result": result_table}) - - if self.use_html and (save_result or not self.saved): # save images to an HTML file if they haven't been saved. - self.saved = True - # save images to the disk - for label, image in visuals.items(): - image_numpy = util.tensor2im(image) - img_path = os.path.join(self.img_dir, 'epoch%.3d_%s.png' % (epoch, label)) - util.save_image(image_numpy, img_path) - - # update website - webpage = html.HTML(self.web_dir, 'Experiment name = %s' % self.name, refresh=1) - for n in range(epoch, 0, -1): - webpage.add_header('epoch [%d]' % n) - ims, txts, links = [], [], [] - - for label, image_numpy in visuals.items(): - image_numpy = util.tensor2im(image) - img_path = 'epoch%.3d_%s.png' % (n, label) - ims.append(img_path) - txts.append(label) - links.append(img_path) - webpage.add_images(ims, txts, links, width=self.win_size) - webpage.save() - - def plot_current_losses(self, epoch, counter_ratio, losses): - """display the current losses on visdom display: dictionary of error labels and values - - Parameters: - epoch (int) -- current epoch - counter_ratio (float) -- progress (percentage) in the current epoch, between 0 to 1 - losses (OrderedDict) -- training losses stored in the format of (name, float) pairs - """ - if not hasattr(self, 'plot_data'): - self.plot_data = {'X': [], 'Y': [], 'legend': list(losses.keys())} - self.plot_data['X'].append(epoch + counter_ratio) - self.plot_data['Y'].append([losses[k] for k in self.plot_data['legend']]) - try: - self.vis.line( - X=np.stack([np.array(self.plot_data['X'])] * len(self.plot_data['legend']), 1), - Y=np.array(self.plot_data['Y']), - opts={ - 'title': self.name + ' loss over time', - 'legend': self.plot_data['legend'], - 'xlabel': 'epoch', - 'ylabel': 'loss'}, - win=self.display_id) - except VisdomExceptionBase: - self.create_visdom_connections() - if self.use_wandb: - self.wandb_run.log(losses) - - # losses: same format as |losses| of plot_current_losses - def print_current_losses(self, epoch, iters, losses, t_comp, t_data): - """print current losses on console; also save the losses to the disk - - Parameters: - epoch (int) -- current epoch - iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch) - losses (OrderedDict) -- training losses stored in the format of (name, float) pairs - t_comp (float) -- computational time per data point (normalized by batch_size) - t_data (float) -- data loading time per data point (normalized by batch_size) - """ - message = '(epoch: %d, 
iters: %d, time: %.3f, data: %.3f) ' % (epoch, iters, t_comp, t_data) - for k, v in losses.items(): - message += '%s: %.3f ' % (k, v) - - print(message) # print the message - with open(self.log_name, "a") as log_file: - log_file.write('%s\n' % message) # save the message diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/train_boundary.py b/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/train_boundary.py deleted file mode 100644 index 710d062bc4b42913fcc5b12bd545e47af00c7123..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/train_boundary.py +++ /dev/null @@ -1,158 +0,0 @@ - -import numpy as np -from sklearn import svm - - - - - -def train_boundary(latent_codes, - scores, - chosen_num_or_ratio=0.02, - split_ratio=0.7, - invalid_value=None, - logger=None, - logger_name='train_boundary'): - """Trains boundary in latent space with offline predicted attribute scores. - - Given a collection of latent codes and the attribute scores predicted from the - corresponding images, this function will train a linear SVM by treating it as - a bi-classification problem. Basically, the samples with highest attribute - scores are treated as positive samples, while those with lowest scores as - negative. For now, the latent code can ONLY be with 1 dimension. - - NOTE: The returned boundary is with shape (1, latent_space_dim), and also - normalized with unit norm. - - Args: - latent_codes: Input latent codes as training data. - scores: Input attribute scores used to generate training labels. - chosen_num_or_ratio: How many samples will be chosen as positive (negative) - samples. If this field lies in range (0, 0.5], `chosen_num_or_ratio * - latent_codes_num` will be used. Otherwise, `min(chosen_num_or_ratio, - 0.5 * latent_codes_num)` will be used. (default: 0.02) - split_ratio: Ratio to split training and validation sets. (default: 0.7) - invalid_value: This field is used to filter out data. (default: None) - logger: Logger for recording log messages. If set as `None`, a default - logger, which prints messages from all levels to screen, will be created. - (default: None) - - Returns: - A decision boundary with type `numpy.ndarray`. - - Raises: - ValueError: If the input `latent_codes` or `scores` are with invalid format. 
- """ -# if not logger: -# logger = setup_logger(work_dir='', logger_name=logger_name) - - if (not isinstance(latent_codes, np.ndarray) or - not len(latent_codes.shape) == 2): - raise ValueError(f'Input `latent_codes` should be with type' - f'`numpy.ndarray`, and shape [num_samples, ' - f'latent_space_dim]!') - num_samples = latent_codes.shape[0] - latent_space_dim = latent_codes.shape[1] - if (not isinstance(scores, np.ndarray) or not len(scores.shape) == 2 or - not scores.shape[0] == num_samples or not scores.shape[1] == 1): - raise ValueError(f'Input `scores` should be with type `numpy.ndarray`, and ' - f'shape [num_samples, 1], where `num_samples` should be ' - f'exactly same as that of input `latent_codes`!') - if chosen_num_or_ratio <= 0: - raise ValueError(f'Input `chosen_num_or_ratio` should be positive, ' - f'but {chosen_num_or_ratio} received!') - -# logger.info(f'Filtering training data.') - print('Filtering training data.') - if invalid_value is not None: - latent_codes = latent_codes[scores[:, 0] != invalid_value] - scores = scores[scores[:, 0] != invalid_value] - -# logger.info(f'Sorting scores to get positive and negative samples.') - print('Sorting scores to get positive and negative samples.') - - sorted_idx = np.argsort(scores, axis=0)[::-1, 0] - latent_codes = latent_codes[sorted_idx] - scores = scores[sorted_idx] - num_samples = latent_codes.shape[0] - if 0 < chosen_num_or_ratio <= 1: - chosen_num = int(num_samples * chosen_num_or_ratio) - else: - chosen_num = int(chosen_num_or_ratio) - chosen_num = min(chosen_num, num_samples // 2) - -# logger.info(f'Spliting training and validation sets:') - print('Filtering training data.') - - train_num = int(chosen_num * split_ratio) - val_num = chosen_num - train_num - # Positive samples. - positive_idx = np.arange(chosen_num) - np.random.shuffle(positive_idx) - positive_train = latent_codes[:chosen_num][positive_idx[:train_num]] - positive_val = latent_codes[:chosen_num][positive_idx[train_num:]] - # Negative samples. - negative_idx = np.arange(chosen_num) - np.random.shuffle(negative_idx) - negative_train = latent_codes[-chosen_num:][negative_idx[:train_num]] - negative_val = latent_codes[-chosen_num:][negative_idx[train_num:]] - # Training set. - train_data = np.concatenate([positive_train, negative_train], axis=0) - train_label = np.concatenate([np.ones(train_num, dtype=np.int), - np.zeros(train_num, dtype=np.int)], axis=0) -# logger.info(f' Training: {train_num} positive, {train_num} negative.') - print(f' Training: {train_num} positive, {train_num} negative.') - # Validation set. - val_data = np.concatenate([positive_val, negative_val], axis=0) - val_label = np.concatenate([np.ones(val_num, dtype=np.int), - np.zeros(val_num, dtype=np.int)], axis=0) -# logger.info(f' Validation: {val_num} positive, {val_num} negative.') - print(f' Validation: {val_num} positive, {val_num} negative.') - - # Remaining set. 
- remaining_num = num_samples - chosen_num * 2 - remaining_data = latent_codes[chosen_num:-chosen_num] - remaining_scores = scores[chosen_num:-chosen_num] - decision_value = (scores[0] + scores[-1]) / 2 - remaining_label = np.ones(remaining_num, dtype=np.int) - remaining_label[remaining_scores.ravel() < decision_value] = 0 - remaining_positive_num = np.sum(remaining_label == 1) - remaining_negative_num = np.sum(remaining_label == 0) -# logger.info(f' Remaining: {remaining_positive_num} positive, ' -# f'{remaining_negative_num} negative.') - print(f' Remaining: {remaining_positive_num} positive, ' - f'{remaining_negative_num} negative.') -# logger.info(f'Training boundary.') - print(f'Training boundary.') - - clf = svm.SVC(kernel='linear') - classifier = clf.fit(train_data, train_label) -# logger.info(f'Finish training.') - print(f'Finish training.') - - - if val_num: - val_prediction = classifier.predict(val_data) - correct_num = np.sum(val_label == val_prediction) -# logger.info(f'Accuracy for validation set: ' -# f'{correct_num} / {val_num * 2} = ' -# f'{correct_num / (val_num * 2):.6f}') - print(f'Accuracy for validation set: ' - f'{correct_num} / {val_num * 2} = ' - f'{correct_num / (val_num * 2):.6f}') - vacc=correct_num/len(val_label) - ''' - if remaining_num: - remaining_prediction = classifier.predict(remaining_data) - correct_num = np.sum(remaining_label == remaining_prediction) - logger.info(f'Accuracy for remaining set: ' - f'{correct_num} / {remaining_num} = ' - f'{correct_num / remaining_num:.6f}') - ''' - a = classifier.coef_.reshape(1, latent_space_dim).astype(np.float32) - return a / np.linalg.norm(a),vacc - - - - - diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/mulit_token_textual_inversion/textual_inversion_flax.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/mulit_token_textual_inversion/textual_inversion_flax.py deleted file mode 100644 index ecc89f98298e3e4205581fee1689761c519bc4e4..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/mulit_token_textual_inversion/textual_inversion_flax.py +++ /dev/null @@ -1,654 +0,0 @@ -import argparse -import logging -import math -import os -import random -from pathlib import Path - -import jax -import jax.numpy as jnp -import numpy as np -import optax -import PIL -import torch -import torch.utils.checkpoint -import transformers -from flax import jax_utils -from flax.training import train_state -from flax.training.common_utils import shard -from huggingface_hub import create_repo, upload_folder - -# TODO: remove and import from diffusers.utils when the new version of diffusers is released -from packaging import version -from PIL import Image -from torch.utils.data import Dataset -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPImageProcessor, CLIPTokenizer, FlaxCLIPTextModel, set_seed - -from diffusers import ( - FlaxAutoencoderKL, - FlaxDDPMScheduler, - FlaxPNDMScheduler, - FlaxStableDiffusionPipeline, - FlaxUNet2DConditionModel, -) -from diffusers.pipelines.stable_diffusion import FlaxStableDiffusionSafetyChecker -from diffusers.utils import check_min_version - - -if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"): - PIL_INTERPOLATION = { - "linear": PIL.Image.Resampling.BILINEAR, - "bilinear": PIL.Image.Resampling.BILINEAR, - "bicubic": PIL.Image.Resampling.BICUBIC, - "lanczos": 
PIL.Image.Resampling.LANCZOS, - "nearest": PIL.Image.Resampling.NEAREST, - } -else: - PIL_INTERPOLATION = { - "linear": PIL.Image.LINEAR, - "bilinear": PIL.Image.BILINEAR, - "bicubic": PIL.Image.BICUBIC, - "lanczos": PIL.Image.LANCZOS, - "nearest": PIL.Image.NEAREST, - } -# ------------------------------------------------------------------------------ - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.14.0.dev0") - -logger = logging.getLogger(__name__) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--train_data_dir", type=str, default=None, required=True, help="A folder containing the training data." - ) - parser.add_argument( - "--placeholder_token", - type=str, - default=None, - required=True, - help="A token to use as a placeholder for the concept.", - ) - parser.add_argument( - "--initializer_token", type=str, default=None, required=True, help="A token to use as initializer word." - ) - parser.add_argument("--learnable_property", type=str, default="object", help="Choose between 'object' and 'style'") - parser.add_argument("--repeats", type=int, default=100, help="How many times to repeat the training data.") - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=42, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution." - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=5000, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=True, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. 
Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument( - "--use_auth_token", - action="store_true", - help=( - "Will use the token generated when running `huggingface-cli login` (necessary to use this script with" - " private models)." - ), - ) - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.train_data_dir is None: - raise ValueError("You must specify a train data directory.") - - return args - - -imagenet_templates_small = [ - "a photo of a {}", - "a rendering of a {}", - "a cropped photo of the {}", - "the photo of a {}", - "a photo of a clean {}", - "a photo of a dirty {}", - "a dark photo of the {}", - "a photo of my {}", - "a photo of the cool {}", - "a close-up photo of a {}", - "a bright photo of the {}", - "a cropped photo of a {}", - "a photo of the {}", - "a good photo of the {}", - "a photo of one {}", - "a close-up photo of the {}", - "a rendition of the {}", - "a photo of the clean {}", - "a rendition of a {}", - "a photo of a nice {}", - "a good photo of a {}", - "a photo of the nice {}", - "a photo of the small {}", - "a photo of the weird {}", - "a photo of the large {}", - "a photo of a cool {}", - "a photo of a small {}", -] - -imagenet_style_templates_small = [ - "a painting in the style of {}", - "a rendering in the style of {}", - "a cropped painting in the style of {}", - "the painting in the style of {}", - "a clean painting in the style of {}", - "a dirty painting in the style of {}", - "a dark painting in the style of {}", - "a picture in the style of {}", - "a cool painting in the style of {}", - "a close-up painting in the style of {}", - "a bright painting in the style of {}", - "a cropped painting in the style of {}", - "a good painting in the style of {}", - "a close-up painting in the style of {}", - "a rendition in the style of {}", - "a nice painting in the style of {}", - "a small painting in the style of {}", - "a weird painting in the style of {}", - "a large painting in the style of {}", -] - - -class TextualInversionDataset(Dataset): - def __init__( - self, - data_root, - tokenizer, - learnable_property="object", # [object, style] - size=512, - repeats=100, - 
interpolation="bicubic", - flip_p=0.5, - set="train", - placeholder_token="*", - center_crop=False, - ): - self.data_root = data_root - self.tokenizer = tokenizer - self.learnable_property = learnable_property - self.size = size - self.placeholder_token = placeholder_token - self.center_crop = center_crop - self.flip_p = flip_p - - self.image_paths = [os.path.join(self.data_root, file_path) for file_path in os.listdir(self.data_root)] - - self.num_images = len(self.image_paths) - self._length = self.num_images - - if set == "train": - self._length = self.num_images * repeats - - self.interpolation = { - "linear": PIL_INTERPOLATION["linear"], - "bilinear": PIL_INTERPOLATION["bilinear"], - "bicubic": PIL_INTERPOLATION["bicubic"], - "lanczos": PIL_INTERPOLATION["lanczos"], - }[interpolation] - - self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small - self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p) - - def __len__(self): - return self._length - - def __getitem__(self, i): - example = {} - image = Image.open(self.image_paths[i % self.num_images]) - - if not image.mode == "RGB": - image = image.convert("RGB") - - placeholder_string = self.placeholder_token - text = random.choice(self.templates).format(placeholder_string) - - example["input_ids"] = self.tokenizer( - text, - padding="max_length", - truncation=True, - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids[0] - - # default to score-sde preprocessing - img = np.array(image).astype(np.uint8) - - if self.center_crop: - crop = min(img.shape[0], img.shape[1]) - ( - h, - w, - ) = ( - img.shape[0], - img.shape[1], - ) - img = img[(h - crop) // 2 : (h + crop) // 2, (w - crop) // 2 : (w + crop) // 2] - - image = Image.fromarray(img) - image = image.resize((self.size, self.size), resample=self.interpolation) - - image = self.flip_transform(image) - image = np.array(image).astype(np.uint8) - image = (image / 127.5 - 1.0).astype(np.float32) - - example["pixel_values"] = torch.from_numpy(image).permute(2, 0, 1) - return example - - -def resize_token_embeddings(model, new_num_tokens, initializer_token_id, placeholder_token_id, rng): - if model.config.vocab_size == new_num_tokens or new_num_tokens is None: - return - model.config.vocab_size = new_num_tokens - - params = model.params - old_embeddings = params["text_model"]["embeddings"]["token_embedding"]["embedding"] - old_num_tokens, emb_dim = old_embeddings.shape - - initializer = jax.nn.initializers.normal() - - new_embeddings = initializer(rng, (new_num_tokens, emb_dim)) - new_embeddings = new_embeddings.at[:old_num_tokens].set(old_embeddings) - new_embeddings = new_embeddings.at[placeholder_token_id].set(new_embeddings[initializer_token_id]) - params["text_model"]["embeddings"]["token_embedding"]["embedding"] = new_embeddings - - model.params = params - return model - - -def get_params_to_save(params): - return jax.device_get(jax.tree_util.tree_map(lambda x: x[0], params)) - - -def main(): - args = parse_args() - - if args.seed is not None: - set_seed(args.seed) - - if jax.process_index() == 0: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Make one log on every process with the configuration for debugging. 
- logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - # Setup logging, we only want one process per machine to log things on the screen. - logger.setLevel(logging.INFO if jax.process_index() == 0 else logging.ERROR) - if jax.process_index() == 0: - transformers.utils.logging.set_verbosity_info() - else: - transformers.utils.logging.set_verbosity_error() - - # Load the tokenizer and add the placeholder token as a additional special token - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Add the placeholder token in tokenizer - num_added_tokens = tokenizer.add_tokens(args.placeholder_token) - if num_added_tokens == 0: - raise ValueError( - f"The tokenizer already contains the token {args.placeholder_token}. Please pass a different" - " `placeholder_token` that is not already in the tokenizer." - ) - - # Convert the initializer_token, placeholder_token to ids - token_ids = tokenizer.encode(args.initializer_token, add_special_tokens=False) - # Check if initializer_token is a single token or a sequence of tokens - if len(token_ids) > 1: - raise ValueError("The initializer token must be a single token.") - - initializer_token_id = token_ids[0] - placeholder_token_id = tokenizer.convert_tokens_to_ids(args.placeholder_token) - - # Load models and create wrapper for stable diffusion - text_encoder = FlaxCLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - vae, vae_params = FlaxAutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae") - unet, unet_params = FlaxUNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet") - - # Create sampling rng - rng = jax.random.PRNGKey(args.seed) - rng, _ = jax.random.split(rng) - # Resize the token embeddings as we are adding new special tokens to the tokenizer - text_encoder = resize_token_embeddings( - text_encoder, len(tokenizer), initializer_token_id, placeholder_token_id, rng - ) - original_token_embeds = text_encoder.params["text_model"]["embeddings"]["token_embedding"]["embedding"] - - train_dataset = TextualInversionDataset( - data_root=args.train_data_dir, - tokenizer=tokenizer, - size=args.resolution, - placeholder_token=args.placeholder_token, - repeats=args.repeats, - learnable_property=args.learnable_property, - center_crop=args.center_crop, - set="train", - ) - - def collate_fn(examples): - pixel_values = torch.stack([example["pixel_values"] for example in examples]) - input_ids = torch.stack([example["input_ids"] for example in examples]) - - batch = {"pixel_values": pixel_values, "input_ids": input_ids} - batch = {k: v.numpy() for k, v in batch.items()} - - return batch - - total_train_batch_size = args.train_batch_size * jax.local_device_count() - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=total_train_batch_size, shuffle=True, drop_last=True, collate_fn=collate_fn - ) - - # Optimization - if args.scale_lr: - args.learning_rate = args.learning_rate * total_train_batch_size - - constant_scheduler = optax.constant_schedule(args.learning_rate) - - optimizer = optax.adamw( - learning_rate=constant_scheduler, - b1=args.adam_beta1, - b2=args.adam_beta2, - eps=args.adam_epsilon, - weight_decay=args.adam_weight_decay, - ) - - def 
create_mask(params, label_fn): - def _map(params, mask, label_fn): - for k in params: - if label_fn(k): - mask[k] = "token_embedding" - else: - if isinstance(params[k], dict): - mask[k] = {} - _map(params[k], mask[k], label_fn) - else: - mask[k] = "zero" - - mask = {} - _map(params, mask, label_fn) - return mask - - def zero_grads(): - # from https://github.com/deepmind/optax/issues/159#issuecomment-896459491 - def init_fn(_): - return () - - def update_fn(updates, state, params=None): - return jax.tree_util.tree_map(jnp.zeros_like, updates), () - - return optax.GradientTransformation(init_fn, update_fn) - - # Zero out gradients of layers other than the token embedding layer - tx = optax.multi_transform( - {"token_embedding": optimizer, "zero": zero_grads()}, - create_mask(text_encoder.params, lambda s: s == "token_embedding"), - ) - - state = train_state.TrainState.create(apply_fn=text_encoder.__call__, params=text_encoder.params, tx=tx) - - noise_scheduler = FlaxDDPMScheduler( - beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000 - ) - noise_scheduler_state = noise_scheduler.create_state() - - # Initialize our training - train_rngs = jax.random.split(rng, jax.local_device_count()) - - # Define gradient train step fn - def train_step(state, vae_params, unet_params, batch, train_rng): - dropout_rng, sample_rng, new_train_rng = jax.random.split(train_rng, 3) - - def compute_loss(params): - vae_outputs = vae.apply( - {"params": vae_params}, batch["pixel_values"], deterministic=True, method=vae.encode - ) - latents = vae_outputs.latent_dist.sample(sample_rng) - # (NHWC) -> (NCHW) - latents = jnp.transpose(latents, (0, 3, 1, 2)) - latents = latents * vae.config.scaling_factor - - noise_rng, timestep_rng = jax.random.split(sample_rng) - noise = jax.random.normal(noise_rng, latents.shape) - bsz = latents.shape[0] - timesteps = jax.random.randint( - timestep_rng, - (bsz,), - 0, - noise_scheduler.config.num_train_timesteps, - ) - noisy_latents = noise_scheduler.add_noise(noise_scheduler_state, latents, noise, timesteps) - encoder_hidden_states = state.apply_fn( - batch["input_ids"], params=params, dropout_rng=dropout_rng, train=True - )[0] - # Predict the noise residual and compute loss - model_pred = unet.apply( - {"params": unet_params}, noisy_latents, timesteps, encoder_hidden_states, train=False - ).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(noise_scheduler_state, latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - loss = (target - model_pred) ** 2 - loss = loss.mean() - - return loss - - grad_fn = jax.value_and_grad(compute_loss) - loss, grad = grad_fn(state.params) - grad = jax.lax.pmean(grad, "batch") - new_state = state.apply_gradients(grads=grad) - - # Keep the token embeddings fixed except the newly added embeddings for the concept, - # as we only want to optimize the concept embeddings - token_embeds = original_token_embeds.at[placeholder_token_id].set( - new_state.params["text_model"]["embeddings"]["token_embedding"]["embedding"][placeholder_token_id] - ) - new_state.params["text_model"]["embeddings"]["token_embedding"]["embedding"] = token_embeds - - metrics = {"loss": loss} - metrics = jax.lax.pmean(metrics, axis_name="batch") - return new_state, metrics, 
new_train_rng
-
-    # Create parallel version of the train and eval step
-    p_train_step = jax.pmap(train_step, "batch", donate_argnums=(0,))
-
-    # Replicate the train state on each device
-    state = jax_utils.replicate(state)
-    vae_params = jax_utils.replicate(vae_params)
-    unet_params = jax_utils.replicate(unet_params)
-
-    # Train!
-    num_update_steps_per_epoch = math.ceil(len(train_dataloader))
-
-    # Scheduler and math around the number of training steps.
-    if args.max_train_steps is None:
-        args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
-
-    args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
-
-    logger.info("***** Running training *****")
-    logger.info(f"  Num examples = {len(train_dataset)}")
-    logger.info(f"  Num Epochs = {args.num_train_epochs}")
-    logger.info(f"  Instantaneous batch size per device = {args.train_batch_size}")
-    logger.info(f"  Total train batch size (w. parallel & distributed) = {total_train_batch_size}")
-    logger.info(f"  Total optimization steps = {args.max_train_steps}")
-
-    global_step = 0
-
-    epochs = tqdm(range(args.num_train_epochs), desc=f"Epoch ... (1/{args.num_train_epochs})", position=0)
-    for epoch in epochs:
-        # ======================== Training ================================
-
-        train_metrics = []
-
-        steps_per_epoch = len(train_dataset) // total_train_batch_size
-        train_step_progress_bar = tqdm(total=steps_per_epoch, desc="Training...", position=1, leave=False)
-        # train
-        for batch in train_dataloader:
-            batch = shard(batch)
-            state, train_metric, train_rngs = p_train_step(state, vae_params, unet_params, batch, train_rngs)
-            train_metrics.append(train_metric)
-
-            train_step_progress_bar.update(1)
-            global_step += 1
-
-            if global_step >= args.max_train_steps:
-                break
-
-        train_metric = jax_utils.unreplicate(train_metric)
-
-        train_step_progress_bar.close()
-        epochs.write(f"Epoch... ({epoch + 1}/{args.num_train_epochs} | Loss: {train_metric['loss']})")
-
-    # Create the pipeline using the trained modules and save it.
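-    # Only one process should write to disk in a multi-host JAX run, hence
-    # the jax.process_index() == 0 guard below.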
- if jax.process_index() == 0: - scheduler = FlaxPNDMScheduler( - beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", skip_prk_steps=True - ) - safety_checker = FlaxStableDiffusionSafetyChecker.from_pretrained( - "CompVis/stable-diffusion-safety-checker", from_pt=True - ) - pipeline = FlaxStableDiffusionPipeline( - text_encoder=text_encoder, - vae=vae, - unet=unet, - tokenizer=tokenizer, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32"), - ) - - pipeline.save_pretrained( - args.output_dir, - params={ - "text_encoder": get_params_to_save(state.params), - "vae": get_params_to_save(vae_params), - "unet": get_params_to_save(unet_params), - "safety_checker": safety_checker.params, - }, - ) - - # Also save the newly trained embeddings - learned_embeds = get_params_to_save(state.params)["text_model"]["embeddings"]["token_embedding"]["embedding"][ - placeholder_token_id - ] - learned_embeds_dict = {args.placeholder_token: learned_embeds} - jnp.save(os.path.join(args.output_dir, "learned_embeds.npy"), learned_embeds_dict) - - if args.push_to_hub: - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/Andy1621/uniformer_image_detection/configs/ssd/ssd300_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/ssd/ssd300_coco.py deleted file mode 100644 index 75c5e4e5b81a320a7e6bd7bc31e7d5cf49a0b92d..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/ssd/ssd300_coco.py +++ /dev/null @@ -1,62 +0,0 @@ -_base_ = [ - '../_base_/models/ssd300.py', '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_2x.py', '../_base_/default_runtime.py' -] -# dataset settings -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile', to_float32=True), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='PhotoMetricDistortion', - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18), - dict( - type='Expand', - mean=img_norm_cfg['mean'], - to_rgb=img_norm_cfg['to_rgb'], - ratio_range=(1, 4)), - dict( - type='MinIoURandomCrop', - min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), - min_crop_size=0.3), - dict(type='Resize', img_scale=(300, 300), keep_ratio=False), - dict(type='Normalize', **img_norm_cfg), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(300, 300), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=False), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=8, - workers_per_gpu=3, - train=dict( - _delete_=True, - type='RepeatDataset', - times=5, - dataset=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_train2017.json', - img_prefix=data_root + 'train2017/', - pipeline=train_pipeline)), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# optimizer -optimizer = dict(type='SGD', lr=2e-3, momentum=0.9, weight_decay=5e-4) -optimizer_config = 
dict(_delete_=True)
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py
deleted file mode 100644
index f275e430d1b57c4d9df57387b8f3ae6f0ff68cf1..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py
+++ /dev/null
@@ -1,157 +0,0 @@
-import numpy as np
-import torch
-
-from ..builder import BBOX_SAMPLERS
-from .random_sampler import RandomSampler
-
-
-@BBOX_SAMPLERS.register_module()
-class IoUBalancedNegSampler(RandomSampler):
-    """IoU Balanced Sampling.
-
-    arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019)
-
-    Sampling proposals according to their IoU. `floor_fraction` of needed RoIs
-    are sampled randomly from proposals whose IoU is lower than `floor_thr`.
-    The others are sampled from proposals whose IoU is higher than
-    `floor_thr`. These proposals are sampled evenly from `num_bins` bins that
-    split the IoU range uniformly.
-
-    Args:
-        num (int): number of proposals.
-        pos_fraction (float): fraction of positive proposals.
-        floor_thr (float): threshold (minimum) IoU for IoU balanced sampling,
-            set to -1 if all using IoU balanced sampling.
-        floor_fraction (float): sampling fraction of proposals under floor_thr.
-        num_bins (int): number of bins in IoU balanced sampling.
-    """
-
-    def __init__(self,
-                 num,
-                 pos_fraction,
-                 floor_thr=-1,
-                 floor_fraction=0,
-                 num_bins=3,
-                 **kwargs):
-        super(IoUBalancedNegSampler, self).__init__(num, pos_fraction,
-                                                    **kwargs)
-        assert floor_thr >= 0 or floor_thr == -1
-        assert 0 <= floor_fraction <= 1
-        assert num_bins >= 1
-
-        self.floor_thr = floor_thr
-        self.floor_fraction = floor_fraction
-        self.num_bins = num_bins
-
-    def sample_via_interval(self, max_overlaps, full_set, num_expected):
-        """Sample according to the iou interval.
-
-        Args:
-            max_overlaps (torch.Tensor): IoU between bounding boxes and ground
-                truth boxes.
-            full_set (set(int)): A full set of indices of boxes.
-            num_expected (int): Number of expected samples.
-
-        Returns:
-            np.ndarray: Indices of samples
-        """
-        max_iou = max_overlaps.max()
-        iou_interval = (max_iou - self.floor_thr) / self.num_bins
-        per_num_expected = int(num_expected / self.num_bins)
-
-        sampled_inds = []
-        for i in range(self.num_bins):
-            start_iou = self.floor_thr + i * iou_interval
-            end_iou = self.floor_thr + (i + 1) * iou_interval
-            tmp_set = set(
-                np.where(
-                    np.logical_and(max_overlaps >= start_iou,
-                                   max_overlaps < end_iou))[0])
-            tmp_inds = list(tmp_set & full_set)
-            if len(tmp_inds) > per_num_expected:
-                tmp_sampled_set = self.random_choice(tmp_inds,
-                                                     per_num_expected)
-            else:
-                tmp_sampled_set = np.array(tmp_inds, dtype=np.int64)
-            sampled_inds.append(tmp_sampled_set)
-
-        sampled_inds = np.concatenate(sampled_inds)
-        if len(sampled_inds) < num_expected:
-            num_extra = num_expected - len(sampled_inds)
-            extra_inds = np.array(list(full_set - set(sampled_inds)))
-            if len(extra_inds) > num_extra:
-                extra_inds = self.random_choice(extra_inds, num_extra)
-            sampled_inds = np.concatenate([sampled_inds, extra_inds])
-
-        return sampled_inds
-
-    def _sample_neg(self, assign_result, num_expected, **kwargs):
-        """Sample negative boxes.
-
-        Args:
-            assign_result (:obj:`AssignResult`): The assigned results of boxes.
-            num_expected (int): The number of expected negative samples
-
-        Returns:
-            Tensor or ndarray: sampled indices.
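-
-        Example (illustrative numbers): with ``num_expected=8``,
-        ``floor_thr=0.1`` and ``floor_fraction=0.25``, ``int(8 * 0.75) = 6``
-        negatives are drawn (binned by IoU) from proposals with IoU >= 0.1,
-        and the remaining 2 are drawn randomly from proposals with IoU in
-        [0, 0.1).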
- """ - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= num_expected: - return neg_inds - else: - max_overlaps = assign_result.max_overlaps.cpu().numpy() - # balance sampling for negative samples - neg_set = set(neg_inds.cpu().numpy()) - - if self.floor_thr > 0: - floor_set = set( - np.where( - np.logical_and(max_overlaps >= 0, - max_overlaps < self.floor_thr))[0]) - iou_sampling_set = set( - np.where(max_overlaps >= self.floor_thr)[0]) - elif self.floor_thr == 0: - floor_set = set(np.where(max_overlaps == 0)[0]) - iou_sampling_set = set( - np.where(max_overlaps > self.floor_thr)[0]) - else: - floor_set = set() - iou_sampling_set = set( - np.where(max_overlaps > self.floor_thr)[0]) - # for sampling interval calculation - self.floor_thr = 0 - - floor_neg_inds = list(floor_set & neg_set) - iou_sampling_neg_inds = list(iou_sampling_set & neg_set) - num_expected_iou_sampling = int(num_expected * - (1 - self.floor_fraction)) - if len(iou_sampling_neg_inds) > num_expected_iou_sampling: - if self.num_bins >= 2: - iou_sampled_inds = self.sample_via_interval( - max_overlaps, set(iou_sampling_neg_inds), - num_expected_iou_sampling) - else: - iou_sampled_inds = self.random_choice( - iou_sampling_neg_inds, num_expected_iou_sampling) - else: - iou_sampled_inds = np.array( - iou_sampling_neg_inds, dtype=np.int) - num_expected_floor = num_expected - len(iou_sampled_inds) - if len(floor_neg_inds) > num_expected_floor: - sampled_floor_inds = self.random_choice( - floor_neg_inds, num_expected_floor) - else: - sampled_floor_inds = np.array(floor_neg_inds, dtype=np.int) - sampled_inds = np.concatenate( - (sampled_floor_inds, iou_sampled_inds)) - if len(sampled_inds) < num_expected: - num_extra = num_expected - len(sampled_inds) - extra_inds = np.array(list(neg_set - set(sampled_inds))) - if len(extra_inds) > num_extra: - extra_inds = self.random_choice(extra_inds, num_extra) - sampled_inds = np.concatenate((sampled_inds, extra_inds)) - sampled_inds = torch.from_numpy(sampled_inds).long().to( - assign_result.gt_inds.device) - return sampled_inds diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_769x769_40k_cityscapes.py deleted file mode 100644 index 5c5b94e5a27d7f902d4bdea7ef6c4ef0b816bb99..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_769x769_40k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/danet_r50-d8.py', - '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(align_corners=True), - auxiliary_head=dict(align_corners=True), - test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513))) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x512_40k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x512_40k_voc12aug.py deleted file mode 100644 index 70babc91c99eb99ee4f941b34ea886236531832e..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x512_40k_voc12aug.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './ocrnet_hr18_512x512_40k_voc12aug.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w18_small', - backbone=dict( 
- extra=dict( - stage1=dict(num_blocks=(2, )), - stage2=dict(num_blocks=(2, 2)), - stage3=dict(num_modules=3, num_blocks=(2, 2, 2)), - stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2))))) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/models_settings.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/models_settings.py deleted file mode 100644 index aecb7a89ab5e7de14ffd2dd3d81ebfda3741867b..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/models_settings.py +++ /dev/null @@ -1,219 +0,0 @@ -import json -import re -from pathlib import Path - -import yaml - -from modules import loaders, metadata_gguf, shared, ui - - -def get_fallback_settings(): - return { - 'wbits': 'None', - 'groupsize': 'None', - 'desc_act': False, - 'model_type': 'None', - 'max_seq_len': 2048, - 'n_ctx': 2048, - 'rope_freq_base': 0, - 'compress_pos_emb': 1, - 'truncation_length': shared.settings['truncation_length'], - 'skip_special_tokens': shared.settings['skip_special_tokens'], - 'custom_stopping_strings': shared.settings['custom_stopping_strings'], - } - - -def get_model_metadata(model): - model_settings = {} - - # Get settings from models/config.yaml and models/config-user.yaml - settings = shared.model_config - for pat in settings: - if re.match(pat.lower(), model.lower()): - for k in settings[pat]: - model_settings[k] = settings[pat][k] - - if 'loader' not in model_settings: - loader = infer_loader(model, model_settings) - if 'wbits' in model_settings and type(model_settings['wbits']) is int and model_settings['wbits'] > 0: - loader = 'AutoGPTQ' - - model_settings['loader'] = loader - - # Read GGUF metadata - if model_settings['loader'] in ['llama.cpp', 'llamacpp_HF', 'ctransformers']: - path = Path(f'{shared.args.model_dir}/{model}') - if path.is_file(): - model_file = path - else: - model_file = list(path.glob('*.gguf'))[0] - - metadata = metadata_gguf.load_metadata(model_file) - if 'llama.context_length' in metadata: - model_settings['n_ctx'] = metadata['llama.context_length'] - if 'llama.rope.scale_linear' in metadata: - model_settings['compress_pos_emb'] = metadata['llama.rope.scale_linear'] - if 'llama.rope.freq_base' in metadata: - model_settings['rope_freq_base'] = metadata['llama.rope.freq_base'] - - else: - # Read transformers metadata - path = Path(f'{shared.args.model_dir}/{model}/config.json') - if path.exists(): - metadata = json.loads(open(path, 'r').read()) - if 'max_position_embeddings' in metadata: - model_settings['truncation_length'] = metadata['max_position_embeddings'] - model_settings['max_seq_len'] = metadata['max_position_embeddings'] - - if 'rope_theta' in metadata: - model_settings['rope_freq_base'] = metadata['rope_theta'] - - if 'rope_scaling' in metadata and type(metadata['rope_scaling']) is dict and all(key in metadata['rope_scaling'] for key in ('type', 'factor')): - if metadata['rope_scaling']['type'] == 'linear': - model_settings['compress_pos_emb'] = metadata['rope_scaling']['factor'] - - if 'quantization_config' in metadata: - if 'bits' in metadata['quantization_config']: - model_settings['wbits'] = metadata['quantization_config']['bits'] - if 'group_size' in metadata['quantization_config']: - model_settings['groupsize'] = metadata['quantization_config']['group_size'] - if 'desc_act' in metadata['quantization_config']: - model_settings['desc_act'] = metadata['quantization_config']['desc_act'] - - # Read AutoGPTQ metadata - path = 
Path(f'{shared.args.model_dir}/{model}/quantize_config.json') - if path.exists(): - metadata = json.loads(open(path, 'r').read()) - if 'bits' in metadata: - model_settings['wbits'] = metadata['bits'] - if 'group_size' in metadata: - model_settings['groupsize'] = metadata['group_size'] - if 'desc_act' in metadata: - model_settings['desc_act'] = metadata['desc_act'] - - # Apply user settings from models/config-user.yaml - settings = shared.user_config - for pat in settings: - if re.match(pat.lower(), model.lower()): - for k in settings[pat]: - model_settings[k] = settings[pat][k] - - return model_settings - - -def infer_loader(model_name, model_settings): - path_to_model = Path(f'{shared.args.model_dir}/{model_name}') - if not path_to_model.exists(): - loader = None - elif (path_to_model / 'quantize_config.json').exists() or ('wbits' in model_settings and type(model_settings['wbits']) is int and model_settings['wbits'] > 0): - loader = 'AutoGPTQ' - elif (path_to_model / 'quant_config.json').exists() or re.match(r'.*-awq', model_name.lower()): - loader = 'AutoAWQ' - elif len(list(path_to_model.glob('*.gguf'))) > 0: - loader = 'llama.cpp' - elif re.match(r'.*\.gguf', model_name.lower()): - loader = 'llama.cpp' - elif re.match(r'.*rwkv.*\.pth', model_name.lower()): - loader = 'RWKV' - elif re.match(r'.*exl2', model_name.lower()): - loader = 'ExLlamav2_HF' - else: - loader = 'Transformers' - - return loader - - -# UI: update the command-line arguments based on the interface values -def update_model_parameters(state, initial=False): - elements = ui.list_model_elements() # the names of the parameters - gpu_memories = [] - - for i, element in enumerate(elements): - if element not in state: - continue - - value = state[element] - if element.startswith('gpu_memory'): - gpu_memories.append(value) - continue - - if initial and element in shared.provided_arguments: - continue - - # Setting null defaults - if element in ['wbits', 'groupsize', 'model_type'] and value == 'None': - value = vars(shared.args_defaults)[element] - elif element in ['cpu_memory'] and value == 0: - value = vars(shared.args_defaults)[element] - - # Making some simple conversions - if element in ['wbits', 'groupsize', 'pre_layer']: - value = int(value) - elif element == 'cpu_memory' and value is not None: - value = f"{value}MiB" - - if element in ['pre_layer']: - value = [value] if value > 0 else None - - setattr(shared.args, element, value) - - found_positive = False - for i in gpu_memories: - if i > 0: - found_positive = True - break - - if not (initial and vars(shared.args)['gpu_memory'] != vars(shared.args_defaults)['gpu_memory']): - if found_positive: - shared.args.gpu_memory = [f"{i}MiB" for i in gpu_memories] - else: - shared.args.gpu_memory = None - - -# UI: update the state variable with the model settings -def apply_model_settings_to_state(model, state): - model_settings = get_model_metadata(model) - if 'loader' in model_settings: - loader = model_settings.pop('loader') - - # If the user is using an alternative loader for the same model type, let them keep using it - if not (loader == 'AutoGPTQ' and state['loader'] in ['GPTQ-for-LLaMa', 'ExLlama', 'ExLlama_HF', 'ExLlamav2', 'ExLlamav2_HF']) and not (loader == 'llama.cpp' and state['loader'] in ['llamacpp_HF', 'ctransformers']): - state['loader'] = loader - - for k in model_settings: - if k in state: - if k in ['wbits', 'groupsize']: - state[k] = str(model_settings[k]) - else: - state[k] = model_settings[k] - - return state - - -# Save the settings for this model to 
models/config-user.yaml
-def save_model_settings(model, state):
-    if model == 'None':
-        yield ("Not saving the settings because no model is loaded.")
-        return
-
-    # Path is not a context manager; build it once and reuse it below.
-    p = Path(f'{shared.args.model_dir}/config-user.yaml')
-    if p.exists():
-        user_config = yaml.safe_load(open(p, 'r').read())
-    else:
-        user_config = {}
-
-    model_regex = model + '$'  # For exact matches
-    if model_regex not in user_config:
-        user_config[model_regex] = {}
-
-    for k in ui.list_model_elements():
-        if k == 'loader' or k in loaders.loaders_and_params[state['loader']]:
-            user_config[model_regex][k] = state[k]
-
-    shared.user_config = user_config
-
-    output = yaml.dump(user_config, sort_keys=False)
-    with open(p, 'w') as f:
-        f.write(output)
-
-    yield (f"Settings for {model} saved to {p}")
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/dpt_depth.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/dpt_depth.py
deleted file mode 100644
index 4e9aab5d2767dffea39da5b3f30e2798688216f1..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/dpt_depth.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .base_model import BaseModel
-from .blocks import (
-    FeatureFusionBlock,
-    FeatureFusionBlock_custom,
-    Interpolate,
-    _make_encoder,
-    forward_vit,
-)
-
-
-def _make_fusion_block(features, use_bn):
-    return FeatureFusionBlock_custom(
-        features,
-        nn.ReLU(False),
-        deconv=False,
-        bn=use_bn,
-        expand=False,
-        align_corners=True,
-    )
-
-
-class DPT(BaseModel):
-    def __init__(
-        self,
-        head,
-        features=256,
-        backbone="vitb_rn50_384",
-        readout="project",
-        channels_last=False,
-        use_bn=False,
-    ):
-
-        super(DPT, self).__init__()
-
-        self.channels_last = channels_last
-
-        hooks = {
-            "vitb_rn50_384": [0, 1, 8, 11],
-            "vitb16_384": [2, 5, 8, 11],
-            "vitl16_384": [5, 11, 17, 23],
-        }
-
-        # Instantiate backbone and reassemble blocks
-        self.pretrained, self.scratch = _make_encoder(
-            backbone,
-            features,
-            False,  # Set to true if you want to train from scratch, uses ImageNet weights
-            groups=1,
-            expand=False,
-            exportable=False,
-            hooks=hooks[backbone],
-            use_readout=readout,
-        )
-
-        self.scratch.refinenet1 = _make_fusion_block(features, use_bn)
-        self.scratch.refinenet2 = _make_fusion_block(features, use_bn)
-        self.scratch.refinenet3 = _make_fusion_block(features, use_bn)
-        self.scratch.refinenet4 = _make_fusion_block(features, use_bn)
-
-        self.scratch.output_conv = head
-
-
-    def forward(self, x):
-        if self.channels_last == True:
-            # contiguous() returns a new tensor, so keep the result
-            x = x.contiguous(memory_format=torch.channels_last)
-
-        layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x)
-
-        layer_1_rn = self.scratch.layer1_rn(layer_1)
-        layer_2_rn = self.scratch.layer2_rn(layer_2)
-        layer_3_rn = self.scratch.layer3_rn(layer_3)
-        layer_4_rn = self.scratch.layer4_rn(layer_4)
-
-        path_4 = self.scratch.refinenet4(layer_4_rn)
-        path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
-        path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
-        path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
-
-        out = self.scratch.output_conv(path_1)
-
-        return out
-
-
-class DPTDepthModel(DPT):
-    def __init__(self, path=None, non_negative=True, **kwargs):
-        features = kwargs["features"] if "features" in kwargs else 256
-
-        head = nn.Sequential(
-            nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1),
-            Interpolate(scale_factor=2, mode="bilinear", align_corners=True),
-            nn.Conv2d(features // 2, 32,
kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - super().__init__(head, **kwargs) - - if path is not None: - self.load(path) - - def forward(self, x): - return super().forward(x).squeeze(dim=1) - diff --git a/spaces/Anonymous-sub/Rerender/gmflow_module/utils/frame_utils.py b/spaces/Anonymous-sub/Rerender/gmflow_module/utils/frame_utils.py deleted file mode 100644 index e2142240fd5c495149e108151abbfdcc12337d9a..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/gmflow_module/utils/frame_utils.py +++ /dev/null @@ -1,131 +0,0 @@ -import numpy as np -from PIL import Image -from os.path import * -import re -import cv2 - -TAG_CHAR = np.array([202021.25], np.float32) - - -def readFlow(fn): - """ Read .flo file in Middlebury format""" - # Code adapted from: - # http://stackoverflow.com/questions/28013200/reading-middlebury-flow-files-with-python-bytes-array-numpy - - # WARNING: this will work on little-endian architectures (eg Intel x86) only! - # print 'fn = %s'%(fn) - with open(fn, 'rb') as f: - magic = np.fromfile(f, np.float32, count=1) - if 202021.25 != magic: - print('Magic number incorrect. Invalid .flo file') - return None - else: - w = np.fromfile(f, np.int32, count=1) - h = np.fromfile(f, np.int32, count=1) - # print 'Reading %d x %d flo file\n' % (w, h) - data = np.fromfile(f, np.float32, count=2 * int(w) * int(h)) - # Reshape testdata into 3D array (columns, rows, bands) - # The reshape here is for visualization, the original code is (w,h,2) - return np.resize(data, (int(h), int(w), 2)) - - -def readPFM(file): - file = open(file, 'rb') - - color = None - width = None - height = None - scale = None - endian = None - - header = file.readline().rstrip() - if header == b'PF': - color = True - elif header == b'Pf': - color = False - else: - raise Exception('Not a PFM file.') - - dim_match = re.match(rb'^(\d+)\s(\d+)\s$', file.readline()) - if dim_match: - width, height = map(int, dim_match.groups()) - else: - raise Exception('Malformed PFM header.') - - scale = float(file.readline().rstrip()) - if scale < 0: # little-endian - endian = '<' - scale = -scale - else: - endian = '>' # big-endian - - data = np.fromfile(file, endian + 'f') - shape = (height, width, 3) if color else (height, width) - - data = np.reshape(data, shape) - data = np.flipud(data) - return data - - -def writeFlow(filename, uv, v=None): - """ Write optical flow to file. - - If v is None, uv is assumed to contain both u and v channels, - stacked in depth. - Original code by Deqing Sun, adapted from Daniel Scharstein. 
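-
-    File layout, as written below: float32 magic 202021.25, int32 width,
-    int32 height, then float32 u/v values interleaved per pixel in
-    row-major order.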
- """ - nBands = 2 - - if v is None: - assert (uv.ndim == 3) - assert (uv.shape[2] == 2) - u = uv[:, :, 0] - v = uv[:, :, 1] - else: - u = uv - - assert (u.shape == v.shape) - height, width = u.shape - f = open(filename, 'wb') - # write the header - f.write(TAG_CHAR) - np.array(width).astype(np.int32).tofile(f) - np.array(height).astype(np.int32).tofile(f) - # arrange into matrix form - tmp = np.zeros((height, width * nBands)) - tmp[:, np.arange(width) * 2] = u - tmp[:, np.arange(width) * 2 + 1] = v - tmp.astype(np.float32).tofile(f) - f.close() - - -def readFlowKITTI(filename): - flow = cv2.imread(filename, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR) - flow = flow[:, :, ::-1].astype(np.float32) - flow, valid = flow[:, :, :2], flow[:, :, 2] - flow = (flow - 2 ** 15) / 64.0 - return flow, valid - - -def writeFlowKITTI(filename, uv): - uv = 64.0 * uv + 2 ** 15 - valid = np.ones([uv.shape[0], uv.shape[1], 1]) - uv = np.concatenate([uv, valid], axis=-1).astype(np.uint16) - cv2.imwrite(filename, uv[..., ::-1]) - - -def read_gen(file_name, pil=False): - ext = splitext(file_name)[-1] - if ext == '.png' or ext == '.jpeg' or ext == '.ppm' or ext == '.jpg': - return Image.open(file_name) - elif ext == '.bin' or ext == '.raw': - return np.load(file_name) - elif ext == '.flo': - return readFlow(file_name).astype(np.float32) - elif ext == '.pfm': - flow = readPFM(file_name).astype(np.float32) - if len(flow.shape) == 2: - return flow - else: - return flow[:, :, :-1] - return [] diff --git a/spaces/ArkanDash/rvc-models-new/lib/infer_pack/models_onnx.py b/spaces/ArkanDash/rvc-models-new/lib/infer_pack/models_onnx.py deleted file mode 100644 index 963e67b29f828e9fdd096397952054fe77cf3d10..0000000000000000000000000000000000000000 --- a/spaces/ArkanDash/rvc-models-new/lib/infer_pack/models_onnx.py +++ /dev/null @@ -1,819 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = 
self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - 
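-        # Reparameterization trick: z = m + eps * exp(logs) with eps ~ N(0, I)
-        # is a differentiable sample from N(m, exp(logs)^2), masked to valid
-        # timesteps by x_mask.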
return z, m, logs, x_mask
-
-    def remove_weight_norm(self):
-        self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
-    def __init__(
-        self,
-        initial_channel,
-        resblock,
-        resblock_kernel_sizes,
-        resblock_dilation_sizes,
-        upsample_rates,
-        upsample_initial_channel,
-        upsample_kernel_sizes,
-        gin_channels=0,
-    ):
-        super(Generator, self).__init__()
-        self.num_kernels = len(resblock_kernel_sizes)
-        self.num_upsamples = len(upsample_rates)
-        self.conv_pre = Conv1d(
-            initial_channel, upsample_initial_channel, 7, 1, padding=3
-        )
-        resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
-        self.ups = nn.ModuleList()
-        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
-            self.ups.append(
-                weight_norm(
-                    ConvTranspose1d(
-                        upsample_initial_channel // (2**i),
-                        upsample_initial_channel // (2 ** (i + 1)),
-                        k,
-                        u,
-                        padding=(k - u) // 2,
-                    )
-                )
-            )
-
-        self.resblocks = nn.ModuleList()
-        for i in range(len(self.ups)):
-            ch = upsample_initial_channel // (2 ** (i + 1))
-            for j, (k, d) in enumerate(
-                zip(resblock_kernel_sizes, resblock_dilation_sizes)
-            ):
-                self.resblocks.append(resblock(ch, k, d))
-
-        self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
-        self.ups.apply(init_weights)
-
-        if gin_channels != 0:
-            self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
-    def forward(self, x, g=None):
-        x = self.conv_pre(x)
-        if g is not None:
-            x = x + self.cond(g)
-
-        for i in range(self.num_upsamples):
-            x = F.leaky_relu(x, modules.LRELU_SLOPE)
-            x = self.ups[i](x)
-            xs = None
-            for j in range(self.num_kernels):
-                if xs is None:
-                    xs = self.resblocks[i * self.num_kernels + j](x)
-                else:
-                    xs += self.resblocks[i * self.num_kernels + j](x)
-            x = xs / self.num_kernels
-        x = F.leaky_relu(x)
-        x = self.conv_post(x)
-        x = torch.tanh(x)
-
-        return x
-
-    def remove_weight_norm(self):
-        for l in self.ups:
-            remove_weight_norm(l)
-        for l in self.resblocks:
-            l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
-    """Definition of sine generator
-    SineGen(samp_rate, harmonic_num = 0,
-            sine_amp = 0.1, noise_std = 0.003,
-            voiced_threshold = 0,
-            flag_for_pulse=False)
-    samp_rate: sampling rate in Hz
-    harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine-waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
-    Note: when flag_for_pulse is True, the first time step of a voiced
-        segment is always sin(np.pi) or cos(0)
-    """
-
-    def __init__(
-        self,
-        samp_rate,
-        harmonic_num=0,
-        sine_amp=0.1,
-        noise_std=0.003,
-        voiced_threshold=0,
-        flag_for_pulse=False,
-    ):
-        super(SineGen, self).__init__()
-        self.sine_amp = sine_amp
-        self.noise_std = noise_std
-        self.harmonic_num = harmonic_num
-        self.dim = self.harmonic_num + 1
-        self.sampling_rate = samp_rate
-        self.voiced_threshold = voiced_threshold
-
-    def _f02uv(self, f0):
-        # generate uv signal
-        uv = torch.ones_like(f0)
-        uv = uv * (f0 > self.voiced_threshold)
-        return uv
-
-    def forward(self, f0, upp):
-        """sine_tensor, uv = forward(f0)
-        input F0: tensor(batchsize=1, length, dim=1)
-            f0 for unvoiced steps should be 0
-        output sine_tensor: tensor(batchsize=1, length, dim)
-        output uv: tensor(batchsize=1, length, 1)
-        """
-        with torch.no_grad():
-            f0 = f0[:, None].transpose(1, 2)
-            f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
-            # fundamental component
-            f0_buf[:, :, 0] = f0[:, :, 0]
-            for idx in np.arange(self.harmonic_num):
-                f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
-                    idx + 2
-                )  # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # the % 1 means the products over n_har can no longer be optimized afterwards
-            rand_ini = torch.rand(
-                f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
-            )
-            rand_ini[:, 0] = 0
-            rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # applying % 1 here would make the cumsum below impossible to optimize further
-            tmp_over_one *= upp
-            tmp_over_one = F.interpolate(
-                tmp_over_one.transpose(2, 1),
-                scale_factor=upp,
-                mode="linear",
-                align_corners=True,
-            ).transpose(2, 1)
-            rad_values = F.interpolate(
-                rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
-            ).transpose(
-                2, 1
-            )
-            tmp_over_one %= 1
-            tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
-            cumsum_shift = torch.zeros_like(rad_values)
-            cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
-            sine_waves = torch.sin(
-                torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
-            )
-            sine_waves = sine_waves * self.sine_amp
-            uv = self._f02uv(f0)
-            uv = F.interpolate(
-                uv.transpose(2, 1), scale_factor=upp, mode="nearest"
-            ).transpose(2, 1)
-            noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
-            noise = noise_amp * torch.randn_like(sine_waves)
-            sine_waves = sine_waves * uv + noise
-        return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
-    """SourceModule for hn-nsf
-    SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
-                 add_noise_std=0.003, voiced_threshod=0)
-    sampling_rate: sampling_rate in Hz
-    harmonic_num: number of harmonics above F0 (default: 0)
-    sine_amp: amplitude of sine source signal (default: 0.1)
-    add_noise_std: std of additive Gaussian noise (default: 0.003)
-        note that amplitude of noise in unvoiced is decided
-        by sine_amp
-    voiced_threshod: threshold to set U/V given F0 (default: 0)
-    Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
-    F0_sampled (batchsize, length, 1)
-    Sine_source (batchsize, length, 1)
-    noise_source (batchsize, length 1)
-    uv (batchsize, length, 1)
-    """
-
-    def __init__(
-        self,
-        sampling_rate,
-        harmonic_num=0,
-        sine_amp=0.1,
-        add_noise_std=0.003,
-        voiced_threshod=0,
-        is_half=True,
-    ):
-        super(SourceModuleHnNSF, self).__init__()
-
-        self.sine_amp = sine_amp
-        self.noise_std = add_noise_std
-        self.is_half = is_half
-        # to produce sine waveforms
-        self.l_sin_gen = SineGen(
-            sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
-        )
-
-        # to merge source harmonics into a single excitation
-        self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
-        self.l_tanh = torch.nn.Tanh()
-
-    def forward(self, x, upp=None):
-        sine_wavs, uv, _ = self.l_sin_gen(x, upp)
-        if self.is_half:
-            sine_wavs = sine_wavs.half()
-        sine_merge = self.l_tanh(self.l_linear(sine_wavs))
-        return sine_merge, None, None  # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
-    def __init__(
-        self,
-        initial_channel,
-        resblock,
-        resblock_kernel_sizes,
-        resblock_dilation_sizes,
-        upsample_rates,
-        upsample_initial_channel,
-        upsample_kernel_sizes,
-        gin_channels,
-        sr,
-        is_half=False,
-    ):
-        super(GeneratorNSF, self).__init__()
-        self.num_kernels = len(resblock_kernel_sizes)
-        self.num_upsamples = len(upsample_rates)
-
-        self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
-        self.m_source = SourceModuleHnNSF(
-            sampling_rate=sr, harmonic_num=0, is_half=is_half
-        )
-        self.noise_convs = nn.ModuleList()
-        self.conv_pre = Conv1d(
initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - version, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if version == "v1": - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, 
- kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, 
use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Arsenii2023/Demo1/app.py b/spaces/Arsenii2023/Demo1/app.py deleted file mode 100644 index a699bc5b3c2e987102ca93e0ee28d601e0a93d02..0000000000000000000000000000000000000000 --- a/spaces/Arsenii2023/Demo1/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr - -def greet(name): - return "Hello " + name + "!!" 
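-
-# gr.Interface wires a Python callable to auto-generated UI components; the
-# "text" shorthands below are Gradio's string aliases for gr.Textbox.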
- -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/ArtificialWF/Voice-Recognition/README.md b/spaces/ArtificialWF/Voice-Recognition/README.md deleted file mode 100644 index 8c2b53ef253bd7729d8d8e9731f51d3867682da3..0000000000000000000000000000000000000000 --- a/spaces/ArtificialWF/Voice-Recognition/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Voice to Text -emoji: 🌖 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false - ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py deleted file mode 100644 index 8663097b447cdd80c52e2b2abde33a4736ddb9c2..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py +++ /dev/null @@ -1,155 +0,0 @@ -"""Utilities to lazily create and visit candidates found. - -Creating and visiting a candidate is a *very* costly operation. It involves -fetching, extracting, potentially building modules from source, and verifying -distribution metadata. It is therefore crucial for performance to keep -everything here lazy all the way down, so we only touch candidates that we -absolutely need, and not "download the world" when we only need one version of -something. -""" - -import functools -from collections.abc import Sequence -from typing import TYPE_CHECKING, Any, Callable, Iterator, Optional, Set, Tuple - -from pip._vendor.packaging.version import _BaseVersion - -from .base import Candidate - -IndexCandidateInfo = Tuple[_BaseVersion, Callable[[], Optional[Candidate]]] - -if TYPE_CHECKING: - SequenceCandidate = Sequence[Candidate] -else: - # For compatibility: Python before 3.9 does not support using [] on the - # Sequence class. - # - # >>> from collections.abc import Sequence - # >>> Sequence[str] - # Traceback (most recent call last): - # File "", line 1, in - # TypeError: 'ABCMeta' object is not subscriptable - # - # TODO: Remove this block after dropping Python 3.8 support. - SequenceCandidate = Sequence - - -def _iter_built(infos: Iterator[IndexCandidateInfo]) -> Iterator[Candidate]: - """Iterator for ``FoundCandidates``. - - This iterator is used when the package is not already installed. Candidates - from index come later in their normal ordering. - """ - versions_found: Set[_BaseVersion] = set() - for version, func in infos: - if version in versions_found: - continue - candidate = func() - if candidate is None: - continue - yield candidate - versions_found.add(version) - - -def _iter_built_with_prepended( - installed: Candidate, infos: Iterator[IndexCandidateInfo] -) -> Iterator[Candidate]: - """Iterator for ``FoundCandidates``. - - This iterator is used when the resolver prefers the already-installed - candidate and NOT to upgrade. The installed candidate is therefore - always yielded first, and candidates from index come later in their - normal ordering, except skipped when the version is already installed. 
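-
-    For example, if version 1.0 is installed, the installed candidate is
-    yielded first and any index candidate for 1.0 is skipped as a duplicate.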
- """ - yield installed - versions_found: Set[_BaseVersion] = {installed.version} - for version, func in infos: - if version in versions_found: - continue - candidate = func() - if candidate is None: - continue - yield candidate - versions_found.add(version) - - -def _iter_built_with_inserted( - installed: Candidate, infos: Iterator[IndexCandidateInfo] -) -> Iterator[Candidate]: - """Iterator for ``FoundCandidates``. - - This iterator is used when the resolver prefers to upgrade an - already-installed package. Candidates from index are returned in their - normal ordering, except replaced when the version is already installed. - - The implementation iterates through and yields other candidates, inserting - the installed candidate exactly once before we start yielding older or - equivalent candidates, or after all other candidates if they are all newer. - """ - versions_found: Set[_BaseVersion] = set() - for version, func in infos: - if version in versions_found: - continue - # If the installed candidate is better, yield it first. - if installed.version >= version: - yield installed - versions_found.add(installed.version) - candidate = func() - if candidate is None: - continue - yield candidate - versions_found.add(version) - - # If the installed candidate is older than all other candidates. - if installed.version not in versions_found: - yield installed - - -class FoundCandidates(SequenceCandidate): - """A lazy sequence to provide candidates to the resolver. - - The intended usage is to return this from `find_matches()` so the resolver - can iterate through the sequence multiple times, but only access the index - page when remote packages are actually needed. This improve performances - when suitable candidates are already installed on disk. - """ - - def __init__( - self, - get_infos: Callable[[], Iterator[IndexCandidateInfo]], - installed: Optional[Candidate], - prefers_installed: bool, - incompatible_ids: Set[int], - ): - self._get_infos = get_infos - self._installed = installed - self._prefers_installed = prefers_installed - self._incompatible_ids = incompatible_ids - - def __getitem__(self, index: Any) -> Any: - # Implemented to satisfy the ABC check. This is not needed by the - # resolver, and should not be used by the provider either (for - # performance reasons). - raise NotImplementedError("don't do this") - - def __iter__(self) -> Iterator[Candidate]: - infos = self._get_infos() - if not self._installed: - iterator = _iter_built(infos) - elif self._prefers_installed: - iterator = _iter_built_with_prepended(self._installed, infos) - else: - iterator = _iter_built_with_inserted(self._installed, infos) - return (c for c in iterator if id(c) not in self._incompatible_ids) - - def __len__(self) -> int: - # Implemented to satisfy the ABC check. This is not needed by the - # resolver, and should not be used by the provider either (for - # performance reasons). 
- raise NotImplementedError("don't do this") - - @functools.lru_cache(maxsize=1) - def __bool__(self) -> bool: - if self._prefers_installed and self._installed: - return True - return any(self) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/specifiers.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/specifiers.py deleted file mode 100644 index 0e218a6f9f75ea2060a8b08d1f1a043fdad68df8..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/specifiers.py +++ /dev/null @@ -1,802 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import abc -import functools -import itertools -import re -import warnings -from typing import ( - Callable, - Dict, - Iterable, - Iterator, - List, - Optional, - Pattern, - Set, - Tuple, - TypeVar, - Union, -) - -from .utils import canonicalize_version -from .version import LegacyVersion, Version, parse - -ParsedVersion = Union[Version, LegacyVersion] -UnparsedVersion = Union[Version, LegacyVersion, str] -VersionTypeVar = TypeVar("VersionTypeVar", bound=UnparsedVersion) -CallableOperator = Callable[[ParsedVersion, str], bool] - - -class InvalidSpecifier(ValueError): - """ - An invalid specifier was found, users should refer to PEP 440. - """ - - -class BaseSpecifier(metaclass=abc.ABCMeta): - @abc.abstractmethod - def __str__(self) -> str: - """ - Returns the str representation of this Specifier like object. This - should be representative of the Specifier itself. - """ - - @abc.abstractmethod - def __hash__(self) -> int: - """ - Returns a hash value for this Specifier like object. - """ - - @abc.abstractmethod - def __eq__(self, other: object) -> bool: - """ - Returns a boolean representing whether or not the two Specifier like - objects are equal. - """ - - @abc.abstractproperty - def prereleases(self) -> Optional[bool]: - """ - Returns whether or not pre-releases as a whole are allowed by this - specifier. - """ - - @prereleases.setter - def prereleases(self, value: bool) -> None: - """ - Sets whether or not pre-releases as a whole are allowed by this - specifier. - """ - - @abc.abstractmethod - def contains(self, item: str, prereleases: Optional[bool] = None) -> bool: - """ - Determines if the given item is contained within this specifier. - """ - - @abc.abstractmethod - def filter( - self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None - ) -> Iterable[VersionTypeVar]: - """ - Takes an iterable of items and filters them so that only items which - are contained within this specifier are allowed in it. 
- """ - - -class _IndividualSpecifier(BaseSpecifier): - - _operators: Dict[str, str] = {} - _regex: Pattern[str] - - def __init__(self, spec: str = "", prereleases: Optional[bool] = None) -> None: - match = self._regex.search(spec) - if not match: - raise InvalidSpecifier(f"Invalid specifier: '{spec}'") - - self._spec: Tuple[str, str] = ( - match.group("operator").strip(), - match.group("version").strip(), - ) - - # Store whether or not this Specifier should accept prereleases - self._prereleases = prereleases - - def __repr__(self) -> str: - pre = ( - f", prereleases={self.prereleases!r}" - if self._prereleases is not None - else "" - ) - - return f"<{self.__class__.__name__}({str(self)!r}{pre})>" - - def __str__(self) -> str: - return "{}{}".format(*self._spec) - - @property - def _canonical_spec(self) -> Tuple[str, str]: - return self._spec[0], canonicalize_version(self._spec[1]) - - def __hash__(self) -> int: - return hash(self._canonical_spec) - - def __eq__(self, other: object) -> bool: - if isinstance(other, str): - try: - other = self.__class__(str(other)) - except InvalidSpecifier: - return NotImplemented - elif not isinstance(other, self.__class__): - return NotImplemented - - return self._canonical_spec == other._canonical_spec - - def _get_operator(self, op: str) -> CallableOperator: - operator_callable: CallableOperator = getattr( - self, f"_compare_{self._operators[op]}" - ) - return operator_callable - - def _coerce_version(self, version: UnparsedVersion) -> ParsedVersion: - if not isinstance(version, (LegacyVersion, Version)): - version = parse(version) - return version - - @property - def operator(self) -> str: - return self._spec[0] - - @property - def version(self) -> str: - return self._spec[1] - - @property - def prereleases(self) -> Optional[bool]: - return self._prereleases - - @prereleases.setter - def prereleases(self, value: bool) -> None: - self._prereleases = value - - def __contains__(self, item: str) -> bool: - return self.contains(item) - - def contains( - self, item: UnparsedVersion, prereleases: Optional[bool] = None - ) -> bool: - - # Determine if prereleases are to be allowed or not. - if prereleases is None: - prereleases = self.prereleases - - # Normalize item to a Version or LegacyVersion, this allows us to have - # a shortcut for ``"2.0" in Specifier(">=2") - normalized_item = self._coerce_version(item) - - # Determine if we should be supporting prereleases in this specifier - # or not, if we do not support prereleases than we can short circuit - # logic if this version is a prereleases. - if normalized_item.is_prerelease and not prereleases: - return False - - # Actually do the comparison to determine if this item is contained - # within this Specifier or not. - operator_callable: CallableOperator = self._get_operator(self.operator) - return operator_callable(normalized_item, self.version) - - def filter( - self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None - ) -> Iterable[VersionTypeVar]: - - yielded = False - found_prereleases = [] - - kw = {"prereleases": prereleases if prereleases is not None else True} - - # Attempt to iterate over all the values in the iterable and if any of - # them match, yield them. - for version in iterable: - parsed_version = self._coerce_version(version) - - if self.contains(parsed_version, **kw): - # If our version is a prerelease, and we were not set to allow - # prereleases, then we'll store it for later in case nothing - # else matches this specifier. 
-                if parsed_version.is_prerelease and not (
-                    prereleases or self.prereleases
-                ):
-                    found_prereleases.append(version)
-                # Either this is not a prerelease, or we should have been
-                # accepting prereleases from the beginning.
-                else:
-                    yielded = True
-                    yield version
-
-        # Now that we've iterated over everything, determine if we've yielded
-        # any values, and if we have not and we have any prereleases stored up
-        # then we will go ahead and yield the prereleases.
-        if not yielded and found_prereleases:
-            for version in found_prereleases:
-                yield version
-
-
-class LegacySpecifier(_IndividualSpecifier):
-
-    _regex_str = r"""
-        (?P<operator>(==|!=|<=|>=|<|>))
-        \s*
-        (?P<version>
-            [^,;\s)]* # Since this is a "legacy" specifier, and the version
-                      # string can be just about anything, we match everything
-                      # except for whitespace, a semi-colon for marker support,
-                      # a closing paren since versions can be enclosed in
-                      # them, and a comma since it's a version separator.
-        )
-        """
-
-    _regex = re.compile(r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE)
-
-    _operators = {
-        "==": "equal",
-        "!=": "not_equal",
-        "<=": "less_than_equal",
-        ">=": "greater_than_equal",
-        "<": "less_than",
-        ">": "greater_than",
-    }
-
-    def __init__(self, spec: str = "", prereleases: Optional[bool] = None) -> None:
-        super().__init__(spec, prereleases)
-
-        warnings.warn(
-            "Creating a LegacyVersion has been deprecated and will be "
-            "removed in the next major release",
-            DeprecationWarning,
-        )
-
-    def _coerce_version(self, version: UnparsedVersion) -> LegacyVersion:
-        if not isinstance(version, LegacyVersion):
-            version = LegacyVersion(str(version))
-        return version
-
-    def _compare_equal(self, prospective: LegacyVersion, spec: str) -> bool:
-        return prospective == self._coerce_version(spec)
-
-    def _compare_not_equal(self, prospective: LegacyVersion, spec: str) -> bool:
-        return prospective != self._coerce_version(spec)
-
-    def _compare_less_than_equal(self, prospective: LegacyVersion, spec: str) -> bool:
-        return prospective <= self._coerce_version(spec)
-
-    def _compare_greater_than_equal(
-        self, prospective: LegacyVersion, spec: str
-    ) -> bool:
-        return prospective >= self._coerce_version(spec)
-
-    def _compare_less_than(self, prospective: LegacyVersion, spec: str) -> bool:
-        return prospective < self._coerce_version(spec)
-
-    def _compare_greater_than(self, prospective: LegacyVersion, spec: str) -> bool:
-        return prospective > self._coerce_version(spec)
-
-
-def _require_version_compare(
-    fn: Callable[["Specifier", ParsedVersion, str], bool]
-) -> Callable[["Specifier", ParsedVersion, str], bool]:
-    @functools.wraps(fn)
-    def wrapped(self: "Specifier", prospective: ParsedVersion, spec: str) -> bool:
-        if not isinstance(prospective, Version):
-            return False
-        return fn(self, prospective, spec)
-
-    return wrapped
-
-
-class Specifier(_IndividualSpecifier):
-
-    _regex_str = r"""
-        (?P<operator>(~=|==|!=|<=|>=|<|>|===))
-        (?P<version>
-            (?:
-                # The identity operators allow for an escape hatch that will
-                # do an exact string match of the version you wish to install.
-                # This will not be parsed by PEP 440 and we cannot determine
-                # any semantic meaning from it. This operator is discouraged
-                # but included entirely as an escape hatch.
-                (?<====)  # Only match for the identity operator
-                \s*
-                [^\s]*    # We just match everything, except for whitespace
-                          # since we are only testing for strict identity.
-            )
-            |
-            (?:
-                # The (non)equality operators allow for wild card and local
-                # versions to be specified so we have to define these two
-                # operators separately to enable that.
-                (?<===|!=)            # Only match for equals and not equals
-
-                \s*
-                v?
-                (?:[0-9]+!)?          # epoch
-                [0-9]+(?:\.[0-9]+)*   # release
-                (?:                   # pre release
-                    [-_\.]?
-                    (a|b|c|rc|alpha|beta|pre|preview)
-                    [-_\.]?
-                    [0-9]*
-                )?
-                (?:                   # post release
-                    (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
-                )?
-
-                # You cannot use a wild card and a dev or local version
-                # together so group them with a | and make them optional.
-                (?:
-                    (?:[-_\.]?dev[-_\.]?[0-9]*)?         # dev release
-                    (?:\+[a-z0-9]+(?:[-_\.][a-z0-9]+)*)? # local
-                    |
-                    \.\*  # Wild card syntax of .*
-                )?
-            )
-            |
-            (?:
-                # The compatible operator requires at least two digits in the
-                # release segment.
-                (?<=~=)               # Only match for the compatible operator
-
-                \s*
-                v?
-                (?:[0-9]+!)?          # epoch
-                [0-9]+(?:\.[0-9]+)+   # release  (We have a + instead of a *)
-                (?:                   # pre release
-                    [-_\.]?
-                    (a|b|c|rc|alpha|beta|pre|preview)
-                    [-_\.]?
-                    [0-9]*
-                )?
-                (?:                                   # post release
-                    (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
-                )?
-                (?:[-_\.]?dev[-_\.]?[0-9]*)?          # dev release
-            )
-            |
-            (?:
-                # All other operators only allow a sub set of what the
-                # (non)equality operators do. Specifically they do not allow
-                # local versions to be specified nor do they allow the prefix
-                # matching wild cards.
-                (?<!==|!=|~=)         # We have special cases for these
-                                      # operators so we want to make sure they
-                                      # don't match here.
-
-                \s*
-                v?
-                (?:[0-9]+!)?          # epoch
-                [0-9]+(?:\.[0-9]+)*   # release
-                (?:                   # pre release
-                    [-_\.]?
-                    (a|b|c|rc|alpha|beta|pre|preview)
-                    [-_\.]?
-                    [0-9]*
-                )?
-                (?:                                   # post release
-                    (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
-                )?
-                (?:[-_\.]?dev[-_\.]?[0-9]*)?          # dev release
-            )
-        )
-        """
-
-    _regex = re.compile(r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE)
-
-    _operators = {
-        "~=": "compatible",
-        "==": "equal",
-        "!=": "not_equal",
-        "<=": "less_than_equal",
-        ">=": "greater_than_equal",
-        "<": "less_than",
-        ">": "greater_than",
-        "===": "arbitrary",
-    }
-
-    @_require_version_compare
-    def _compare_compatible(self, prospective: ParsedVersion, spec: str) -> bool:
-
-        # Compatible releases have an equivalent combination of >= and ==. That
-        # is that ~=2.2 is equivalent to >=2.2,==2.*. This allows us to
-        # implement this in terms of the other specifiers instead of
-        # implementing it ourselves. The only thing we need to do is construct
-        # the other specifiers.
-
-        # We want everything but the last item in the version, but we want to
-        # ignore suffix segments.
-        prefix = ".".join(
-            list(itertools.takewhile(_is_not_suffix, _version_split(spec)))[:-1]
-        )
-
-        # Add the prefix notation to the end of our string
-        prefix += ".*"
-
-        return self._get_operator(">=")(prospective, spec) and self._get_operator("==")(
-            prospective, prefix
-        )
-
-    @_require_version_compare
-    def _compare_equal(self, prospective: ParsedVersion, spec: str) -> bool:
-
-        # We need special logic to handle prefix matching
-        if spec.endswith(".*"):
-            # In the case of prefix matching we want to ignore local segment.
-            prospective = Version(prospective.public)
-            # Split the spec out by dots, and pretend that there is an implicit
-            # dot in between a release segment and a pre-release segment.
-            split_spec = _version_split(spec[:-2])  # Remove the trailing .*
-
-            # Split the prospective version out by dots, and pretend that there
-            # is an implicit dot in between a release segment and a pre-release
-            # segment.
-            split_prospective = _version_split(str(prospective))
-
-            # Shorten the prospective version to be the same length as the spec
-            # so that we can determine if the specifier is a prefix of the
-            # prospective version or not.
-            shortened_prospective = split_prospective[: len(split_spec)]
-
-            # Pad out our two sides with zeros so that they both equal the same
-            # length.
- padded_spec, padded_prospective = _pad_version( - split_spec, shortened_prospective - ) - - return padded_prospective == padded_spec - else: - # Convert our spec string into a Version - spec_version = Version(spec) - - # If the specifier does not have a local segment, then we want to - # act as if the prospective version also does not have a local - # segment. - if not spec_version.local: - prospective = Version(prospective.public) - - return prospective == spec_version - - @_require_version_compare - def _compare_not_equal(self, prospective: ParsedVersion, spec: str) -> bool: - return not self._compare_equal(prospective, spec) - - @_require_version_compare - def _compare_less_than_equal(self, prospective: ParsedVersion, spec: str) -> bool: - - # NB: Local version identifiers are NOT permitted in the version - # specifier, so local version labels can be universally removed from - # the prospective version. - return Version(prospective.public) <= Version(spec) - - @_require_version_compare - def _compare_greater_than_equal( - self, prospective: ParsedVersion, spec: str - ) -> bool: - - # NB: Local version identifiers are NOT permitted in the version - # specifier, so local version labels can be universally removed from - # the prospective version. - return Version(prospective.public) >= Version(spec) - - @_require_version_compare - def _compare_less_than(self, prospective: ParsedVersion, spec_str: str) -> bool: - - # Convert our spec to a Version instance, since we'll want to work with - # it as a version. - spec = Version(spec_str) - - # Check to see if the prospective version is less than the spec - # version. If it's not we can short circuit and just return False now - # instead of doing extra unneeded work. - if not prospective < spec: - return False - - # This special case is here so that, unless the specifier itself - # includes is a pre-release version, that we do not accept pre-release - # versions for the version mentioned in the specifier (e.g. <3.1 should - # not match 3.1.dev0, but should match 3.0.dev0). - if not spec.is_prerelease and prospective.is_prerelease: - if Version(prospective.base_version) == Version(spec.base_version): - return False - - # If we've gotten to here, it means that prospective version is both - # less than the spec version *and* it's not a pre-release of the same - # version in the spec. - return True - - @_require_version_compare - def _compare_greater_than(self, prospective: ParsedVersion, spec_str: str) -> bool: - - # Convert our spec to a Version instance, since we'll want to work with - # it as a version. - spec = Version(spec_str) - - # Check to see if the prospective version is greater than the spec - # version. If it's not we can short circuit and just return False now - # instead of doing extra unneeded work. - if not prospective > spec: - return False - - # This special case is here so that, unless the specifier itself - # includes is a post-release version, that we do not accept - # post-release versions for the version mentioned in the specifier - # (e.g. >3.1 should not match 3.0.post0, but should match 3.2.post0). - if not spec.is_postrelease and prospective.is_postrelease: - if Version(prospective.base_version) == Version(spec.base_version): - return False - - # Ensure that we do not allow a local version of the version mentioned - # in the specifier, which is technically greater than, to match. 
- if prospective.local is not None: - if Version(prospective.base_version) == Version(spec.base_version): - return False - - # If we've gotten to here, it means that prospective version is both - # greater than the spec version *and* it's not a pre-release of the - # same version in the spec. - return True - - def _compare_arbitrary(self, prospective: Version, spec: str) -> bool: - return str(prospective).lower() == str(spec).lower() - - @property - def prereleases(self) -> bool: - - # If there is an explicit prereleases set for this, then we'll just - # blindly use that. - if self._prereleases is not None: - return self._prereleases - - # Look at all of our specifiers and determine if they are inclusive - # operators, and if they are if they are including an explicit - # prerelease. - operator, version = self._spec - if operator in ["==", ">=", "<=", "~=", "==="]: - # The == specifier can include a trailing .*, if it does we - # want to remove before parsing. - if operator == "==" and version.endswith(".*"): - version = version[:-2] - - # Parse the version, and if it is a pre-release than this - # specifier allows pre-releases. - if parse(version).is_prerelease: - return True - - return False - - @prereleases.setter - def prereleases(self, value: bool) -> None: - self._prereleases = value - - -_prefix_regex = re.compile(r"^([0-9]+)((?:a|b|c|rc)[0-9]+)$") - - -def _version_split(version: str) -> List[str]: - result: List[str] = [] - for item in version.split("."): - match = _prefix_regex.search(item) - if match: - result.extend(match.groups()) - else: - result.append(item) - return result - - -def _is_not_suffix(segment: str) -> bool: - return not any( - segment.startswith(prefix) for prefix in ("dev", "a", "b", "rc", "post") - ) - - -def _pad_version(left: List[str], right: List[str]) -> Tuple[List[str], List[str]]: - left_split, right_split = [], [] - - # Get the release segment of our versions - left_split.append(list(itertools.takewhile(lambda x: x.isdigit(), left))) - right_split.append(list(itertools.takewhile(lambda x: x.isdigit(), right))) - - # Get the rest of our versions - left_split.append(left[len(left_split[0]) :]) - right_split.append(right[len(right_split[0]) :]) - - # Insert our padding - left_split.insert(1, ["0"] * max(0, len(right_split[0]) - len(left_split[0]))) - right_split.insert(1, ["0"] * max(0, len(left_split[0]) - len(right_split[0]))) - - return (list(itertools.chain(*left_split)), list(itertools.chain(*right_split))) - - -class SpecifierSet(BaseSpecifier): - def __init__( - self, specifiers: str = "", prereleases: Optional[bool] = None - ) -> None: - - # Split on , to break each individual specifier into it's own item, and - # strip each item to remove leading/trailing whitespace. - split_specifiers = [s.strip() for s in specifiers.split(",") if s.strip()] - - # Parsed each individual specifier, attempting first to make it a - # Specifier and falling back to a LegacySpecifier. - parsed: Set[_IndividualSpecifier] = set() - for specifier in split_specifiers: - try: - parsed.add(Specifier(specifier)) - except InvalidSpecifier: - parsed.add(LegacySpecifier(specifier)) - - # Turn our parsed specifiers into a frozen set and save them for later. - self._specs = frozenset(parsed) - - # Store our prereleases value so we can use it later to determine if - # we accept prereleases or not. 
-        self._prereleases = prereleases
-
-    def __repr__(self) -> str:
-        pre = (
-            f", prereleases={self.prereleases!r}"
-            if self._prereleases is not None
-            else ""
-        )
-
-        return f"<SpecifierSet({str(self)!r}{pre})>"
-
-    def __str__(self) -> str:
-        return ",".join(sorted(str(s) for s in self._specs))
-
-    def __hash__(self) -> int:
-        return hash(self._specs)
-
-    def __and__(self, other: Union["SpecifierSet", str]) -> "SpecifierSet":
-        if isinstance(other, str):
-            other = SpecifierSet(other)
-        elif not isinstance(other, SpecifierSet):
-            return NotImplemented
-
-        specifier = SpecifierSet()
-        specifier._specs = frozenset(self._specs | other._specs)
-
-        if self._prereleases is None and other._prereleases is not None:
-            specifier._prereleases = other._prereleases
-        elif self._prereleases is not None and other._prereleases is None:
-            specifier._prereleases = self._prereleases
-        elif self._prereleases == other._prereleases:
-            specifier._prereleases = self._prereleases
-        else:
-            raise ValueError(
-                "Cannot combine SpecifierSets with True and False prerelease "
-                "overrides."
-            )
-
-        return specifier
-
-    def __eq__(self, other: object) -> bool:
-        if isinstance(other, (str, _IndividualSpecifier)):
-            other = SpecifierSet(str(other))
-        elif not isinstance(other, SpecifierSet):
-            return NotImplemented
-
-        return self._specs == other._specs
-
-    def __len__(self) -> int:
-        return len(self._specs)
-
-    def __iter__(self) -> Iterator[_IndividualSpecifier]:
-        return iter(self._specs)
-
-    @property
-    def prereleases(self) -> Optional[bool]:
-
-        # If we have been given an explicit prerelease modifier, then we'll
-        # pass that through here.
-        if self._prereleases is not None:
-            return self._prereleases
-
-        # If we don't have any specifiers, and we don't have a forced value,
-        # then we'll just return None since we don't know if this should have
-        # pre-releases or not.
-        if not self._specs:
-            return None
-
-        # Otherwise we'll see if any of the given specifiers accept
-        # prereleases, if any of them do we'll return True, otherwise False.
-        return any(s.prereleases for s in self._specs)
-
-    @prereleases.setter
-    def prereleases(self, value: bool) -> None:
-        self._prereleases = value
-
-    def __contains__(self, item: UnparsedVersion) -> bool:
-        return self.contains(item)
-
-    def contains(
-        self, item: UnparsedVersion, prereleases: Optional[bool] = None
-    ) -> bool:
-
-        # Ensure that our item is a Version or LegacyVersion instance.
-        if not isinstance(item, (LegacyVersion, Version)):
-            item = parse(item)
-
-        # Determine if we're forcing a prerelease or not, if we're not forcing
-        # one for this particular filter call, then we'll use whatever the
-        # SpecifierSet thinks for whether or not we should support prereleases.
-        if prereleases is None:
-            prereleases = self.prereleases
-
-        # We can determine if we're going to allow pre-releases by looking to
-        # see if any of the underlying items supports them. If none of them do
-        # and this item is a pre-release then we do not allow it and we can
-        # short circuit that here.
-        # Note: This means that 1.0.dev1 would not be contained in something
-        # like >=1.0.devabc however it would be in >=1.0.debabc,>0.0.dev0
-        if not prereleases and item.is_prerelease:
-            return False
-
-        # We simply dispatch to the underlying specs here to make sure that the
-        # given version is contained within all of them.
-        # Note: This use of all() here means that an empty set of specifiers
-        # will always return True, this is an explicit design decision.
- return all(s.contains(item, prereleases=prereleases) for s in self._specs) - - def filter( - self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None - ) -> Iterable[VersionTypeVar]: - - # Determine if we're forcing a prerelease or not, if we're not forcing - # one for this particular filter call, then we'll use whatever the - # SpecifierSet thinks for whether or not we should support prereleases. - if prereleases is None: - prereleases = self.prereleases - - # If we have any specifiers, then we want to wrap our iterable in the - # filter method for each one, this will act as a logical AND amongst - # each specifier. - if self._specs: - for spec in self._specs: - iterable = spec.filter(iterable, prereleases=bool(prereleases)) - return iterable - # If we do not have any specifiers, then we need to have a rough filter - # which will filter out any pre-releases, unless there are no final - # releases, and which will filter out LegacyVersion in general. - else: - filtered: List[VersionTypeVar] = [] - found_prereleases: List[VersionTypeVar] = [] - - item: UnparsedVersion - parsed_version: Union[Version, LegacyVersion] - - for item in iterable: - # Ensure that we some kind of Version class for this item. - if not isinstance(item, (LegacyVersion, Version)): - parsed_version = parse(item) - else: - parsed_version = item - - # Filter out any item which is parsed as a LegacyVersion - if isinstance(parsed_version, LegacyVersion): - continue - - # Store any item which is a pre-release for later unless we've - # already found a final version or we are accepting prereleases - if parsed_version.is_prerelease and not prereleases: - if not filtered: - found_prereleases.append(item) - else: - filtered.append(item) - - # If we've found no items except for pre-releases, then we'll go - # ahead and use the pre-releases - if not filtered and found_prereleases and prereleases is None: - return found_prereleases - - return filtered diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/retinanet.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/retinanet.py deleted file mode 100644 index 83cfda4b6001750c676c22feb5e3560cba394140..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/retinanet.py +++ /dev/null @@ -1,53 +0,0 @@ -# -*- coding: utf-8 -*- - -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling.meta_arch import RetinaNet -from detectron2.modeling.anchor_generator import DefaultAnchorGenerator -from detectron2.modeling.backbone.fpn import LastLevelP6P7 -from detectron2.modeling.backbone import BasicStem, FPN, ResNet -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.matcher import Matcher -from detectron2.modeling.meta_arch.retinanet import RetinaNetHead - -model = L(RetinaNet)( - backbone=L(FPN)( - bottom_up=L(ResNet)( - stem=L(BasicStem)(in_channels=3, out_channels=64, norm="FrozenBN"), - stages=L(ResNet.make_default_stages)( - depth=50, - stride_in_1x1=True, - norm="FrozenBN", - ), - out_features=["res3", "res4", "res5"], - ), - in_features=["res3", "res4", "res5"], - out_channels=256, - top_block=L(LastLevelP6P7)(in_channels=2048, out_channels="${..out_channels}"), - ), - head=L(RetinaNetHead)( - # Shape for each input feature map - input_shape=[ShapeSpec(channels=256)] * 5, - num_classes="${..num_classes}", - 
conv_dims=[256, 256, 256, 256], - prior_prob=0.01, - num_anchors=9, - ), - anchor_generator=L(DefaultAnchorGenerator)( - sizes=[[x, x * 2 ** (1.0 / 3), x * 2 ** (2.0 / 3)] for x in [32, 64, 128, 256, 512]], - aspect_ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128], - offset=0.0, - ), - box2box_transform=L(Box2BoxTransform)(weights=[1.0, 1.0, 1.0, 1.0]), - anchor_matcher=L(Matcher)( - thresholds=[0.4, 0.5], labels=[0, -1, 1], allow_low_quality_matches=True - ), - num_classes=80, - head_in_features=["p3", "p4", "p5", "p6", "p7"], - focal_loss_alpha=0.25, - focal_loss_gamma=2.0, - pixel_mean=[103.530, 116.280, 123.675], - pixel_std=[1.0, 1.0, 1.0], - input_format="BGR", -) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rpn.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rpn.py deleted file mode 100644 index 99cd536d2f9880d2049390c45f73eb22335e1b82..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rpn.py +++ /dev/null @@ -1,533 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from typing import Dict, List, Optional, Tuple, Union -import torch -import torch.nn.functional as F -from torch import nn - -from detectron2.config import configurable -from detectron2.layers import Conv2d, ShapeSpec, cat -from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou -from detectron2.utils.events import get_event_storage -from detectron2.utils.memory import retry_if_cuda_oom -from detectron2.utils.registry import Registry - -from ..anchor_generator import build_anchor_generator -from ..box_regression import Box2BoxTransform, _dense_box_regression_loss -from ..matcher import Matcher -from ..sampling import subsample_labels -from .build import PROPOSAL_GENERATOR_REGISTRY -from .proposal_utils import find_top_rpn_proposals - -RPN_HEAD_REGISTRY = Registry("RPN_HEAD") -RPN_HEAD_REGISTRY.__doc__ = """ -Registry for RPN heads, which take feature maps and perform -objectness classification and bounding box regression for anchors. - -The registered object will be called with `obj(cfg, input_shape)`. -The call should return a `nn.Module` object. -""" - - -""" -Shape shorthand in this module: - - N: number of images in the minibatch - L: number of feature maps per image on which RPN is run - A: number of cell anchors (must be the same for all feature maps) - Hi, Wi: height and width of the i-th feature map - B: size of the box parameterization - -Naming convention: - - objectness: refers to the binary classification of an anchor as object vs. not object. - - deltas: refers to the 4-d (dx, dy, dw, dh) deltas that parameterize the box2box - transform (see :class:`box_regression.Box2BoxTransform`), or 5d for rotated boxes. - - pred_objectness_logits: predicted objectness scores in [-inf, +inf]; use - sigmoid(pred_objectness_logits) to estimate P(object). - - gt_labels: ground-truth binary classification labels for objectness - - pred_anchor_deltas: predicted box2box transform deltas - - gt_anchor_deltas: ground-truth box2box transform deltas -""" - - -def build_rpn_head(cfg, input_shape): - """ - Build an RPN head defined by `cfg.MODEL.RPN.HEAD_NAME`. 
- """ - name = cfg.MODEL.RPN.HEAD_NAME - return RPN_HEAD_REGISTRY.get(name)(cfg, input_shape) - - -@RPN_HEAD_REGISTRY.register() -class StandardRPNHead(nn.Module): - """ - Standard RPN classification and regression heads described in :paper:`Faster R-CNN`. - Uses a 3x3 conv to produce a shared hidden state from which one 1x1 conv predicts - objectness logits for each anchor and a second 1x1 conv predicts bounding-box deltas - specifying how to deform each anchor into an object proposal. - """ - - @configurable - def __init__( - self, *, in_channels: int, num_anchors: int, box_dim: int = 4, conv_dims: List[int] = (-1,) - ): - """ - NOTE: this interface is experimental. - - Args: - in_channels (int): number of input feature channels. When using multiple - input features, they must have the same number of channels. - num_anchors (int): number of anchors to predict for *each spatial position* - on the feature map. The total number of anchors for each - feature map will be `num_anchors * H * W`. - box_dim (int): dimension of a box, which is also the number of box regression - predictions to make for each anchor. An axis aligned box has - box_dim=4, while a rotated box has box_dim=5. - conv_dims (list[int]): a list of integers representing the output channels - of N conv layers. Set it to -1 to use the same number of output channels - as input channels. - """ - super().__init__() - cur_channels = in_channels - # Keeping the old variable names and structure for backwards compatiblity. - # Otherwise the old checkpoints will fail to load. - if len(conv_dims) == 1: - out_channels = cur_channels if conv_dims[0] == -1 else conv_dims[0] - # 3x3 conv for the hidden representation - self.conv = self._get_rpn_conv(cur_channels, out_channels) - cur_channels = out_channels - else: - self.conv = nn.Sequential() - for k, conv_dim in enumerate(conv_dims): - out_channels = cur_channels if conv_dim == -1 else conv_dim - if out_channels <= 0: - raise ValueError( - f"Conv output channels should be greater than 0. Got {out_channels}" - ) - conv = self._get_rpn_conv(cur_channels, out_channels) - self.conv.add_module(f"conv{k}", conv) - cur_channels = out_channels - # 1x1 conv for predicting objectness logits - self.objectness_logits = nn.Conv2d(cur_channels, num_anchors, kernel_size=1, stride=1) - # 1x1 conv for predicting box2box transform deltas - self.anchor_deltas = nn.Conv2d(cur_channels, num_anchors * box_dim, kernel_size=1, stride=1) - - # Keeping the order of weights initialization same for backwards compatiblility. - for layer in self.modules(): - if isinstance(layer, nn.Conv2d): - nn.init.normal_(layer.weight, std=0.01) - nn.init.constant_(layer.bias, 0) - - def _get_rpn_conv(self, in_channels, out_channels): - return Conv2d( - in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1, - activation=nn.ReLU(), - ) - - @classmethod - def from_config(cls, cfg, input_shape): - # Standard RPN is shared across levels: - in_channels = [s.channels for s in input_shape] - assert len(set(in_channels)) == 1, "Each level must have the same channel!" - in_channels = in_channels[0] - - # RPNHead should take the same input as anchor generator - # NOTE: it assumes that creating an anchor generator does not have unwanted side effect. 
- anchor_generator = build_anchor_generator(cfg, input_shape) - num_anchors = anchor_generator.num_anchors - box_dim = anchor_generator.box_dim - assert ( - len(set(num_anchors)) == 1 - ), "Each level must have the same number of anchors per spatial position" - return { - "in_channels": in_channels, - "num_anchors": num_anchors[0], - "box_dim": box_dim, - "conv_dims": cfg.MODEL.RPN.CONV_DIMS, - } - - def forward(self, features: List[torch.Tensor]): - """ - Args: - features (list[Tensor]): list of feature maps - - Returns: - list[Tensor]: A list of L elements. - Element i is a tensor of shape (N, A, Hi, Wi) representing - the predicted objectness logits for all anchors. A is the number of cell anchors. - list[Tensor]: A list of L elements. Element i is a tensor of shape - (N, A*box_dim, Hi, Wi) representing the predicted "deltas" used to transform anchors - to proposals. - """ - pred_objectness_logits = [] - pred_anchor_deltas = [] - for x in features: - t = self.conv(x) - pred_objectness_logits.append(self.objectness_logits(t)) - pred_anchor_deltas.append(self.anchor_deltas(t)) - return pred_objectness_logits, pred_anchor_deltas - - -@PROPOSAL_GENERATOR_REGISTRY.register() -class RPN(nn.Module): - """ - Region Proposal Network, introduced by :paper:`Faster R-CNN`. - """ - - @configurable - def __init__( - self, - *, - in_features: List[str], - head: nn.Module, - anchor_generator: nn.Module, - anchor_matcher: Matcher, - box2box_transform: Box2BoxTransform, - batch_size_per_image: int, - positive_fraction: float, - pre_nms_topk: Tuple[float, float], - post_nms_topk: Tuple[float, float], - nms_thresh: float = 0.7, - min_box_size: float = 0.0, - anchor_boundary_thresh: float = -1.0, - loss_weight: Union[float, Dict[str, float]] = 1.0, - box_reg_loss_type: str = "smooth_l1", - smooth_l1_beta: float = 0.0, - ): - """ - NOTE: this interface is experimental. - - Args: - in_features (list[str]): list of names of input features to use - head (nn.Module): a module that predicts logits and regression deltas - for each level from a list of per-level features - anchor_generator (nn.Module): a module that creates anchors from a - list of features. Usually an instance of :class:`AnchorGenerator` - anchor_matcher (Matcher): label the anchors by matching them with ground truth. - box2box_transform (Box2BoxTransform): defines the transform from anchors boxes to - instance boxes - batch_size_per_image (int): number of anchors per image to sample for training - positive_fraction (float): fraction of foreground anchors to sample for training - pre_nms_topk (tuple[float]): (train, test) that represents the - number of top k proposals to select before NMS, in - training and testing. - post_nms_topk (tuple[float]): (train, test) that represents the - number of top k proposals to select after NMS, in - training and testing. - nms_thresh (float): NMS threshold used to de-duplicate the predicted proposals - min_box_size (float): remove proposal boxes with any side smaller than this threshold, - in the unit of input image pixels - anchor_boundary_thresh (float): legacy option - loss_weight (float|dict): weights to use for losses. Can be single float for weighting - all rpn losses together, or a dict of individual weightings. Valid dict keys are: - "loss_rpn_cls" - applied to classification loss - "loss_rpn_loc" - applied to box regression loss - box_reg_loss_type (str): Loss type to use. Supported losses: "smooth_l1", "giou". - smooth_l1_beta (float): beta parameter for the smooth L1 regression loss. 
Default to - use L1 loss. Only used when `box_reg_loss_type` is "smooth_l1" - """ - super().__init__() - self.in_features = in_features - self.rpn_head = head - self.anchor_generator = anchor_generator - self.anchor_matcher = anchor_matcher - self.box2box_transform = box2box_transform - self.batch_size_per_image = batch_size_per_image - self.positive_fraction = positive_fraction - # Map from self.training state to train/test settings - self.pre_nms_topk = {True: pre_nms_topk[0], False: pre_nms_topk[1]} - self.post_nms_topk = {True: post_nms_topk[0], False: post_nms_topk[1]} - self.nms_thresh = nms_thresh - self.min_box_size = float(min_box_size) - self.anchor_boundary_thresh = anchor_boundary_thresh - if isinstance(loss_weight, float): - loss_weight = {"loss_rpn_cls": loss_weight, "loss_rpn_loc": loss_weight} - self.loss_weight = loss_weight - self.box_reg_loss_type = box_reg_loss_type - self.smooth_l1_beta = smooth_l1_beta - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - in_features = cfg.MODEL.RPN.IN_FEATURES - ret = { - "in_features": in_features, - "min_box_size": cfg.MODEL.PROPOSAL_GENERATOR.MIN_SIZE, - "nms_thresh": cfg.MODEL.RPN.NMS_THRESH, - "batch_size_per_image": cfg.MODEL.RPN.BATCH_SIZE_PER_IMAGE, - "positive_fraction": cfg.MODEL.RPN.POSITIVE_FRACTION, - "loss_weight": { - "loss_rpn_cls": cfg.MODEL.RPN.LOSS_WEIGHT, - "loss_rpn_loc": cfg.MODEL.RPN.BBOX_REG_LOSS_WEIGHT * cfg.MODEL.RPN.LOSS_WEIGHT, - }, - "anchor_boundary_thresh": cfg.MODEL.RPN.BOUNDARY_THRESH, - "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS), - "box_reg_loss_type": cfg.MODEL.RPN.BBOX_REG_LOSS_TYPE, - "smooth_l1_beta": cfg.MODEL.RPN.SMOOTH_L1_BETA, - } - - ret["pre_nms_topk"] = (cfg.MODEL.RPN.PRE_NMS_TOPK_TRAIN, cfg.MODEL.RPN.PRE_NMS_TOPK_TEST) - ret["post_nms_topk"] = (cfg.MODEL.RPN.POST_NMS_TOPK_TRAIN, cfg.MODEL.RPN.POST_NMS_TOPK_TEST) - - ret["anchor_generator"] = build_anchor_generator(cfg, [input_shape[f] for f in in_features]) - ret["anchor_matcher"] = Matcher( - cfg.MODEL.RPN.IOU_THRESHOLDS, cfg.MODEL.RPN.IOU_LABELS, allow_low_quality_matches=True - ) - ret["head"] = build_rpn_head(cfg, [input_shape[f] for f in in_features]) - return ret - - def _subsample_labels(self, label): - """ - Randomly sample a subset of positive and negative examples, and overwrite - the label vector to the ignore value (-1) for all elements that are not - included in the sample. - - Args: - labels (Tensor): a vector of -1, 0, 1. Will be modified in-place and returned. - """ - pos_idx, neg_idx = subsample_labels( - label, self.batch_size_per_image, self.positive_fraction, 0 - ) - # Fill with the ignore label (-1), then set positive and negative labels - label.fill_(-1) - label.scatter_(0, pos_idx, 1) - label.scatter_(0, neg_idx, 0) - return label - - @torch.jit.unused - @torch.no_grad() - def label_and_sample_anchors( - self, anchors: List[Boxes], gt_instances: List[Instances] - ) -> Tuple[List[torch.Tensor], List[torch.Tensor]]: - """ - Args: - anchors (list[Boxes]): anchors for each feature map. - gt_instances: the ground-truth instances for each image. - - Returns: - list[Tensor]: - List of #img tensors. i-th element is a vector of labels whose length is - the total number of anchors across all feature maps R = sum(Hi * Wi * A). - Label values are in {-1, 0, 1}, with meanings: -1 = ignore; 0 = negative - class; 1 = positive class. - list[Tensor]: - i-th element is a Rx4 tensor. The values are the matched gt boxes for each - anchor. 
Values are undefined for those anchors not labeled as 1. - """ - anchors = Boxes.cat(anchors) - - gt_boxes = [x.gt_boxes for x in gt_instances] - image_sizes = [x.image_size for x in gt_instances] - del gt_instances - - gt_labels = [] - matched_gt_boxes = [] - for image_size_i, gt_boxes_i in zip(image_sizes, gt_boxes): - """ - image_size_i: (h, w) for the i-th image - gt_boxes_i: ground-truth boxes for i-th image - """ - - match_quality_matrix = retry_if_cuda_oom(pairwise_iou)(gt_boxes_i, anchors) - matched_idxs, gt_labels_i = retry_if_cuda_oom(self.anchor_matcher)(match_quality_matrix) - # Matching is memory-expensive and may result in CPU tensors. But the result is small - gt_labels_i = gt_labels_i.to(device=gt_boxes_i.device) - del match_quality_matrix - - if self.anchor_boundary_thresh >= 0: - # Discard anchors that go out of the boundaries of the image - # NOTE: This is legacy functionality that is turned off by default in Detectron2 - anchors_inside_image = anchors.inside_box(image_size_i, self.anchor_boundary_thresh) - gt_labels_i[~anchors_inside_image] = -1 - - # A vector of labels (-1, 0, 1) for each anchor - gt_labels_i = self._subsample_labels(gt_labels_i) - - if len(gt_boxes_i) == 0: - # These values won't be used anyway since the anchor is labeled as background - matched_gt_boxes_i = torch.zeros_like(anchors.tensor) - else: - # TODO wasted indexing computation for ignored boxes - matched_gt_boxes_i = gt_boxes_i[matched_idxs].tensor - - gt_labels.append(gt_labels_i) # N,AHW - matched_gt_boxes.append(matched_gt_boxes_i) - return gt_labels, matched_gt_boxes - - @torch.jit.unused - def losses( - self, - anchors: List[Boxes], - pred_objectness_logits: List[torch.Tensor], - gt_labels: List[torch.Tensor], - pred_anchor_deltas: List[torch.Tensor], - gt_boxes: List[torch.Tensor], - ) -> Dict[str, torch.Tensor]: - """ - Return the losses from a set of RPN predictions and their associated ground-truth. - - Args: - anchors (list[Boxes or RotatedBoxes]): anchors for each feature map, each - has shape (Hi*Wi*A, B), where B is box dimension (4 or 5). - pred_objectness_logits (list[Tensor]): A list of L elements. - Element i is a tensor of shape (N, Hi*Wi*A) representing - the predicted objectness logits for all anchors. - gt_labels (list[Tensor]): Output of :meth:`label_and_sample_anchors`. - pred_anchor_deltas (list[Tensor]): A list of L elements. Element i is a tensor of shape - (N, Hi*Wi*A, 4 or 5) representing the predicted "deltas" used to transform anchors - to proposals. - gt_boxes (list[Tensor]): Output of :meth:`label_and_sample_anchors`. - - Returns: - dict[loss name -> loss value]: A dict mapping from loss name to loss value. - Loss names are: `loss_rpn_cls` for objectness classification and - `loss_rpn_loc` for proposal localization. 
- """ - num_images = len(gt_labels) - gt_labels = torch.stack(gt_labels) # (N, sum(Hi*Wi*Ai)) - - # Log the number of positive/negative anchors per-image that's used in training - pos_mask = gt_labels == 1 - num_pos_anchors = pos_mask.sum().item() - num_neg_anchors = (gt_labels == 0).sum().item() - storage = get_event_storage() - storage.put_scalar("rpn/num_pos_anchors", num_pos_anchors / num_images) - storage.put_scalar("rpn/num_neg_anchors", num_neg_anchors / num_images) - - localization_loss = _dense_box_regression_loss( - anchors, - self.box2box_transform, - pred_anchor_deltas, - gt_boxes, - pos_mask, - box_reg_loss_type=self.box_reg_loss_type, - smooth_l1_beta=self.smooth_l1_beta, - ) - - valid_mask = gt_labels >= 0 - objectness_loss = F.binary_cross_entropy_with_logits( - cat(pred_objectness_logits, dim=1)[valid_mask], - gt_labels[valid_mask].to(torch.float32), - reduction="sum", - ) - normalizer = self.batch_size_per_image * num_images - losses = { - "loss_rpn_cls": objectness_loss / normalizer, - # The original Faster R-CNN paper uses a slightly different normalizer - # for loc loss. But it doesn't matter in practice - "loss_rpn_loc": localization_loss / normalizer, - } - losses = {k: v * self.loss_weight.get(k, 1.0) for k, v in losses.items()} - return losses - - def forward( - self, - images: ImageList, - features: Dict[str, torch.Tensor], - gt_instances: Optional[List[Instances]] = None, - ): - """ - Args: - images (ImageList): input images of length `N` - features (dict[str, Tensor]): input data as a mapping from feature - map name to tensor. Axis 0 represents the number of images `N` in - the input data; axes 1-3 are channels, height, and width, which may - vary between feature maps (e.g., if a feature pyramid is used). - gt_instances (list[Instances], optional): a length `N` list of `Instances`s. - Each `Instances` stores ground-truth instances for the corresponding image. - - Returns: - proposals: list[Instances]: contains fields "proposal_boxes", "objectness_logits" - loss: dict[Tensor] or None - """ - features = [features[f] for f in self.in_features] - anchors = self.anchor_generator(features) - - pred_objectness_logits, pred_anchor_deltas = self.rpn_head(features) - # Transpose the Hi*Wi*A dimension to the middle: - pred_objectness_logits = [ - # (N, A, Hi, Wi) -> (N, Hi, Wi, A) -> (N, Hi*Wi*A) - score.permute(0, 2, 3, 1).flatten(1) - for score in pred_objectness_logits - ] - pred_anchor_deltas = [ - # (N, A*B, Hi, Wi) -> (N, A, B, Hi, Wi) -> (N, Hi, Wi, A, B) -> (N, Hi*Wi*A, B) - x.view(x.shape[0], -1, self.anchor_generator.box_dim, x.shape[-2], x.shape[-1]) - .permute(0, 3, 4, 1, 2) - .flatten(1, -2) - for x in pred_anchor_deltas - ] - - if self.training: - assert gt_instances is not None, "RPN requires gt_instances in training!" - gt_labels, gt_boxes = self.label_and_sample_anchors(anchors, gt_instances) - losses = self.losses( - anchors, pred_objectness_logits, gt_labels, pred_anchor_deltas, gt_boxes - ) - else: - losses = {} - proposals = self.predict_proposals( - anchors, pred_objectness_logits, pred_anchor_deltas, images.image_sizes - ) - return proposals, losses - - def predict_proposals( - self, - anchors: List[Boxes], - pred_objectness_logits: List[torch.Tensor], - pred_anchor_deltas: List[torch.Tensor], - image_sizes: List[Tuple[int, int]], - ): - """ - Decode all the predicted box regression deltas to proposals. Find the top proposals - by applying NMS and removing boxes that are too small. - - Returns: - proposals (list[Instances]): list of N Instances. 
The i-th Instances - stores post_nms_topk object proposals for image i, sorted by their - objectness score in descending order. - """ - # The proposals are treated as fixed for joint training with roi heads. - # This approach ignores the derivative w.r.t. the proposal boxes’ coordinates that - # are also network responses. - with torch.no_grad(): - pred_proposals = self._decode_proposals(anchors, pred_anchor_deltas) - return find_top_rpn_proposals( - pred_proposals, - pred_objectness_logits, - image_sizes, - self.nms_thresh, - self.pre_nms_topk[self.training], - self.post_nms_topk[self.training], - self.min_box_size, - self.training, - ) - - def _decode_proposals(self, anchors: List[Boxes], pred_anchor_deltas: List[torch.Tensor]): - """ - Transform anchors into proposals by applying the predicted anchor deltas. - - Returns: - proposals (list[Tensor]): A list of L tensors. Tensor i has shape - (N, Hi*Wi*A, B) - """ - N = pred_anchor_deltas[0].shape[0] - proposals = [] - # For each feature map - for anchors_i, pred_anchor_deltas_i in zip(anchors, pred_anchor_deltas): - B = anchors_i.tensor.size(1) - pred_anchor_deltas_i = pred_anchor_deltas_i.reshape(-1, B) - # Expand anchors to shape (N*Hi*Wi*A, B) - anchors_i = anchors_i.tensor.unsqueeze(0).expand(N, -1, -1).reshape(-1, B) - proposals_i = self.box2box_transform.apply_deltas(pred_anchor_deltas_i, anchors_i) - # Append feature map proposals with shape (N, Hi*Wi*A, B) - proposals.append(proposals_i.view(N, -1, B)) - return proposals diff --git a/spaces/Benson/text-generation/Examples/Cazador Asesino Hack Mod Apk Todos Los Personajes Desbloqueados.md b/spaces/Benson/text-generation/Examples/Cazador Asesino Hack Mod Apk Todos Los Personajes Desbloqueados.md deleted file mode 100644 index c1573270e803b089fa1cb33cb87e297dd1392764..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cazador Asesino Hack Mod Apk Todos Los Personajes Desbloqueados.md +++ /dev/null @@ -1,65 +0,0 @@ - -

Hunter Assassin Hack Mod APK: All Characters Unlocked

-

If you are looking for a fun and addictive game that challenges your reflexes and stealth skills, you might want to try Hunter Assassin. This game is a hit among millions of players who enjoy sneaking around and eliminating enemies with a knife. But what if you want to unlock all the characters and enjoy the game without limitations? That's where Hunter Assassin Hack Mod APK comes in. In this article, we will tell you everything you need to know about this mod apk, how to download and install it, and some tips and tricks for playing Hunter Assassin.

-

What is Hunter Assassin?

-

Hunter Assassin is a game developed by Ruby Game Studio, the same creators of popular games such as Gym Flip and Idle Digging Tycoon. The game is available for Android and iOS devices and has over 100 million downloads on the Google Play Store. Its premise is simple but engaging: you are an assassin who has to infiltrate a base full of armed guards and eliminate them one by one. Sounds easy, right? Well, not quite. The guards carry guns and can shoot you from a distance, while you only have a knife and your agility. You have to use the shadows, avoid the spotlights, and plan your moves carefully to avoid being detected and killed.

-

hunter assassin hack mod apk all characters unlocked


Download Zip: https://bltlly.com/2v6K7z



-

Gameplay and features

-

Hunter Assassin's gameplay is straightforward: tap the screen to move your character and attack the guards. You have to be quick and precise, as the guards will react to any noise or movement. You also have to watch out for traps, such as mines and lasers, which can damage you. The game has hundreds of levels, each with a different layout and number of enemies. The difficulty increases as you progress, and you will face more challenges and obstacles.

- -

How to unlock characters

-

As mentioned earlier, there are two ways to unlock characters in Hunter Assassin: gems and keys. Gems are the game's main currency, and you can use them to buy random characters from the shop. The price of each character varies with its rarity, from 500 gems for common ones up to 1000 gems for legendary ones. You can also spend gems to upgrade your characters and boost their stats.

-

Keys are another way to unlock characters, but they are harder to get. Keys open chests that contain random characters or gems. You can earn keys by completing certain levels or achievements, or by watching ads. You need 36 keys to open a chest, which means you have to play a lot of levels or watch a lot of ads to collect enough.
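To see what that grind looks like in numbers, here is a quick back-of-the-envelope sketch in Python. The 36-keys-per-chest figure comes from the game; the per-level and per-ad earn rates are hypothetical placeholders, since the game does not publish fixed rates.

# Back-of-the-envelope math for the key grind described above.
KEYS_PER_CHEST = 36  # stated in-game

# Hypothetical earn rates -- the real rates vary and are not published.
KEYS_PER_LEVEL = 1
KEYS_PER_AD = 1

def grind_for_chests(chests: int) -> dict:
    """Estimate how many levels or ads it takes to open `chests` chests."""
    keys_needed = chests * KEYS_PER_CHEST
    return {
        "keys_needed": keys_needed,
        "levels_if_grinding": keys_needed // KEYS_PER_LEVEL,
        "ads_if_watching": keys_needed // KEYS_PER_AD,
    }

print(grind_for_chests(3))
# {'keys_needed': 108, 'levels_if_grinding': 108, 'ads_if_watching': 108}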

-

What is Hunter Assassin Hack Mod APK?

-

If you don't want to spend hours playing levels or watching ads to unlock characters, there is another option: Hunter Assassin Hack Mod APK. This is a modified version of the original game that gives you unlimited gems and all characters unlocked from the start. This way, you can enjoy the game without restrictions or limitations.

-

Benefits of using the mod apk

-

There are many benefits to using Hunter Assassin Hack Mod APK, such as:

-
    -
• You can access all the characters without spending any gems or keys.
  • -
• You can upgrade your characters to the maximum level without spending any gems.
  • -
• You can play any level without worrying about your health or your enemies.
  • -
• You can enjoy the game without ads or interruptions.
  • -
• You can have more fun and excitement with the game.
  • -
-

How to download and install the mod apk

-

Downloading and installing Hunter Assassin Hack Mod APK is quick and simple. Just follow these steps (a small command-line sideloading sketch follows the list):

-

-
    -
1. Click this link to download the mod apk file: [Hunter Assassin Hack Mod APK Download].
-
2. Locate the downloaded file in your device's file manager and tap it to install it.
-
3. Launch the game and enjoy!
-
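If you would rather install the file from a computer than tap through a file manager, the same install can be done over adb. Here is a minimal Python sketch; it assumes the Android platform tools are installed, USB debugging is enabled on the device, and the file name is only a placeholder for whatever you actually downloaded.

import subprocess

# Placeholder name -- substitute the real name of the downloaded file.
APK_PATH = "hunter-assassin-mod.apk"

def sideload(apk_path: str) -> None:
    """Install an APK onto a USB-connected Android device with adb.

    The -r flag reinstalls over an existing copy while keeping its data.
    """
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    sideload(APK_PATH)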
-

Tips and tricks for playing Hunter Assassin

-

Now that you have Hunter Assassin Hack Mod APK, you can play the game with more ease and fun. That doesn't mean the game takes no skill or strategy, though. Here are some tips and tricks to help you master it and become a professional assassin:

-

Use stealth and speed

-

The key to playing Hunter Assassin is being stealthy and fast. You have to avoid being seen or heard by the guards, as they will shoot you on sight. You also have to be quick and decisive, since the guards react to any movement or noise. Use shadows, walls, boxes, and other objects to hide and sneak around, and use the map to see where the guards are and plan your moves accordingly. Remember, timing is everything in this game.

-

Upgrade your characters

-

Even though you have all the characters unlocked, you still need to upgrade them to improve their performance. Each character has three stats: speed, health, and skill. Speed determines how fast your character moves and attacks. Health determines how much damage your character can take before dying. Skill determines how effective your character's special ability is. You can upgrade these stats by spending gems, which you earn by playing levels or opening chests. Upgrading your characters makes them more powerful and versatile, and helps you clear levels faster and more easily.
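As a simple mental model of how those three stats fit together, here is an illustrative Python sketch. The stat names come straight from the game, but every number, including the per-point gem cost, is invented for the example.

from dataclasses import dataclass

@dataclass
class Character:
    name: str
    speed: int   # how fast the character moves and attacks
    health: int  # how much damage the character can take before dying
    skill: int   # how effective the character's special ability is

def upgrade(char: Character, stat: str, gems: int, cost_per_point: int = 100) -> int:
    """Spend gems to raise one stat by a point and return the gems left.

    The 100-gem cost is a made-up placeholder, not the game's real price.
    """
    if gems < cost_per_point:
        raise ValueError("not enough gems for this upgrade")
    setattr(char, stat, getattr(char, stat) + 1)
    return gems - cost_per_point

hero = Character("Hunter", speed=3, health=5, skill=2)
gems_left = upgrade(hero, "speed", gems=500)  # hero.speed is now 4, 400 gems left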

-

Collect gems and keys

- -

Conclusion

-

Hunter Assassin is a fun and addictive game that tests your reflexes and stealth skills. You have to infiltrate a base full of armed guards and eliminate them one by one with a knife. You can unlock different characters with unique abilities and stats, and upgrade them to make them more powerful. You can also use Hunter Assassin Hack Mod APK to get unlimited gems and all characters unlocked from the start, which makes the game more enjoyable and exciting.

-

Summary of the main points

-

In this article, we have covered:

-
    -
• What Hunter Assassin is and how to play it.
  • -
• What Hunter Assassin Hack Mod APK is and how to download and install it.
  • -
• Tips and tricks for playing Hunter Assassin.
  • -
-

Call to action

-

If you are ready to become a hunter assassin, download Hunter Assassin Hack Mod APK now and start playing! You will love this game if you like stealth, action, and a challenge. Don't forget to share this article with friends who might also enjoy the game. Happy hunting!

-

Frequently asked questions

-

Here are some frequently asked questions about Hunter Assassin Hack Mod APK:

-

Q: Is Hunter Assassin Hack Mod APK safe to use?

-

A: Yes, Hunter Assassin Hack Mod APK is safe to use. It does not contain any viruses or malware that could harm your device or data. However, you should always download it from a trusted source like this one, since some websites may offer fake or harmful files.

-

Q: Do I need to root or jailbreak my device to use Hunter Assassin Hack Mod APK?

-

A: No, you do not need to root or jailbreak your device to use Hunter Assassin Hack Mod APK. It works on both rooted and non-rooted devices, as well as on Android and iOS devices.

-

Q: Will I get banned from the game if I use Hunter Assassin Hack Mod APK?

- -

Q: How can I update Hunter Assassin Hack Mod APK?

-

A: Hunter Assassin Hack Mod APK is updated regularly to match the latest version of the original game. Whenever there is a new update, you can download it from this website and install it over the existing one. You do not need to uninstall or reinstall the game; just overwrite the old file with the new one.

-

Q: Can I play Hunter Assassin Hack Mod APK offline?

-

A: Yes, you can play Hunter Assassin Hack Mod APK offline. The game does not require an internet connection to run, and you can enjoy all the features of the mod apk without any problems. However, you may need an internet connection to access some online features, such as leaderboards or achievements.

-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat/src/lib/types/Message.ts b/spaces/BetterAPI/BetterChat/src/lib/types/Message.ts
deleted file mode 100644
index aee67c9b7049880ff2d4b2a9471270015b478a3f..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat/src/lib/types/Message.ts
+++ /dev/null
@@ -1,5 +0,0 @@
-export interface Message {
-	from: "user" | "assistant";
-	id: ReturnType<typeof crypto.randomUUID>;
-	content: string;
-}
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/__init__.py
deleted file mode 100644
index 73f58d7740813264d20047ffe918c82e1acc84eb..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/__init__.py
+++ /dev/null
@@ -1,177 +0,0 @@
-"""Rich text and beautiful formatting in the terminal."""
-
-import os
-from typing import IO, TYPE_CHECKING, Any, Callable, Optional, Union
-
-from ._extension import load_ipython_extension  # noqa: F401
-
-__all__ = ["get_console", "reconfigure", "print", "inspect", "print_json"]
-
-if TYPE_CHECKING:
-    from .console import Console
-
-# Global console used by alternative print
-_console: Optional["Console"] = None
-
-try:
-    _IMPORT_CWD = os.path.abspath(os.getcwd())
-except FileNotFoundError:
-    # Can happen if the cwd has been deleted
-    _IMPORT_CWD = ""
-
-
-def get_console() -> "Console":
-    """Get a global :class:`~rich.console.Console` instance. This function is used when Rich requires a Console,
-    and hasn't been explicitly given one.
-
-    Returns:
-        Console: A console instance.
-    """
-    global _console
-    if _console is None:
-        from .console import Console
-
-        _console = Console()
-
-    return _console
-
-
-def reconfigure(*args: Any, **kwargs: Any) -> None:
-    """Reconfigures the global console by replacing it with another.
-
-    Args:
-        *args (Any): Positional arguments for the replacement :class:`~rich.console.Console`.
-        **kwargs (Any): Keyword arguments for the replacement :class:`~rich.console.Console`.
-    """
-    from pip._vendor.rich.console import Console
-
-    new_console = Console(*args, **kwargs)
-    _console = get_console()
-    _console.__dict__ = new_console.__dict__
-
-
-def print(
-    *objects: Any,
-    sep: str = " ",
-    end: str = "\n",
-    file: Optional[IO[str]] = None,
-    flush: bool = False,
-) -> None:
-    r"""Print object(s) supplied via positional arguments.
-    This function has an identical signature to the built-in print.
-    For more advanced features, see the :class:`~rich.console.Console` class.
-
-    Args:
-        sep (str, optional): Separator between printed objects. Defaults to " ".
-        end (str, optional): Character to write at end of output. Defaults to "\\n".
-        file (IO[str], optional): File to write to, or None for stdout. Defaults to None.
-        flush (bool, optional): Has no effect as Rich always flushes output. Defaults to False.
-
-    """
-    from .console import Console
-
-    write_console = get_console() if file is None else Console(file=file)
-    return write_console.print(*objects, sep=sep, end=end)
-
-
-def print_json(
-    json: Optional[str] = None,
-    *,
-    data: Any = None,
-    indent: Union[None, int, str] = 2,
-    highlight: bool = True,
-    skip_keys: bool = False,
-    ensure_ascii: bool = False,
-    check_circular: bool = True,
-    allow_nan: bool = True,
-    default: Optional[Callable[[Any], Any]] = None,
-    sort_keys: bool = False,
-) -> None:
-    """Pretty prints JSON. Output will be valid JSON.
-
-    Args:
-        json (str): A string containing JSON.
- data (Any): If json is not supplied, then encode this data. - indent (int, optional): Number of spaces to indent. Defaults to 2. - highlight (bool, optional): Enable highlighting of output: Defaults to True. - skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False. - ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False. - check_circular (bool, optional): Check for circular references. Defaults to True. - allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True. - default (Callable, optional): A callable that converts values that can not be encoded - in to something that can be JSON encoded. Defaults to None. - sort_keys (bool, optional): Sort dictionary keys. Defaults to False. - """ - - get_console().print_json( - json, - data=data, - indent=indent, - highlight=highlight, - skip_keys=skip_keys, - ensure_ascii=ensure_ascii, - check_circular=check_circular, - allow_nan=allow_nan, - default=default, - sort_keys=sort_keys, - ) - - -def inspect( - obj: Any, - *, - console: Optional["Console"] = None, - title: Optional[str] = None, - help: bool = False, - methods: bool = False, - docs: bool = True, - private: bool = False, - dunder: bool = False, - sort: bool = True, - all: bool = False, - value: bool = True, -) -> None: - """Inspect any Python object. - - * inspect() to see summarized info. - * inspect(, methods=True) to see methods. - * inspect(, help=True) to see full (non-abbreviated) help. - * inspect(, private=True) to see private attributes (single underscore). - * inspect(, dunder=True) to see attributes beginning with double underscore. - * inspect(, all=True) to see all attributes. - - Args: - obj (Any): An object to inspect. - title (str, optional): Title to display over inspect result, or None use type. Defaults to None. - help (bool, optional): Show full help text rather than just first paragraph. Defaults to False. - methods (bool, optional): Enable inspection of callables. Defaults to False. - docs (bool, optional): Also render doc strings. Defaults to True. - private (bool, optional): Show private attributes (beginning with underscore). Defaults to False. - dunder (bool, optional): Show attributes starting with double underscore. Defaults to False. - sort (bool, optional): Sort attributes alphabetically. Defaults to True. - all (bool, optional): Show all attributes. Defaults to False. - value (bool, optional): Pretty print value. Defaults to True. 
- """ - _console = console or get_console() - from pip._vendor.rich._inspect import Inspect - - # Special case for inspect(inspect) - is_inspect = obj is inspect - - _inspect = Inspect( - obj, - title=title, - help=is_inspect or help, - methods=is_inspect or methods, - docs=is_inspect or docs, - private=private, - dunder=dunder, - sort=sort, - all=all, - value=value, - ) - _console.print(_inspect) - - -if __name__ == "__main__": # pragma: no cover - print("Hello, **World**") diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/rotate.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/rotate.py deleted file mode 100644 index 74795ba922bb376e24858760e63dc9124ef22b9f..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/rotate.py +++ /dev/null @@ -1,64 +0,0 @@ -from distutils.util import convert_path -from distutils import log -from distutils.errors import DistutilsOptionError -import os -import shutil - -from setuptools import Command - - -class rotate(Command): - """Delete older distributions""" - - description = "delete older distributions, keeping N newest files" - user_options = [ - ('match=', 'm', "patterns to match (required)"), - ('dist-dir=', 'd', "directory where the distributions are"), - ('keep=', 'k', "number of matching distributions to keep"), - ] - - boolean_options = [] - - def initialize_options(self): - self.match = None - self.dist_dir = None - self.keep = None - - def finalize_options(self): - if self.match is None: - raise DistutilsOptionError( - "Must specify one or more (comma-separated) match patterns " - "(e.g. '.zip' or '.egg')" - ) - if self.keep is None: - raise DistutilsOptionError("Must specify number of files to keep") - try: - self.keep = int(self.keep) - except ValueError as e: - raise DistutilsOptionError("--keep must be an integer") from e - if isinstance(self.match, str): - self.match = [ - convert_path(p.strip()) for p in self.match.split(',') - ] - self.set_undefined_options('bdist', ('dist_dir', 'dist_dir')) - - def run(self): - self.run_command("egg_info") - from glob import glob - - for pattern in self.match: - pattern = self.distribution.get_name() + '*' + pattern - files = glob(os.path.join(self.dist_dir, pattern)) - files = [(os.path.getmtime(f), f) for f in files] - files.sort() - files.reverse() - - log.info("%d file(s) matching %s", len(files), pattern) - files = files[self.keep:] - for (t, f) in files: - log.info("Deleting %s", f) - if not self.dry_run: - if os.path.isdir(f): - shutil.rmtree(f) - else: - os.unlink(f) diff --git a/spaces/BigData-KSU/VQA-in-Medical-Imagery/README.md b/spaces/BigData-KSU/VQA-in-Medical-Imagery/README.md deleted file mode 100644 index 5bb42b89d35a22a63323a75e2e91e26659e473c5..0000000000000000000000000000000000000000 --- a/spaces/BigData-KSU/VQA-in-Medical-Imagery/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Visual Question Answering in Medical Imagery -emoji: 🧑‍⚕️ -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.15.0 -app_file: MED_VQA_Huggyface_Gradio.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Bokanovskii/Image-to-music/app.py b/spaces/Bokanovskii/Image-to-music/app.py deleted file mode 100644 index 79a5cae3a9269324d7b5cc40954b8e071f70451f..0000000000000000000000000000000000000000 --- a/spaces/Bokanovskii/Image-to-music/app.py +++ /dev/null @@ -1,429 +0,0 @@ -import gradio as gr -import spotipy -from 
spotipy import oauth2 - -from transformers import ViTForImageClassification, ViTImageProcessor -import torch -from torch.nn import functional as F -from torchvision.io import read_image - -import tensorflow as tf - -from fastapi import FastAPI -from starlette.middleware.sessions import SessionMiddleware -from starlette.responses import HTMLResponse, RedirectResponse -from starlette.requests import Request -import gradio as gr -import uvicorn -from fastapi.responses import HTMLResponse -from fastapi.responses import RedirectResponse - -import numpy as np -import base64 -from io import BytesIO -from PIL import Image -import time - -import shred_model - -# Xception fine tuned from pretrained imagenet weights for identifying Sraddha -SRADDHA_MODEL_PATH = "shred_model" -SHRED_MODEL = tf.keras.models.load_model(SRADDHA_MODEL_PATH) - -SPOTIPY_TOKEN = None # Set in the homepage function - -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -print("Grabbing model") -mood_model = ViTForImageClassification.from_pretrained("jayanta/google-vit-base-patch16-224-cartoon-emotion-detection") -mood_model.eval() -mood_model.to(device) -print("Grabbing feature extractor") -mood_feature_extractor = ViTImageProcessor.from_pretrained("jayanta/google-vit-base-patch16-224-cartoon-emotion-detection") - -def main(img, playlist_length, privacy, gen_mode, genre_choice, request: gr.Request): - if img is None: - return None - print("Getting image inference from tansformer") - mood_dict = get_image_mood_dict_from_transformer(img) - print("Getting Sraddha Found Boolean from model") - sraddha_found = get_sraddha(img) - print("Building playlist") - playlist = get_playlist(mood_dict, img, playlist_length, privacy, gen_mode, genre_choice, request) - if playlist is None: - playlist = "Spotipy account token not set" - - ret = playlist - if sraddha_found: - valentines_jokes = ["Why shouldn't you trust a pastry chef on Valentine's Day? Because he will dessert you.", - "What do you give your Valentine in France? A big quiche.", - "What did the tortoise say on Valentine's Day? I turt-ally love you.", - "How did the squirrel get his Valentine's attention? He acted like a nut.", - "What do you call sweets that can keep a beat? Candy rappers.", - "What did the paper clip say to the magnet? I find you very attractive.", - "What did the caclulator say to the pencil? You can count on me."] - joke = valentines_jokes[np.random.randint(0, len(valentines_jokes)-1)] - sraddha_msg = """Sraddha, you are the love of my life and seeing you always lifts my spirits. Hopefully these tunes and a joke can do the same for you. -
-        """ + \
-        f"{joke}
" + \ - """- With Love, Scoob""" - return gr.update(value=ret, visible=True), gr.update(value=sraddha_msg, visible=True) - return gr.update(value=ret, visible=True), gr.update(visible=False) - -def get_image_mood_dict_from_transformer(img): - img = read_image(img) - encoding = mood_feature_extractor(images=img, return_tensors="pt") - pixel_values = encoding['pixel_values'].to(device) - - print('Running mood prediction') - outputs = mood_model(pixel_values) - - logits = outputs.logits - probabilities = F.softmax(logits, dim = -1).detach().numpy()[0] - mood_dict = dict(zip(mood_model.config.id2label.values(), probabilities)) - return mood_dict - -def get_sraddha(img): - fixed_img = shred_model.prepare_image(img) - prob = SHRED_MODEL.predict(fixed_img)[0] - if prob >= .5: - return True - -def compute_mood(mood_dict): - print(mood_dict) - return mood_dict['happy'] + mood_dict['angry'] * .5 + mood_dict['sad'] * .1 - -def get_playlist(mood_dict, img, playlist_length, privacy, gen_mode, genre_choice, request: gr.Request): - token = request.request.session.get('token') - genre_map = {'Rock': ['alt-rock', 'alternative', 'indie', 'r-n-b', 'rock'], 'Hip-hop': ['hip-hop'], 'Party': ['house', 'pop', 'party'], 'Mellow': ['blues', 'jazz', 'happy'], 'Indian': ['idm', 'indian'], 'Pop': ['pop', 'new-age'], 'Study': ['study', 'classical', 'jazz', 'happy', 'chill'], 'Romance': ['romance', 'happy', 'pop']} - - if token: - mood = compute_mood(mood_dict) - if gen_mode == "By a Chosen Genre": - playlist_name = "Mood " + str(round(mood * 100, 1)) + f": {genre_choice}" - else: - playlist_name = "Mood " + str(round(mood * 100, 1)) + f": {gen_mode}" - sp = spotipy.Spotify(token) - - if gen_mode == 'Recently Played': - top_tracks_uri = set([x['track']['uri'] for x in sp.current_user_recently_played(limit=50)['items']]) - # I honestly don't know if this errors for people with not enough saved tracks - # Shouldn't be a problem for Sraddha - first_few = [x['track']['uri'] for x in sp.current_user_saved_tracks(limit=50)['items']] - top_tracks_uri.update(first_few) - top_tracks_uri.update([x['track']['uri'] for x in sp.current_user_saved_tracks(limit=50, offset=50)['items']]) - top_tracks_uri.update([x['track']['uri'] for x in sp.current_user_saved_tracks(limit=50, offset=100)['items']]) - top_tracks_uri.update([x['track']['uri'] for x in sp.current_user_saved_tracks(limit=50, offset=150)['items']]) - top_tracks_uri.update([x['uri'] for x in sp.recommendations(seed_tracks=first_few[:5], limit=50)['tracks']]) - top_tracks_uri.update([x['uri'] for x in sp.recommendations(seed_tracks=first_few[5:10], limit=50)['tracks']]) - top_tracks_uri = list(top_tracks_uri) - elif gen_mode == 'By a Chosen Genre': - genres = genre_map[genre_choice] - final_track_list = [x['uri'] for x in sp.recommendations( - seed_genres=genres, limit=playlist_length, max_valence=mood+.15, - min_valence=mood-.15, min_danceability=mood/1.75, max_danceability=mood*8, - min_energy=mood/2)['tracks']] - else: - top_artists_uri = aggregate_favorite_artists(sp) - top_tracks_uri = aggregate_top_tracks(sp, top_artists_uri) - - if gen_mode != 'By a Chosen Genre': - final_track_list = filter_tracks(sp, top_tracks_uri, mood, playlist_length) - - # If no tracks fit the filter: generate some results anyways - if len(final_track_list) != playlist_length: - diff = playlist_length - len(final_track_list) - print(f'Filling playlist with {diff} more songs (filter too big)') - seed = [x['track']['uri'] for x in sp.current_user_recently_played(limit=5)['items']] - 
final_track_list += [x['uri'] for x in sp.recommendations( - seed_tracks=seed, limit=diff, - min_valence=mood-.3, min_energy=mood/3)['tracks']] - - iframe_embedding = create_playlist(sp, img, final_track_list, playlist_name, - privacy) - return iframe_embedding - return None - -def create_playlist(sp, img, tracks, playlist_name, privacy): - privacy = privacy == "Public" - user_id = sp.current_user()['id'] - playlist_description = "This playlist was created using the img-to-music application built by the best boyfriend there ever was and ever will be" - playlist_data = sp.user_playlist_create(user_id, playlist_name, public=privacy, - description=playlist_description) - playlist_id = playlist_data['id'] - if len(tracks) == 0: - return """No tracks could be generated from this image""" - sp.user_playlist_add_tracks(user_id, playlist_id, tracks) - - def upload_img(): - with Image.open(img) as im_file: - im_file.thumbnail((300, 300)) - buffered = BytesIO() - im_file.save(buffered, format="JPEG") - img_str = base64.b64encode(buffered.getvalue()) - sp.playlist_upload_cover_image(playlist_id, img_str) - try: - upload_img() - except spotipy.exceptions.SpotifyException as e: - print(f"SpotiftException on image upload: {e}") - print("Retrying") - time.sleep(5) - try: - upload_img() - except Exception as e: - print(e) - except requests.exceptions.ReadTimeout as e: - print(f"Image upload request timeout: {e}") - print("Retrying...") - time.sleep(5) - try: - upload_img() - except Exception as e: - print(e) - time.sleep(3) - iframe_embedding = f"""""" - return iframe_embedding - -def aggregate_favorite_artists(sp): - top_artists_name = set() - top_artists_uri = [] - - ranges = ['short_term', 'medium_term', 'long_term'] - for r in ranges: - top_artists_all_data = sp.current_user_top_artists(limit=50, time_range=r) - top_artists_data = top_artists_all_data['items'] - for artist_data in top_artists_data: - if artist_data["name"] not in top_artists_name: - top_artists_name.add(artist_data['name']) - top_artists_uri.append(artist_data['uri']) - - followed_artists_all_data = sp.current_user_followed_artists(limit=50) - followed_artsits_data = followed_artists_all_data['artists'] - for artist_data in followed_artsits_data['items']: - if artist_data["name"] not in top_artists_name: - top_artists_name.add(artist_data['name']) - top_artists_uri.append(artist_data['uri']) - - # attempt to garauntee 200 artists - i = 0 - while len(top_artists_uri) < 200: - related_artists_all_data = sp.artist_related_artists(top_artists_uri[i]) - i += 1 - related_artists_data = related_artists_all_data['artists'] - for artist_data in related_artists_data: - if artist_data["name"] not in top_artists_name: - top_artists_name.add(artist_data['name']) - top_artists_uri.append(artist_data['uri']) - if i == len(top_artists_uri): - # could build in a deeper artist recommendation finder here - # would do this if it was going to production but Sraddha follows lots of artists - break - - return top_artists_uri - -def aggregate_top_tracks(sp, top_artists_uri): - top_tracks_uri = [] - for artist in top_artists_uri: - top_tracks_all_data = sp.artist_top_tracks(artist) - top_tracks_data = top_tracks_all_data['tracks'] - for track_data in top_tracks_data: - top_tracks_uri.append(track_data['uri']) - return top_tracks_uri - -def filter_tracks(sp, top_tracks_uri, mood, playlist_length): - selected_tracks_uri = [] - - np.random.shuffle(top_tracks_uri) - # Batch network requests - BATCH_SIZE = 100 - i = 0 - all_track_data = [] - while i + BATCH_SIZE < 
len(top_tracks_uri): - all_track_data += sp.audio_features(top_tracks_uri[i:i+BATCH_SIZE]) - i += BATCH_SIZE - all_track_data += sp.audio_features(top_tracks_uri[i:]) - - for i, track in enumerate(top_tracks_uri): - track_data = all_track_data[i] - if track_data is None: - continue - - valence = track_data['valence'] - danceability = track_data['danceability'] - energy = track_data['energy'] - if mood < .1: - if valence <= mood + .15 and \ - danceability <= mood * 8 and \ - energy <= mood * 10: - selected_tracks_uri.append(track) - elif mood < .25: - if (mood - .1) <= valence <= (mood + .1) and \ - danceability <= mood * 4 and \ - energy <= mood * 5: - selected_tracks_uri.append(track) - elif mood < .5: - if mood - .05 <= valence <= mood + .05 and \ - danceability <= mood * 1.75 and \ - energy <= mood * 1.75: - selected_tracks_uri.append(track) - elif mood < .75: - if mood - .1 <= valence <= mood + .1 and \ - danceability >= mood / 2.5 and \ - energy >= mood / 2: - selected_tracks_uri.append(track) - elif mood < .9: - if mood - .1 <= valence <= mood + .1 and \ - danceability >= mood / 2 and \ - energy >= mood / 1.75: - selected_tracks_uri.append(track) - else: - if mood - .15 <= valence <= 1 and \ - danceability >= mood / 1.75 and \ - energy >= mood / 1.5: - selected_tracks_uri.append(track) - - if len(selected_tracks_uri) >= playlist_length: - break - return selected_tracks_uri - -# Define login and frontend -PORT_NUMBER = 8080 -SPOTIPY_CLIENT_ID = '2320153024d042c8ba138a108066246c' -SPOTIPY_CLIENT_SECRET = 'da2746490f6542a3b0cfcff50893e8e8' -#SPOTIPY_REDIRECT_URI = 'http://localhost:7860' -SPOTIPY_REDIRECT_URI = "https://Bokanovskii-Image-to-music.hf.space" -SCOPE = 'ugc-image-upload playlist-read-private playlist-read-collaborative playlist-modify-private playlist-modify-public user-top-read user-read-playback-position user-read-recently-played user-read-email user-follow-read user-library-modify user-library-read user-read-email user-read-private user-read-playback-state user-modify-playback-state user-read-currently-playing app-remote-control streaming' - -sp_oauth = oauth2.SpotifyOAuth(SPOTIPY_CLIENT_ID, SPOTIPY_CLIENT_SECRET, SPOTIPY_REDIRECT_URI, scope=SCOPE) - -app = FastAPI() -app.add_middleware(SessionMiddleware, secret_key="w.o.w") - -@app.get('/', response_class=HTMLResponse) -async def homepage(request: Request): - url = str(request.url) - auth_url = sp_oauth.get_authorize_url() - try: - code = sp_oauth.parse_response_code(url) - if code != url: - request.session['token'] = sp_oauth.get_access_token(code, as_dict=False, check_cache=False) - return RedirectResponse("/gradio") - except: - return """
-        Image to Music Generator
-        \n""" + \
-        "The server couldn't make a connection with Spotify: please try again\n" + \
-        f"Login to Spotify\n" + \
-        """
- - Click 'Open in a new window/tab' - -
- - This applet requires a whitelisted Spotify account (contact Charlie Ward) - """ - return """
-        Image to Music Generator
-        \n""" + \
-        f"Login to Spotify\n" + \
-        """
- - Click 'Open in a new window/tab' - -
- - This applet requires a whitelisted Spotify account (contact Charlie Ward) - """ - -with gr.Blocks(css="style.css") as demo: - with gr.Column(elem_id="col-container"): - gr.HTML("""
-        Image to Music Generator
""") - - input_img = gr.Image(type="filepath", elem_id="input-img") - sraddhas_box = gr.HTML(label="Sraddha's Box", elem_id="sraddhas-box", visible=False) - playlist_output = gr.HTML(label="Generated Playlist", elem_id="app-output", visible=True) - - with gr.Accordion(label="Playlist Generation Options", open=False): - playlist_length = gr.Slider(minimum=5, maximum=100, value=30, step=5, - label="Playlist Length", elem_id="playlist-length") - with gr.Row(): - privacy = gr.Radio(label="Playlist Privacy Level", choices=["Public", "Private"], - value="Private") - gen_mode = gr.Radio(label="Recommendation Base", choices=["Favorites", "Recently Played", "By a Chosen Genre"], value="Favorites") - with gr.Row(visible=False) as genre_choice_row: - genre_choice = gr.Dropdown(label='Choose a Genre', choices=['Rock', 'Pop', 'Hip-hop', 'Party', 'Mellow', 'Indian', 'Study', 'Romance'], value='Pop') - - def sraddha_box_hide(): - return {sraddhas_box: gr.update(visible=False)} - - def genre_dropdown_toggle(gen_mode): - if gen_mode == 'By a Chosen Genre': - return {genre_choice_row: gr.update(visible=True)} - else: - return {genre_choice_row: gr.update(visible=False)} - - generate = gr.Button("Generate Playlist from Image") - - article = """ - - """ - gr.HTML(article) - gen_mode.change(genre_dropdown_toggle, inputs=[gen_mode], outputs=[genre_choice_row]) - generate.click(sraddha_box_hide, outputs=[sraddhas_box]) - generate.click(main, inputs=[input_img, playlist_length, privacy, gen_mode, genre_choice], - outputs=[playlist_output, sraddhas_box], api_name="img-to-music") - -gradio_app = gr.mount_gradio_app(app, demo, "/gradio") -uvicorn.run(app, host="0.0.0.0", port=7860) diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_exceptions.py b/spaces/CVPR/LIVE/pybind11/tests/test_exceptions.py deleted file mode 100644 index 7d7088d00b8fec6aeab23f02c2646e3254b53917..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_exceptions.py +++ /dev/null @@ -1,191 +0,0 @@ -# -*- coding: utf-8 -*- -import sys - -import pytest - -from pybind11_tests import exceptions as m -import pybind11_cross_module_tests as cm - - -def test_std_exception(msg): - with pytest.raises(RuntimeError) as excinfo: - m.throw_std_exception() - assert msg(excinfo.value) == "This exception was intentionally thrown." 
- - -def test_error_already_set(msg): - with pytest.raises(RuntimeError) as excinfo: - m.throw_already_set(False) - assert msg(excinfo.value) == "Unknown internal error occurred" - - with pytest.raises(ValueError) as excinfo: - m.throw_already_set(True) - assert msg(excinfo.value) == "foo" - - -def test_cross_module_exceptions(): - with pytest.raises(RuntimeError) as excinfo: - cm.raise_runtime_error() - assert str(excinfo.value) == "My runtime error" - - with pytest.raises(ValueError) as excinfo: - cm.raise_value_error() - assert str(excinfo.value) == "My value error" - - with pytest.raises(ValueError) as excinfo: - cm.throw_pybind_value_error() - assert str(excinfo.value) == "pybind11 value error" - - with pytest.raises(TypeError) as excinfo: - cm.throw_pybind_type_error() - assert str(excinfo.value) == "pybind11 type error" - - with pytest.raises(StopIteration) as excinfo: - cm.throw_stop_iteration() - - -def test_python_call_in_catch(): - d = {} - assert m.python_call_in_destructor(d) is True - assert d["good"] is True - - -def test_python_alreadyset_in_destructor(monkeypatch, capsys): - hooked = False - triggered = [False] # mutable, so Python 2.7 closure can modify it - - if hasattr(sys, 'unraisablehook'): # Python 3.8+ - hooked = True - default_hook = sys.unraisablehook - - def hook(unraisable_hook_args): - exc_type, exc_value, exc_tb, err_msg, obj = unraisable_hook_args - if obj == 'already_set demo': - triggered[0] = True - default_hook(unraisable_hook_args) - return - - # Use monkeypatch so pytest can apply and remove the patch as appropriate - monkeypatch.setattr(sys, 'unraisablehook', hook) - - assert m.python_alreadyset_in_destructor('already_set demo') is True - if hooked: - assert triggered[0] is True - - _, captured_stderr = capsys.readouterr() - # Error message is different in Python 2 and 3, check for words that appear in both - assert 'ignored' in captured_stderr and 'already_set demo' in captured_stderr - - -def test_exception_matches(): - assert m.exception_matches() - assert m.exception_matches_base() - assert m.modulenotfound_exception_matches_base() - - -def test_custom(msg): - # Can we catch a MyException? - with pytest.raises(m.MyException) as excinfo: - m.throws1() - assert msg(excinfo.value) == "this error should go to a custom type" - - # Can we translate to standard Python exceptions? - with pytest.raises(RuntimeError) as excinfo: - m.throws2() - assert msg(excinfo.value) == "this error should go to a standard Python exception" - - # Can we handle unknown exceptions? - with pytest.raises(RuntimeError) as excinfo: - m.throws3() - assert msg(excinfo.value) == "Caught an unknown exception!" - - # Can we delegate to another handler by rethrowing? - with pytest.raises(m.MyException) as excinfo: - m.throws4() - assert msg(excinfo.value) == "this error is rethrown" - - # Can we fall-through to the default handler? - with pytest.raises(RuntimeError) as excinfo: - m.throws_logic_error() - assert msg(excinfo.value) == "this error should fall through to the standard handler" - - # OverFlow error translation. - with pytest.raises(OverflowError) as excinfo: - m.throws_overflow_error() - - # Can we handle a helper-declared exception? 
- with pytest.raises(m.MyException5) as excinfo: - m.throws5() - assert msg(excinfo.value) == "this is a helper-defined translated exception" - - # Exception subclassing: - with pytest.raises(m.MyException5) as excinfo: - m.throws5_1() - assert msg(excinfo.value) == "MyException5 subclass" - assert isinstance(excinfo.value, m.MyException5_1) - - with pytest.raises(m.MyException5_1) as excinfo: - m.throws5_1() - assert msg(excinfo.value) == "MyException5 subclass" - - with pytest.raises(m.MyException5) as excinfo: - try: - m.throws5() - except m.MyException5_1: - raise RuntimeError("Exception error: caught child from parent") - assert msg(excinfo.value) == "this is a helper-defined translated exception" - - -def test_nested_throws(capture): - """Tests nested (e.g. C++ -> Python -> C++) exception handling""" - - def throw_myex(): - raise m.MyException("nested error") - - def throw_myex5(): - raise m.MyException5("nested error 5") - - # In the comments below, the exception is caught in the first step, thrown in the last step - - # C++ -> Python - with capture: - m.try_catch(m.MyException5, throw_myex5) - assert str(capture).startswith("MyException5: nested error 5") - - # Python -> C++ -> Python - with pytest.raises(m.MyException) as excinfo: - m.try_catch(m.MyException5, throw_myex) - assert str(excinfo.value) == "nested error" - - def pycatch(exctype, f, *args): - try: - f(*args) - except m.MyException as e: - print(e) - - # C++ -> Python -> C++ -> Python - with capture: - m.try_catch( - m.MyException5, pycatch, m.MyException, m.try_catch, m.MyException, throw_myex5) - assert str(capture).startswith("MyException5: nested error 5") - - # C++ -> Python -> C++ - with capture: - m.try_catch(m.MyException, pycatch, m.MyException5, m.throws4) - assert capture == "this error is rethrown" - - # Python -> C++ -> Python -> C++ - with pytest.raises(m.MyException5) as excinfo: - m.try_catch(m.MyException, pycatch, m.MyException, m.throws5) - assert str(excinfo.value) == "this is a helper-defined translated exception" - - -# This can often happen if you wrap a pybind11 class in a Python wrapper -def test_invalid_repr(): - - class MyRepr(object): - def __repr__(self): - raise AttributeError("Example error") - - with pytest.raises(TypeError): - m.simple_bool_passthrough(MyRepr()) diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/range/tail_flags.h b/spaces/CVPR/LIVE/thrust/thrust/detail/range/tail_flags.h deleted file mode 100644 index 32ccb53c6a36c2ce1ce75a3a9475729f652e1d75..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/range/tail_flags.h +++ /dev/null @@ -1,134 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace detail -{ - - -template::type>, - typename ValueType = bool, - typename IndexType = typename thrust::iterator_difference::type> - class tail_flags -{ - // XXX WAR cudafe bug - //private: - public: - struct tail_flag_functor - { - BinaryPredicate binary_pred; // this must be the first member for performance reasons - RandomAccessIterator iter; - IndexType n; - - typedef ValueType result_type; - - __host__ __device__ - tail_flag_functor(RandomAccessIterator first, RandomAccessIterator last) - : binary_pred(), iter(first), n(last - first) - {} - - __host__ __device__ - tail_flag_functor(RandomAccessIterator first, RandomAccessIterator last, BinaryPredicate binary_pred) - : binary_pred(binary_pred), iter(first), n(last - first) - {} - - __host__ __device__ __thrust_forceinline__ - result_type operator()(const IndexType &i) - { - return (i == (n - 1) || !binary_pred(iter[i], iter[i+1])); - } - }; - - typedef thrust::counting_iterator counting_iterator; - - public: - typedef thrust::transform_iterator< - tail_flag_functor, - counting_iterator - > iterator; - - __thrust_exec_check_disable__ - __host__ __device__ - tail_flags(RandomAccessIterator first, RandomAccessIterator last) - : m_begin(thrust::make_transform_iterator(thrust::counting_iterator(0), - tail_flag_functor(first, last))), - m_end(m_begin + (last - first)) - {} - - __thrust_exec_check_disable__ - __host__ __device__ - tail_flags(RandomAccessIterator first, RandomAccessIterator last, BinaryPredicate binary_pred) - : m_begin(thrust::make_transform_iterator(thrust::counting_iterator(0), - tail_flag_functor(first, last, binary_pred))), - m_end(m_begin + (last - first)) - {} - - __host__ __device__ - iterator begin() const - { - return m_begin; - } - - __host__ __device__ - iterator end() const - { - return m_end; - } - - template - __host__ __device__ - typename iterator::reference operator[](OtherIndex i) - { - return *(begin() + i); - } - - private: - iterator m_begin, m_end; -}; - - -template -__host__ __device__ -tail_flags - make_tail_flags(RandomAccessIterator first, RandomAccessIterator last, BinaryPredicate binary_pred) -{ - return tail_flags(first, last, binary_pred); -} - - -template -__host__ __device__ -tail_flags - make_tail_flags(RandomAccessIterator first, RandomAccessIterator last) -{ - return tail_flags(first, last); -} - - -} // end detail -} // end thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/inner_product.h b/spaces/CVPR/LIVE/thrust/thrust/inner_product.h deleted file mode 100644 index 0206eff38a4800282e3c585162fbeea1c6a350ca..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/inner_product.h +++ /dev/null @@ -1,264 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! 
\file inner_product.h - * \brief Mathematical inner product between ranges - */ - -#pragma once - -#include -#include - -namespace thrust -{ - - -/*! \addtogroup reductions - * \{ - * \addtogroup transformed_reductions Transformed Reductions - * \ingroup reductions - * \{ - */ - - -/*! \p inner_product calculates an inner product of the ranges - * [first1, last1) and [first2, first2 + (last1 - first1)). - * - * Specifically, this version of \p inner_product computes the sum - * init + (*first1 * *first2) + (*(first1+1) * *(first2+1)) + ... - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first sequence. - * \param last1 The end of the first sequence. - * \param first2 The beginning of the second sequence. - * \param init Initial value of the result. - * \return The inner product of sequences [first1, last1) - * and [first2, last2) plus \p init. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \tparam InputIterator2 is a model of Input Iterator, - * \tparam OutputType is a model of Assignable, - * and if \c x is an object of type \p OutputType, and \c y is an object of \p InputIterator1's \c value_type, - * and \c z is an object of \p InputIterator2's \c value_type, then x + y * z is defined - * and is convertible to \p OutputType. - * - * The following code demonstrates how to use \p inner_product to - * compute the dot product of two vectors using the \p thrust::host execution policy for parallelization. - * - * \code - * #include - * #include - * ... - * float vec1[3] = {1.0f, 2.0f, 5.0f}; - * float vec2[3] = {4.0f, 1.0f, 5.0f}; - * - * float result = thrust::inner_product(thrust::host, vec1, vec1 + 3, vec2, 0.0f); - * - * // result == 31.0f - * \endcode - * - * \see http://www.sgi.com/tech/stl/inner_product.html - */ -template -__host__ __device__ -OutputType inner_product(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - OutputType init); - - -/*! \p inner_product calculates an inner product of the ranges - * [first1, last1) and [first2, first2 + (last1 - first1)). - * - * Specifically, this version of \p inner_product computes the sum - * init + (*first1 * *first2) + (*(first1+1) * *(first2+1)) + ... - * - * Unlike the C++ Standard Template Library function std::inner_product, - * this version offers no guarantee on order of execution. - * - * \param first1 The beginning of the first sequence. - * \param last1 The end of the first sequence. - * \param first2 The beginning of the second sequence. - * \param init Initial value of the result. - * \return The inner product of sequences [first1, last1) - * and [first2, last2) plus \p init. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \tparam InputIterator2 is a model of Input Iterator, - * \tparam OutputType is a model of Assignable, - * and if \c x is an object of type \p OutputType, and \c y is an object of \p InputIterator1's \c value_type, - * and \c z is an object of \p InputIterator2's \c value_type, then x + y * z is defined - * and is convertible to \p OutputType. - * - * The following code demonstrates how to use \p inner_product to - * compute the dot product of two vectors. - * - * \code - * #include - * ... 
- * float vec1[3] = {1.0f, 2.0f, 5.0f}; - * float vec2[3] = {4.0f, 1.0f, 5.0f}; - * - * float result = thrust::inner_product(vec1, vec1 + 3, vec2, 0.0f); - * - * // result == 31.0f - * \endcode - * - * \see http://www.sgi.com/tech/stl/inner_product.html - */ -template -OutputType inner_product(InputIterator1 first1, InputIterator1 last1, - InputIterator2 first2, OutputType init); - - -/*! \p inner_product calculates an inner product of the ranges - * [first1, last1) and [first2, first2 + (last1 - first1)). - * - * This version of \p inner_product is identical to the first, except that is uses - * two user-supplied function objects instead of \c operator+ and \c operator*. - * - * Specifically, this version of \p inner_product computes the sum - * binary_op1( init, binary_op2(*first1, *first2) ), ... - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first sequence. - * \param last1 The end of the first sequence. - * \param first2 The beginning of the second sequence. - * \param init Initial value of the result. - * \param binary_op1 Generalized addition operation. - * \param binary_op2 Generalized multiplication operation. - * \return The inner product of sequences [first1, last1) and [first2, last2). - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * and \p InputIterator1's \c value_type is convertible to \p BinaryFunction2's \c first_argument_type. - * \tparam InputIterator2 is a model of Input Iterator. - * and \p InputIterator2's \c value_type is convertible to \p BinaryFunction2's \c second_argument_type. - * \tparam OutputType is a model of Assignable, - * and \p OutputType is convertible to \p BinaryFunction1's \c first_argument_type. - * \tparam BinaryFunction1 is a model of Binary Function, - * and \p BinaryFunction1's \c return_type is convertible to \p OutputType. - * \tparam BinaryFunction2 is a model of Binary Function, - * and \p BinaryFunction2's \c return_type is convertible to \p BinaryFunction1's \c second_argument_type. - * - * \code - * #include - * #include - * ... - * float vec1[3] = {1.0f, 2.0f, 5.0f}; - * float vec2[3] = {4.0f, 1.0f, 5.0f}; - * - * float init = 0.0f; - * thrust::plus binary_op1; - * thrust::multiplies binary_op2; - * - * float result = thrust::inner_product(thrust::host, vec1, vec1 + 3, vec2, init, binary_op1, binary_op2); - * - * // result == 31.0f - * \endcode - * - * \see http://www.sgi.com/tech/stl/inner_product.html - */ -template -__host__ __device__ -OutputType inner_product(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - OutputType init, - BinaryFunction1 binary_op1, - BinaryFunction2 binary_op2); - - -/*! \p inner_product calculates an inner product of the ranges - * [first1, last1) and [first2, first2 + (last1 - first1)). - * - * This version of \p inner_product is identical to the first, except that is uses - * two user-supplied function objects instead of \c operator+ and \c operator*. - * - * Specifically, this version of \p inner_product computes the sum - * binary_op1( init, binary_op2(*first1, *first2) ), ... - * - * Unlike the C++ Standard Template Library function std::inner_product, - * this version offers no guarantee on order of execution. - * - * \param first1 The beginning of the first sequence. 
- * \param last1 The end of the first sequence. - * \param first2 The beginning of the second sequence. - * \param init Initial value of the result. - * \param binary_op1 Generalized addition operation. - * \param binary_op2 Generalized multiplication operation. - * \return The inner product of sequences [first1, last1) and [first2, last2). - * - * \tparam InputIterator1 is a model of Input Iterator, - * and \p InputIterator1's \c value_type is convertible to \p BinaryFunction2's \c first_argument_type. - * \tparam InputIterator2 is a model of Input Iterator. - * and \p InputIterator2's \c value_type is convertible to \p BinaryFunction2's \c second_argument_type. - * \tparam OutputType is a model of Assignable, - * and \p OutputType is convertible to \p BinaryFunction1's \c first_argument_type. - * \tparam BinaryFunction1 is a model of Binary Function, - * and \p BinaryFunction1's \c return_type is convertible to \p OutputType. - * \tparam BinaryFunction2 is a model of Binary Function, - * and \p BinaryFunction2's \c return_type is convertible to \p BinaryFunction1's \c second_argument_type. - * - * \code - * #include - * ... - * float vec1[3] = {1.0f, 2.0f, 5.0f}; - * float vec2[3] = {4.0f, 1.0f, 5.0f}; - * - * float init = 0.0f; - * thrust::plus binary_op1; - * thrust::multiplies binary_op2; - * - * float result = thrust::inner_product(vec1, vec1 + 3, vec2, init, binary_op1, binary_op2); - * - * // result == 31.0f - * \endcode - * - * \see http://www.sgi.com/tech/stl/inner_product.html - */ -template -OutputType inner_product(InputIterator1 first1, InputIterator1 last1, - InputIterator2 first2, OutputType init, - BinaryFunction1 binary_op1, BinaryFunction2 binary_op2); - - -/*! \} // end transformed_reductions - * \} // end reductions - */ - -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/adjacent_difference.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/adjacent_difference.h deleted file mode 100644 index b82242c7c0798b58c6d2c2d3da12770dd373d562..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/adjacent_difference.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits adjacent_difference -#include - diff --git a/spaces/CVPR/WALT/mmdet/models/detectors/nasfcos.py b/spaces/CVPR/WALT/mmdet/models/detectors/nasfcos.py deleted file mode 100644 index fb0148351546f45a451ef5f7a2a9ef4024e85b7c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/detectors/nasfcos.py +++ /dev/null @@ -1,20 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class NASFCOS(SingleStageDetector): - """NAS-FCOS: Fast Neural Architecture Search for Object Detection. 
- - https://arxiv.org/abs/1906.0442 - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(NASFCOS, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/CVPR/regionclip-demo/detectron2/utils/__init__.py b/spaces/CVPR/regionclip-demo/detectron2/utils/__init__.py deleted file mode 100644 index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. diff --git a/spaces/CVPR/v-doc_abstractive_mac/model.py b/spaces/CVPR/v-doc_abstractive_mac/model.py deleted file mode 100644 index 6ad7a70856a43a581f6ee335dbe4733fdc010d9a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/v-doc_abstractive_mac/model.py +++ /dev/null @@ -1,802 +0,0 @@ -import time -import math -import numpy as np -import tensorflow as tf - -import ops -from config import config -from mac_cell import MACCell -''' -The MAC network model. It performs reasoning processes to answer a question over -knowledge base (the image) by decomposing it into attention-based computational steps, -each perform by a recurrent MAC cell. - -The network has three main components. -Input unit: processes the network inputs: raw question strings and image into -distributional representations. - -The MAC network: calls the MACcells (mac_cell.py) config.netLength number of times, -to perform the reasoning process over the question and image. - -The output unit: a classifier that receives the question and final state of the MAC -network and uses them to compute log-likelihood over the possible one-word answers. -''' -class MACnet(object): - - '''Initialize the class. - - Args: - embeddingsInit: initialization for word embeddings (random / glove). - answerDict: answers dictionary (mapping between integer id and symbol). - ''' - def __init__(self, embeddingsInit, answerDict): - self.embeddingsInit = embeddingsInit - self.answerDict = answerDict - self.build() - - ''' - Initializes placeholders. - questionsIndicesAll: integer ids of question words. - [batchSize, questionLength] - - questionLengthsAll: length of each question. - [batchSize] - - imagesPlaceholder: image features. - [batchSize, channels, height, width] - (converted internally to [batchSize, height, width, channels]) - - answersIndicesAll: integer ids of answer words. - [batchSize] - - lr: learning rate (tensor scalar) - train: train / evaluation (tensor boolean) - - dropout values dictionary (tensor scalars) - ''' - # change to H x W x C? - def addPlaceholders(self): - with tf.variable_scope("Placeholders"): - ## data - # questions - self.questionsIndicesAll = tf.placeholder(tf.int32, shape = (None, None)) - self.questionLengthsAll = tf.placeholder(tf.int32, shape = (None, )) - - # images - # put image known dimension as last dim? 
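-            # The raw placeholder is channels-first [batchSize, C, H, W]; the transpose
-            # below converts it to channels-last [batchSize, H, W, C] for the CNN stem.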
- self.imagesPlaceholder = tf.placeholder(tf.float32, shape = (None, None, None, None)) - self.imagesAll = tf.transpose(self.imagesPlaceholder, (0, 2, 3, 1)) - # self.imageH = tf.shape(self.imagesAll)[1] - # self.imageW = tf.shape(self.imagesAll)[2] - - # answers - self.answersIndicesAll = tf.placeholder(tf.int32, shape = (None, )) - - ## optimization - self.lr = tf.placeholder(tf.float32, shape = ()) - self.train = tf.placeholder(tf.bool, shape = ()) - self.batchSizeAll = tf.shape(self.questionsIndicesAll)[0] - - ## dropouts - # TODO: change dropouts to be 1 - current - self.dropouts = { - "encInput": tf.placeholder(tf.float32, shape = ()), - "encState": tf.placeholder(tf.float32, shape = ()), - "stem": tf.placeholder(tf.float32, shape = ()), - "question": tf.placeholder(tf.float32, shape = ()), - # self.dropouts["question"]Out = tf.placeholder(tf.float32, shape = ()) - # self.dropouts["question"]MAC = tf.placeholder(tf.float32, shape = ()) - "read": tf.placeholder(tf.float32, shape = ()), - "write": tf.placeholder(tf.float32, shape = ()), - "memory": tf.placeholder(tf.float32, shape = ()), - "output": tf.placeholder(tf.float32, shape = ()) - } - - # batch norm params - self.batchNorm = {"decay": config.bnDecay, "train": self.train} - - # if config.parametricDropout: - # self.dropouts["question"] = parametricDropout("qDropout", self.train) - # self.dropouts["read"] = parametricDropout("readDropout", self.train) - # else: - # self.dropouts["question"] = self.dropouts["_q"] - # self.dropouts["read"] = self.dropouts["_read"] - - # if config.tempDynamic: - # self.tempAnnealRate = tf.placeholder(tf.float32, shape = ()) - - self.H, self.W, self.imageInDim = config.imageDims - - # Feeds data into placeholders. See addPlaceholders method for further details. - def createFeedDict(self, data, images, train): - feedDict = { - self.questionsIndicesAll: np.array(data["question"]), - self.questionLengthsAll: np.array(data["questionLength"]), - self.imagesPlaceholder: images, - # self.answersIndicesAll: [0], - - self.dropouts["encInput"]: config.encInputDropout if train else 1.0, - self.dropouts["encState"]: config.encStateDropout if train else 1.0, - self.dropouts["stem"]: config.stemDropout if train else 1.0, - self.dropouts["question"]: config.qDropout if train else 1.0, #_ - self.dropouts["memory"]: config.memoryDropout if train else 1.0, - self.dropouts["read"]: config.readDropout if train else 1.0, #_ - self.dropouts["write"]: config.writeDropout if train else 1.0, - self.dropouts["output"]: config.outputDropout if train else 1.0, - # self.dropouts["question"]Out: config.qDropoutOut if train else 1.0, - # self.dropouts["question"]MAC: config.qDropoutMAC if train else 1.0, - - self.lr: config.lr, - self.train: train - } - - # if config.tempDynamic: - # feedDict[self.tempAnnealRate] = tempAnnealRate - - return feedDict - - # Splits data to a specific GPU (tower) for parallelization - def initTowerBatch(self, towerI, towersNum, dataSize): - towerBatchSize = tf.floordiv(dataSize, towersNum) - start = towerI * towerBatchSize - end = (towerI + 1) * towerBatchSize if towerI < towersNum - 1 else dataSize - - self.questionsIndices = self.questionsIndicesAll[start:end] - self.questionLengths = self.questionLengthsAll[start:end] - self.images = self.imagesAll[start:end] - self.answersIndices = self.answersIndicesAll[start:end] - - self.batchSize = end - start - - ''' - The Image Input Unit (stem). 
Passes the image features through a CNN-network - Optionally adds position encoding (doesn't in the default behavior). - Flatten the image into Height * Width "Knowledge base" array. - - Args: - images: image input. [batchSize, height, width, inDim] - inDim: input image dimension - outDim: image out dimension - addLoc: if not None, adds positional encoding to the image - - Returns preprocessed images. - [batchSize, height * width, outDim] - ''' - def stem(self, images, inDim, outDim, addLoc = None): - - with tf.variable_scope("stem"): - if addLoc is None: - addLoc = config.locationAware - - if config.stemLinear: - features = ops.linear(images, inDim, outDim) - else: - dims = [inDim] + ([config.stemDim] * (config.stemNumLayers - 1)) + [outDim] - - if addLoc: - images, inDim = ops.addLocation(images, inDim, config.locationDim, - h = self.H, w = self.W, locType = config.locationType) - dims[0] = inDim - - # if config.locationType == "PE": - # dims[-1] /= 4 - # dims[-1] *= 3 - # else: - # dims[-1] -= 2 - features = ops.CNNLayer(images, dims, - batchNorm = self.batchNorm if config.stemBN else None, - dropout = self.dropouts["stem"], - kernelSizes = config.stemKernelSizes, - strides = config.stemStrideSizes) - - # if addLoc: - # lDim = outDim / 4 - # lDim /= 4 - # features, _ = addLocation(features, dims[-1], lDim, h = H, w = W, - # locType = config.locationType) - - if config.stemGridRnn: - features = ops.multigridRNNLayer(features, H, W, outDim) - - # flatten the 2d images into a 1d KB - features = tf.reshape(features, (self.batchSize, -1, outDim)) - - return features - - # Embed question using parametrized word embeddings. - # The embedding are initialized to the values supported to the class initialization - def qEmbeddingsOp(self, qIndices, embInit): - with tf.variable_scope("qEmbeddings"): - # if config.useCPU: - # with tf.device('/cpu:0'): - # embeddingsVar = tf.Variable(self.embeddingsInit, name = "embeddings", dtype = tf.float32) - # else: - # embeddingsVar = tf.Variable(self.embeddingsInit, name = "embeddings", dtype = tf.float32) - embeddingsVar = tf.get_variable("emb", initializer = tf.to_float(embInit), - dtype = tf.float32, trainable = (not config.wrdEmbFixed)) - embeddings = tf.concat([tf.zeros((1, config.wrdEmbDim)), embeddingsVar], axis = 0) - questions = tf.nn.embedding_lookup(embeddings, qIndices) - - return questions, embeddings - - # Embed answer words - def aEmbeddingsOp(self, embInit): - with tf.variable_scope("aEmbeddings"): - if embInit is None: - return None - answerEmbeddings = tf.get_variable("emb", initializer = tf.to_float(embInit), - dtype = tf.float32) - return answerEmbeddings - - # Embed question and answer words with tied embeddings - def qaEmbeddingsOp(self, qIndices, embInit): - questions, qaEmbeddings = self.qEmbeddingsOp(qIndices, embInit["qa"]) - aEmbeddings = tf.nn.embedding_lookup(qaEmbeddings, embInit["ansMap"]) - - return questions, qaEmbeddings, aEmbeddings - - ''' - Embed question (and optionally answer) using parametrized word embeddings. 
- The embedding are initialized to the values supported to the class initialization - ''' - def embeddingsOp(self, qIndices, embInit): - if config.ansEmbMod == "SHARED": - questions, qEmb, aEmb = self.qaEmbeddingsOp(qIndices, embInit) - else: - questions, qEmb = self.qEmbeddingsOp(qIndices, embInit["q"]) - aEmb = self.aEmbeddingsOp(embInit["a"]) - - return questions, qEmb, aEmb - - ''' - The Question Input Unit embeds the questions to randomly-initialized word vectors, - and runs a recurrent bidirectional encoder (RNN/LSTM etc.) that gives back - vector representations for each question (the RNN final hidden state), and - representations for each of the question words (the RNN outputs for each word). - - The method uses bidirectional LSTM, by default. - Optionally projects the outputs of the LSTM (with linear projection / - optionally with some activation). - - Args: - questions: question word embeddings - [batchSize, questionLength, wordEmbDim] - - questionLengths: the question lengths. - [batchSize] - - projWords: True to apply projection on RNN outputs. - projQuestion: True to apply projection on final RNN state. - projDim: projection dimension in case projection is applied. - - Returns: - Contextual Words: RNN outputs for the words. - [batchSize, questionLength, ctrlDim] - - Vectorized Question: Final hidden state representing the whole question. - [batchSize, ctrlDim] - ''' - def encoder(self, questions, questionLengths, projWords = False, - projQuestion = False, projDim = None): - - with tf.variable_scope("encoder"): - # variational dropout option - varDp = None - if config.encVariationalDropout: - varDp = {"stateDp": self.dropouts["stateInput"], - "inputDp": self.dropouts["encInput"], - "inputSize": config.wrdEmbDim} - - # rnns - for i in range(config.encNumLayers): - questionCntxWords, vecQuestions = ops.RNNLayer(questions, questionLengths, - config.encDim, bi = config.encBi, cellType = config.encType, - dropout = self.dropouts["encInput"], varDp = varDp, name = "rnn%d" % i) - - # dropout for the question vector - vecQuestions = tf.nn.dropout(vecQuestions, self.dropouts["question"]) - - # projection of encoder outputs - if projWords: - questionCntxWords = ops.linear(questionCntxWords, config.encDim, projDim, - name = "projCW") - if projQuestion: - vecQuestions = ops.linear(vecQuestions, config.encDim, projDim, - act = config.encProjQAct, name = "projQ") - - return questionCntxWords, vecQuestions - - ''' - Stacked Attention Layer for baseline. Computes interaction between images - and the previous memory, and casts it back to compute attention over the - image, which in turn is summed up with the previous memory to result in the - new one. - - Args: - images: input image. - [batchSize, H * W, inDim] - - memory: previous memory value - [batchSize, inDim] - - inDim: inputs dimension - hDim: hidden dimension to compute interactions between image and memory - - Returns the new memory value. 
- ''' - def baselineAttLayer(self, images, memory, inDim, hDim, name = "", reuse = None): - with tf.variable_scope("attLayer" + name, reuse = reuse): - # projImages = ops.linear(images, inDim, hDim, name = "projImage") - # projMemory = tf.expand_dims(ops.linear(memory, inDim, hDim, name = "projMemory"), axis = -2) - # if config.saMultiplicative: - # interactions = projImages * projMemory - # else: - # interactions = tf.tanh(projImages + projMemory) - interactions, _ = ops.mul(images, memory, inDim, proj = {"dim": hDim, "shared": False}, - interMod = config.baselineAttType) - - attention = ops.inter2att(interactions, hDim) - summary = ops.att2Smry(attention, images) - newMemory = memory + summary - - return newMemory - - ''' - Baseline approach: - If baselineAtt is True, applies several layers (baselineAttNumLayers) - of stacked attention to image and memory, when memory is initialized - to the vector questions. See baselineAttLayer for further details. - - Otherwise, computes result output features based on image representation - (baselineCNN), or question (baselineLSTM) or both. - - Args: - vecQuestions: question vector representation - [batchSize, questionDim] - - questionDim: dimension of question vectors - - images: (flattened) image representation - [batchSize, imageDim] - - imageDim: dimension of image representations. - - hDim: hidden dimension to compute interactions between image and memory - (for attention-based baseline). - - Returns final features to use in later classifier. - [batchSize, outDim] (out dimension depends on baseline method) - ''' - def baseline(self, vecQuestions, questionDim, images, imageDim, hDim): - with tf.variable_scope("baseline"): - if config.baselineAtt: - memory = self.linear(vecQuestions, questionDim, hDim, name = "qProj") - images = self.linear(images, imageDim, hDim, name = "iProj") - - for i in range(config.baselineAttNumLayers): - memory = self.baselineAttLayer(images, memory, hDim, hDim, - name = "baseline%d" % i) - memDim = hDim - else: - images, imagesDim = ops.linearizeFeatures(images, self.H, self.W, - imageDim, projDim = config.baselineProjDim) - if config.baselineLSTM and config.baselineCNN: - memory = tf.concat([vecQuestions, images], axis = -1) - memDim = questionDim + imageDim - elif config.baselineLSTM: - memory = vecQuestions - memDim = questionDim - else: # config.baselineCNN - memory = images - memDim = imageDim - - return memory, memDim - - ''' - Runs the MAC recurrent network to perform the reasoning process. - Initializes a MAC cell and runs netLength iterations. - - Currently it passes the question and knowledge base to the cell during - its creating, such that it doesn't need to interact with it through - inputs / outputs while running. The recurrent computation happens - by working iteratively over the hidden (control, memory) states. - - Args: - images: flattened image features. Used as the "Knowledge Base". - (Received by default model behavior from the Image Input Units). - [batchSize, H * W, memDim] - - vecQuestions: vector questions representations. - (Received by default model behavior from the Question Input Units - as the final RNN state). - [batchSize, ctrlDim] - - questionWords: question word embeddings. - [batchSize, questionLength, ctrlDim] - - questionCntxWords: question contextual words. - (Received by default model behavior from the Question Input Units - as the series of RNN output states). - [batchSize, questionLength, ctrlDim] - - questionLengths: question lengths. 
-
-    '''
-    Runs the MAC recurrent network to perform the reasoning process.
-    Initializes a MAC cell and runs netLength iterations.
-
-    Currently it passes the question and knowledge base to the cell during
-    its creation, so that it doesn't need to interact with them through
-    inputs / outputs while running. The recurrent computation happens
-    by working iteratively over the hidden (control, memory) states.
-
-    Args:
-        images: flattened image features. Used as the "Knowledge Base".
-        (Received by default model behavior from the Image Input Units).
-        [batchSize, H * W, memDim]
-
-        vecQuestions: vector question representations.
-        (Received by default model behavior from the Question Input Units
-        as the final RNN state).
-        [batchSize, ctrlDim]
-
-        questionWords: question word embeddings.
-        [batchSize, questionLength, ctrlDim]
-
-        questionCntxWords: question contextual words.
-        (Received by default model behavior from the Question Input Units
-        as the series of RNN output states).
-        [batchSize, questionLength, ctrlDim]
-
-        questionLengths: question lengths.
-        [batchSize]
-
-    Returns the final control state and memory state resulting from the network.
-    ([batchSize, ctrlDim], [batchSize, memDim])
-    '''
-    def MACnetwork(self, images, vecQuestions, questionWords, questionCntxWords,
-        questionLengths, name = "", reuse = None):
-
-        with tf.variable_scope("MACnetwork" + name, reuse = reuse):
-
-            self.macCell = MACCell(
-                vecQuestions = vecQuestions,
-                questionWords = questionWords,
-                questionCntxWords = questionCntxWords,
-                questionLengths = questionLengths,
-                knowledgeBase = images,
-                memoryDropout = self.dropouts["memory"],
-                readDropout = self.dropouts["read"],
-                writeDropout = self.dropouts["write"],
-                # qDropoutMAC = self.qDropoutMAC,
-                batchSize = self.batchSize,
-                train = self.train,
-                reuse = reuse)
-
-            state = self.macCell.zero_state(self.batchSize, tf.float32)
-
-            # inSeq = tf.unstack(inSeq, axis = 1)
-            none = tf.zeros((self.batchSize, 1), dtype = tf.float32)
-
-            # for i, inp in enumerate(inSeq):
-            for i in range(config.netLength):
-                self.macCell.iteration = i
-                # if config.unsharedCells:
-                #     with tf.variable_scope("iteration%d" % i):
-                #         macCell.myNameScope = "iteration%d" % i
-                _, state = self.macCell(none, state)
-                # else:
-                #     _, state = macCell(none, state)
-                #     macCell.reuse = True
-
-            # self.autoEncMMLoss = macCell.autoEncMMLossI
-            # inputSeqL = None
-            # _, lastOutputs = tf.nn.dynamic_rnn(macCell, inputSeq, # / static
-            #     sequence_length = inputSeqL,
-            #     initial_state = initialState,
-            #     swap_memory = True)
-
-            # self.postModules = None
-            # if (config.controlPostRNN or config.selfAttentionMod == "POST"): # may not work well with dlogits
-            #     self.postModules, _ = self.RNNLayer(cLogits, None, config.encDim, bi = False,
-            #         name = "decPostRNN", cellType = config.controlPostRNNmod)
-            #     if config.controlPostRNN:
-            #         logits = self.postModules
-            #     self.postModules = tf.unstack(self.postModules, axis = 1)
-
-            # self.autoEncCtrlLoss = tf.constant(0.0)
-            # if config.autoEncCtrl:
-            #     autoEncCtrlCellType = ("GRU" if config.autoEncCtrlGRU else "RNN")
-            #     autoEncCtrlinp = logits
-            #     _, autoEncHid = self.RNNLayer(autoEncCtrlinp, None, config.encDim,
-            #         bi = True, name = "autoEncCtrl", cellType = autoEncCtrlCellType)
-            #     self.autoEncCtrlLoss = (tf.nn.l2_loss(vecQuestions - autoEncHid)) / tf.to_float(self.batchSize)
-
-            finalControl = state.control
-            finalMemory = state.memory
-
-        return finalControl, finalMemory
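MACCell itself is defined in a separate module; the loop above only assumes it follows the usual TF RNNCell-style contract (zero_state plus a callable mapping (input, state) to (output, next state)). A toy stand-in, just to make that contract concrete (everything here is illustrative, not the repo's MACCell):

import collections
import tensorflow as tf  # TF 1.x

MACState = collections.namedtuple("MACState", ("control", "memory"))

class ToyMACCell(object):
    def __init__(self, ctrlDim, memDim):
        self.ctrlDim, self.memDim = ctrlDim, memDim
        self.iteration = 0  # set externally on each reasoning step, as above
    def zero_state(self, batchSize, dtype):
        return MACState(control = tf.zeros((batchSize, self.ctrlDim), dtype),
                        memory = tf.zeros((batchSize, self.memDim), dtype))
    def __call__(self, inputs, state):
        # a real cell would update control from the question words and
        # memory from the knowledge base here; this one passes state through
        return inputs, state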
-
-    '''
-    Output Unit (step 1): chooses the inputs to the output classifier.
-
-    By default the classifier input will be the final memory state of the MAC network.
-    If outQuestion is True, concatenates the question representation to that.
-    If outImage is True, concatenates the flattened image representation.
-
-    Args:
-        memory: (final) memory state of the MAC network.
-        [batchSize, memDim]
-
-        vecQuestions: question vector representation.
-        [batchSize, ctrlDim]
-
-        images: image features.
-        [batchSize, H, W, imageInDim]
-
-        imageInDim: images dimension.
-
-    Returns the resulting features and their dimension.
-    '''
-    def outputOp(self, memory, vecQuestions, images, imageInDim):
-        with tf.variable_scope("outputUnit"):
-            features = memory
-            dim = config.memDim
-
-            if config.outQuestion:
-                eVecQuestions = ops.linear(vecQuestions, config.ctrlDim, config.memDim, name = "outQuestion")
-                features, dim = ops.concat(features, eVecQuestions, config.memDim, mul = config.outQuestionMul)
-
-            if config.outImage:
-                images, imagesDim = ops.linearizeFeatures(images, self.H, self.W, self.imageInDim,
-                    outputDim = config.outImageDim)
-                images = ops.linear(images, config.memDim, config.outImageDim, name = "outImage")
-                features = tf.concat([features, images], axis = -1)
-                dim += config.outImageDim
-
-        return features, dim
-
-    '''
-    Output Unit (step 2): Computes the logits for the answers. Passes the features
-    through a fully-connected network to get the logits over the possible answers.
-    Optionally uses answer word embeddings in computing the logits (by default, it doesn't).
-
-    Args:
-        features: features used to compute the logits
-        [batchSize, inDim]
-
-        inDim: features dimension
-
-        aEmbeddings: word embeddings for the answer words, used when answerMod is not NON.
-        In that case, optionally computes the logits by a dot-product with the answer embeddings.
-
-    Returns the computed logits.
-    [batchSize, answerWordsNum]
-    '''
-    def classifier(self, features, inDim, aEmbeddings = None):
-        with tf.variable_scope("classifier"):
-            outDim = config.answerWordsNum
-            dims = [inDim] + config.outClassifierDims + [outDim]
-            if config.answerMod != "NON":
-                dims[-1] = config.wrdEmbDim
-
-            logits = ops.FCLayer(features, dims,
-                batchNorm = self.batchNorm if config.outputBN else None,
-                dropout = self.dropouts["output"])
-
-            if config.answerMod != "NON":
-                logits = tf.nn.dropout(logits, self.dropouts["output"])
-                interactions = ops.mul(aEmbeddings, logits, dims[-1], interMod = config.answerMod)
-                logits = ops.inter2logits(interactions, dims[-1], sumMod = "SUM")
-                logits += ops.getBias((outDim, ), "ans")
-
-            # answersWeights = tf.transpose(aEmbeddings)
-
-            # if config.answerMod == "BL":
-            #     Wans = ops.getWeight((dims[-1], config.wrdEmbDim), "ans")
-            #     logits = tf.matmul(logits, Wans)
-            # elif config.answerMod == "DIAG":
-            #     Wans = ops.getWeight((config.wrdEmbDim, ), "ans")
-            #     logits = logits * Wans
-
-            # logits = tf.matmul(logits, answersWeights)
-
-        return logits
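When answerMod is not NON, the final layer is sized to wrdEmbDim and the answer scores come from interacting the features with the answer embedding matrix (ops.mul / ops.inter2logits above). A minimal sketch of the plain dot-product variant (TF 1.x; answer_logits and the "ansBias" variable are illustrative, not the repo's helpers):

import tensorflow as tf  # TF 1.x

def answer_logits(features, ans_emb):
    # features: [batchSize, wrdEmbDim] classifier output in embedding space
    # ans_emb:  [answerWordsNum, wrdEmbDim] answer word embeddings
    logits = tf.matmul(features, ans_emb, transpose_b=True)  # [B, answerWordsNum]
    bias = tf.get_variable("ansBias", shape=(ans_emb.shape[0],),
                           initializer=tf.zeros_initializer())
    return logits + bias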
-
-    # def getTemp():
-    #     with tf.variable_scope("temperature"):
-    #         if config.tempParametric:
-    #             self.temperatureVar = tf.get_variable("temperature", shape = (),
-    #                 initializer = tf.constant_initializer(5), dtype = tf.float32)
-    #             temperature = tf.sigmoid(self.temperatureVar)
-    #         else:
-    #             temperature = config.temperature
-
-    #         if config.tempDynamic:
-    #             temperature *= self.tempAnnealRate
-
-    #         return temperature
-
-    # Computes the mean cross-entropy loss between the logits and the answers.
-    def addAnswerLossOp(self, logits, answers):
-        with tf.variable_scope("answerLoss"):
-            losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels = answers, logits = logits)
-            loss = tf.reduce_mean(losses)
-            self.answerLossList.append(loss)
-
-        return loss, losses
-
-    # Computes predictions (by finding the maximal logit value, corresponding to the highest
-    # probability) and the mean accuracy between predictions and answers.
-    def addPredOp(self, logits, answers):
-        with tf.variable_scope("pred"):
-            preds = tf.to_int32(tf.argmax(logits, axis = -1)) # tf.nn.softmax(
-            corrects = tf.equal(preds, answers)
-            correctNum = tf.reduce_sum(tf.to_int32(corrects))
-            acc = tf.reduce_mean(tf.to_float(corrects))
-            self.correctNumList.append(correctNum)
-            self.answerAccList.append(acc)
-
-        return preds, corrects, correctNum
-
-    # Creates the optimizer (Adam).
-    def addOptimizerOp(self):
-        with tf.variable_scope("trainAddOptimizer"):
-            self.globalStep = tf.Variable(0, dtype = tf.int32, trainable = False, name = "globalStep") # init to 0 every run?
-            optimizer = tf.train.AdamOptimizer(learning_rate = self.lr)
-
-        return optimizer
-
-    '''
-    Computes gradients for all variables, or a subset of them, based on the provided loss,
-    using the optimizer.
-    '''
-    def computeGradients(self, optimizer, loss, trainableVars = None): # tf.trainable_variables()
-        with tf.variable_scope("computeGradients"):
-            if config.trainSubset:
-                trainableVars = []
-                allVars = tf.trainable_variables()
-                for var in allVars:
-                    if any((s in var.name) for s in config.varSubset):
-                        trainableVars.append(var)
-
-            gradients_vars = optimizer.compute_gradients(loss, trainableVars)
-        return gradients_vars
-
-    '''
-    Applies the gradients. Optionally clips them, and updates the exponential moving
-    averages of the parameters.
-    '''
-    def addTrainingOp(self, optimizer, gradients_vars):
-        with tf.variable_scope("train"):
-            gradients, variables = zip(*gradients_vars)
-            norm = tf.global_norm(gradients)
-
-            # gradient clipping
-            if config.clipGradients:
-                clippedGradients, _ = tf.clip_by_global_norm(gradients, config.gradMaxNorm, use_norm = norm)
-                gradients_vars = zip(clippedGradients, variables)
-
-            # update ops (for batch norm) and the train op
-            updateOps = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
-            with tf.control_dependencies(updateOps):
-                train = optimizer.apply_gradients(gradients_vars, global_step = self.globalStep)
-
-            # exponential moving average
-            if config.useEMA:
-                ema = tf.train.ExponentialMovingAverage(decay = config.emaDecayRate)
-                maintainAveragesOp = ema.apply(tf.trainable_variables())
-
-                with tf.control_dependencies([train]):
-                    trainAndUpdateOp = tf.group(maintainAveragesOp)
-
-                train = trainAndUpdateOp
-
-                self.emaDict = ema.variables_to_restore()
-
-        return train, norm
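The EMA shadow variables maintained above end up in self.emaDict (via ema.variables_to_restore()), which is exactly the mapping a Saver needs in order to evaluate with the averaged weights instead of the raw ones. A small usage sketch (TF 1.x; restore_ema_weights and ckpt_path are illustrative):

import tensorflow as tf  # TF 1.x

def restore_ema_weights(sess, ema_dict, ckpt_path):
    # builds a Saver that maps each variable to its EMA shadow value,
    # then loads the shadow values in place of the raw weights
    saver = tf.train.Saver(ema_dict)
    saver.restore(sess, ckpt_path)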
-
-    # TODO (add back support for multi-gpu..)
-    def averageAcrossTowers(self, gpusNum):
-        self.lossAll = self.lossList[0]
-
-        self.answerLossAll = self.answerLossList[0]
-        self.correctNumAll = self.correctNumList[0]
-        self.answerAccAll = self.answerAccList[0]
-        self.predsAll = self.predsList[0]
-        self.gradientVarsAll = self.gradientVarsList[0]
-
-    def trim2DVectors(self, vectors, vectorsLengths):
-        maxLength = np.max(vectorsLengths)
-        return vectors[:,:maxLength]
-
-    def trimData(self, data):
-        data["question"] = self.trim2DVectors(data["question"], data["questionLength"])
-        return data
-
-    '''
-    Decodes a predicted answer id back to the corresponding answer word.
-    '''
-    def buildPredsList(self, prediction):
-        return self.answerDict.decodeId(prediction)
-
-    '''
-    Processes a batch of data with the model.
-
-    Args:
-        sess: TF session
-
-        data: data batch. A dictionary that contains numpy arrays for:
-        questions, questionLengths, answers.
-        See preprocess.py for further information on the batch structure.
-
-        images: batch of image features, as a numpy array. images["images"] contains
-        [batchSize, channels, h, w]
-
-        train: True to run the batch for training.
-
-        getAtt: True to return attention maps for the question and image (and optionally
-        self-attention and gate values).
-
-    Returns the decoded model predictions for the batch.
-    '''
-    def runBatch(self, sess, data, images, train, getAtt = False):
-        data = self.trimData(data)
-
-        predsOp = self.predsAll
-
-        time0 = time.time()
-        feed = self.createFeedDict(data, images, train)
-
-        time1 = time.time()
-        predsInfo = sess.run(
-            predsOp,
-            feed_dict = feed)
-        time2 = time.time()
-
-        predsList = self.buildPredsList(predsInfo[0])
-
-        return predsList
-
-    def build(self):
-        self.addPlaceholders()
-        self.optimizer = self.addOptimizerOp()
-
-        self.gradientVarsList = []
-        self.lossList = []
-
-        self.answerLossList = []
-        self.correctNumList = []
-        self.answerAccList = []
-        self.predsList = []
-
-        with tf.variable_scope("macModel"):
-            for i in range(config.gpusNum):
-                with tf.device("/gpu:{}".format(i)):
-                    with tf.name_scope("tower{}".format(i)) as scope:
-                        self.initTowerBatch(i, config.gpusNum, self.batchSizeAll)
-
-                        self.loss = tf.constant(0.0)
-
-                        # embed question words (and optionally answer words)
-                        questionWords, qEmbeddings, aEmbeddings = \
-                            self.embeddingsOp(self.questionsIndices, self.embeddingsInit)
-
-                        projWords = projQuestion = ((config.encDim != config.ctrlDim) or config.encProj)
-                        questionCntxWords, vecQuestions = self.encoder(questionWords,
-                            self.questionLengths, projWords, projQuestion, config.ctrlDim)
-
-                        # Image Input Unit (stem)
-                        imageFeatures = self.stem(self.images, self.imageInDim, config.memDim)
-
-                        # baseline model
-                        if config.useBaseline:
-                            output, dim = self.baseline(vecQuestions, config.ctrlDim,
-                                self.images, self.imageInDim, config.attDim)
-                        # MAC model
-                        else:
-                            # self.temperature = self.getTemp()
-
-                            finalControl, finalMemory = self.MACnetwork(imageFeatures, vecQuestions,
-                                questionWords, questionCntxWords, self.questionLengths)
-
-                            # Output Unit - step 1 (preparing classifier inputs)
-                            output, dim = self.outputOp(finalMemory, vecQuestions,
-                                self.images, self.imageInDim)
-
-                        # Output Unit - step 2 (classifier)
-                        logits = self.classifier(output, dim, aEmbeddings)
-
-                        # compute loss, predictions, accuracy
-                        answerLoss, self.losses = self.addAnswerLossOp(logits, self.answersIndices)
-                        self.preds, self.corrects, self.correctNum = self.addPredOp(logits, self.answersIndices)
-                        self.loss += answerLoss
-                        self.predsList.append(self.preds)
-
-                        self.lossList.append(self.loss)
-
-                        # compute gradients
-                        gradient_vars = self.computeGradients(self.optimizer, self.loss, trainableVars = None)
-                        self.gradientVarsList.append(gradient_vars)
-
-                        # reuse variables in next towers
-                        tf.get_variable_scope().reuse_variables()
-
-        self.averageAcrossTowers(config.gpusNum)
-
-        self.trainOp, self.gradNorm = self.addTrainingOp(self.optimizer, self.gradientVarsAll)
-        self.noOp = tf.no_op()
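build() constructs one sub-graph per GPU ("tower") under a shared variable scope and calls reuse_variables() so all towers share weights. A stripped-down sketch of that pattern (TF 1.x; model_fn is an illustrative placeholder for the per-tower graph construction above):

import tensorflow as tf  # TF 1.x

def build_towers(model_fn, num_gpus):
    tower_losses = []
    with tf.variable_scope("model"):
        for i in range(num_gpus):
            with tf.device("/gpu:{}".format(i)), tf.name_scope("tower{}".format(i)):
                tower_losses.append(model_fn())            # per-tower loss op
                tf.get_variable_scope().reuse_variables()  # share weights across towers
    return tower_losses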
name: "Bot 操作", - dsc: "Bot 操作", - event: "message", - rule: [ - { - reg: "^#(Bot|机器人)验证.+:.+$", - fnc: "Verify", - permission: "master", - }, - { - reg: "^#(Bot|机器人)(上|下)线.+$", - fnc: "Operate", - permission: "master", - } - ] - }) - } - - Verify() { - const data = { msg: this.e.msg.replace(/^#(Bot|机器人)验证/, "").trim().split(":") } - data.self_id = data.msg.shift() - data.msg = data.msg.join(":") - Bot.em(`verify.${data.self_id}`, data) - } - - Operate() { - const bot = Bot[this.e.msg.replace(/^#(Bot|机器人)(上|下)线/, "").trim()] - if (typeof bot != "object") { - this.reply("Bot 不存在", true) - return false - } - if (this.e.msg.includes("上线") && typeof bot.login == "function") { - this.reply("已发送上线操作", true) - bot.login() - } else if (this.e.msg.includes("下线") && typeof bot.logout == "function") { - this.reply("已发送下线操作", true) - bot.logout() - } else { - this.reply("暂不支持此操作", true) - } - } -} \ No newline at end of file diff --git a/spaces/CjangCjengh/Sanskrit-TTS/monotonic_align/core.c b/spaces/CjangCjengh/Sanskrit-TTS/monotonic_align/core.c deleted file mode 100644 index 78f6aff68257660702f0b0ad278757a9728e84d5..0000000000000000000000000000000000000000 --- a/spaces/CjangCjengh/Sanskrit-TTS/monotonic_align/core.c +++ /dev/null @@ -1,21608 +0,0 @@ -/* Generated by Cython 0.29.32 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "name": "monotonic_align.core", - "sources": [ - "core.pyx" - ] - }, - "module_name": "monotonic_align.core" -} -END: Cython Metadata */ - -#ifndef PY_SSIZE_T_CLEAN -#define PY_SSIZE_T_CLEAN -#endif /* PY_SSIZE_T_CLEAN */ -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.6+ or Python 3.3+. 
-#else -#define CYTHON_ABI "0_29_32" -#define CYTHON_HEX_VERSION 0x001D20F0 -#define CYTHON_FUTURE_DIVISION 0 -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #if PY_VERSION_HEX >= 0x02070000 - #define HAVE_LONG_LONG - #endif -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#ifdef PYPY_VERSION - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC (PYPY_VERSION_HEX >= 0x07030900) - #endif -#elif defined(PYSTON_VERSION) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define 
CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PY_NOGIL) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 1 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #ifndef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 - #define CYTHON_COMPILING_IN_NOGIL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #elif !defined(CYTHON_USE_PYTYPE_LOOKUP) - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #elif !defined(CYTHON_USE_PYLONG_INTERNALS) - #define CYTHON_USE_PYLONG_INTERNALS 1 - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 || PY_VERSION_HEX >= 0x030B00A2 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #if PY_VERSION_HEX >= 0x030B00A4 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #elif !defined(CYTHON_FAST_THREAD_STATE) - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL (PY_VERSION_HEX < 0x030A0000) - #endif - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000) - #endif - #ifndef 
CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1) - #endif - #ifndef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1) - #endif - #if PY_VERSION_HEX >= 0x030B00A4 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #elif !defined(CYTHON_USE_EXC_INFO_STACK) - #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3) - #endif - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 1 - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_MAJOR_VERSION < 3 - #include "longintrepr.h" - #endif - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR -# if defined(__cplusplus) - template void CYTHON_MAYBE_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int32 uint32_t; - #endif - #endif -#else - #include -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) && __cplusplus >= 201103L - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #elif __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__ ) && defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) - #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else 
- #define CYTHON_INLINE - #endif -#endif - -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) - #define Py_OptimizeFlag 0 -#endif -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) - #define __Pyx_DefaultClassType PyClass_Type -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" - #define __Pyx_DefaultClassType PyType_Type -#if PY_VERSION_HEX >= 0x030B00A1 - static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyObject *kwds=NULL, *argcount=NULL, *posonlyargcount=NULL, *kwonlyargcount=NULL; - PyObject *nlocals=NULL, *stacksize=NULL, *flags=NULL, *replace=NULL, *call_result=NULL, *empty=NULL; - const char *fn_cstr=NULL; - const char *name_cstr=NULL; - PyCodeObject* co=NULL; - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); - if (!(kwds=PyDict_New())) goto end; - if (!(argcount=PyLong_FromLong(a))) goto end; - if (PyDict_SetItemString(kwds, "co_argcount", argcount) != 0) goto end; - if (!(posonlyargcount=PyLong_FromLong(0))) goto end; - if (PyDict_SetItemString(kwds, "co_posonlyargcount", posonlyargcount) != 0) goto end; - if (!(kwonlyargcount=PyLong_FromLong(k))) goto end; - if (PyDict_SetItemString(kwds, "co_kwonlyargcount", kwonlyargcount) != 0) goto end; - if (!(nlocals=PyLong_FromLong(l))) goto end; - if (PyDict_SetItemString(kwds, "co_nlocals", nlocals) != 0) goto end; - if (!(stacksize=PyLong_FromLong(s))) goto end; - if (PyDict_SetItemString(kwds, "co_stacksize", stacksize) != 0) goto end; - if (!(flags=PyLong_FromLong(f))) goto end; - if (PyDict_SetItemString(kwds, "co_flags", flags) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_code", code) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_consts", c) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_names", n) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_varnames", v) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_freevars", fv) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_cellvars", cell) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_linetable", lnos) != 0) goto end; - if (!(fn_cstr=PyUnicode_AsUTF8AndSize(fn, NULL))) goto end; - if (!(name_cstr=PyUnicode_AsUTF8AndSize(name, NULL))) goto end; - if (!(co = PyCode_NewEmpty(fn_cstr, name_cstr, fline))) goto end; - if (!(replace = PyObject_GetAttrString((PyObject*)co, "replace"))) goto cleanup_code_too; - if (!(empty = PyTuple_New(0))) goto cleanup_code_too; // unfortunately __pyx_empty_tuple isn't available here - if (!(call_result = PyObject_Call(replace, empty, kwds))) goto cleanup_code_too; - Py_XDECREF((PyObject*)co); - co = (PyCodeObject*)call_result; - call_result = NULL; - if (0) { - cleanup_code_too: - Py_XDECREF((PyObject*)co); - co = NULL; - } - end: - Py_XDECREF(kwds); - Py_XDECREF(argcount); - Py_XDECREF(posonlyargcount); - Py_XDECREF(kwonlyargcount); - Py_XDECREF(nlocals); - Py_XDECREF(stacksize); - Py_XDECREF(replace); - Py_XDECREF(call_result); - Py_XDECREF(empty); - if (type) { - PyErr_Restore(type, value, traceback); - } - return co; - } -#else - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, 
name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif - #define __Pyx_DefaultClassType PyType_Type -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_FAST_PYCCALL -#define __Pyx_PyFastCFunction_Check(func)\ - ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))))) -#else -#define __Pyx_PyFastCFunction_Check(func) 0 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1 - #define PyMem_RawMalloc(n) PyMem_Malloc(n) - #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n) - #define PyMem_RawFree(p) PyMem_Free(p) -#endif -#if CYTHON_COMPILING_IN_PYSTON - #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} -static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define 
__Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -#else -#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name) -#endif -#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #if defined(PyUnicode_IS_READY) - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #else - #define __Pyx_PyUnicode_READY(op) (0) - #endif - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) - #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) - #if defined(PyUnicode_IS_READY) && defined(PyUnicode_GET_SIZE) - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length)) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #endif - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 
65535 : 1114111) - #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) -#else - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define 
PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define PyNumber_Int PyNumber_Long -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBoolObject PyLongObject -#endif -#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY - #ifndef PyUnicode_InternFromString - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) - #endif -#endif -#if PY_VERSION_HEX < 0x030200A4 - typedef long Py_hash_t; - #define __Pyx_PyInt_FromHash_t PyInt_FromLong - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsHash_t -#else - #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsSsize_t -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyMethod_New(func, self, klass) ((self) ? ((void)(klass), PyMethod_New(func, self)) : __Pyx_NewRef(func)) -#else - #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass) -#endif -#if CYTHON_USE_ASYNC_SLOTS - #if PY_VERSION_HEX >= 0x030500B1 - #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods - #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) - #else - #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) - #endif -#else - #define __Pyx_PyType_AsAsync(obj) NULL -#endif -#ifndef __Pyx_PyAsyncMethodsStruct - typedef struct { - unaryfunc am_await; - unaryfunc am_aiter; - unaryfunc am_anext; - } __Pyx_PyAsyncMethodsStruct; -#endif - -#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS) - #if !defined(_USE_MATH_DEFINES) - #define _USE_MATH_DEFINES - #endif -#endif -#include -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { - float value; - memset(&value, 0xFF, sizeof(value)); - return value; -} -#endif -#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#define __PYX_MARK_ERR_POS(f_index, lineno) \ - { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; } -#define __PYX_ERR(f_index, lineno, Ln_error) \ - { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifndef __PYX_EXTERN_C - #ifdef __cplusplus - #define __PYX_EXTERN_C extern "C" - #else - #define __PYX_EXTERN_C extern - #endif -#endif - -#define __PYX_HAVE__monotonic_align__core -#define __PYX_HAVE_API__monotonic_align__core -/* Early includes */ -#include "pythread.h" -#include -#include -#include -#include "pystate.h" -#ifdef _OPENMP -#include -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; - const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8) -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ - (sizeof(type) < sizeof(Py_ssize_t)) ||\ - (sizeof(type) > sizeof(Py_ssize_t) &&\ - likely(v < 
(type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX) &&\ - (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ - v == (type)PY_SSIZE_T_MIN))) ||\ - (sizeof(type) == sizeof(Py_ssize_t) &&\ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX))) ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L - #include - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) - #define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ? -value : value) -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) { - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define 
__Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x)) -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* __PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, 
(char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} -#endif -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ -static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } - -static PyObject *__pyx_m = NULL; -static PyObject *__pyx_d; -static PyObject *__pyx_b; -static PyObject *__pyx_cython_runtime = NULL; -static PyObject *__pyx_empty_tuple; -static PyObject *__pyx_empty_bytes; -static PyObject *__pyx_empty_unicode; -static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * __pyx_cfilenm= __FILE__; -static const char *__pyx_filename; - - -static const char *__pyx_f[] = { - "core.pyx", - "stringsource", -}; -/* NoFastGil.proto */ -#define __Pyx_PyGILState_Ensure PyGILState_Ensure -#define __Pyx_PyGILState_Release PyGILState_Release -#define __Pyx_FastGIL_Remember() -#define __Pyx_FastGIL_Forget() -#define __Pyx_FastGilFuncInit() - -/* MemviewSliceStruct.proto */ -struct __pyx_memoryview_obj; -typedef struct { - struct __pyx_memoryview_obj *memview; - char *data; - Py_ssize_t shape[8]; - Py_ssize_t strides[8]; - Py_ssize_t suboffsets[8]; -} __Pyx_memviewslice; -#define __Pyx_MemoryView_Len(m) (m.shape[0]) - -/* Atomics.proto */ -#include -#ifndef CYTHON_ATOMICS - #define CYTHON_ATOMICS 1 -#endif -#define __PYX_CYTHON_ATOMICS_ENABLED() CYTHON_ATOMICS -#define __pyx_atomic_int_type int -#if CYTHON_ATOMICS && (__GNUC__ >= 5 || (__GNUC__ == 4 &&\ - (__GNUC_MINOR__ > 1 ||\ - (__GNUC_MINOR__ == 1 && __GNUC_PATCHLEVEL__ >= 2)))) - #define __pyx_atomic_incr_aligned(value) __sync_fetch_and_add(value, 1) - #define __pyx_atomic_decr_aligned(value) __sync_fetch_and_sub(value, 1) - #ifdef __PYX_DEBUG_ATOMICS - #warning "Using GNU atomics" - #endif -#elif CYTHON_ATOMICS && defined(_MSC_VER) && CYTHON_COMPILING_IN_NOGIL - #include - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type long - #pragma intrinsic (_InterlockedExchangeAdd) - #define __pyx_atomic_incr_aligned(value) _InterlockedExchangeAdd(value, 1) - #define __pyx_atomic_decr_aligned(value) _InterlockedExchangeAdd(value, -1) - #ifdef __PYX_DEBUG_ATOMICS - #pragma message ("Using MSVC atomics") - #endif -#else - #undef CYTHON_ATOMICS - #define CYTHON_ATOMICS 0 - #ifdef __PYX_DEBUG_ATOMICS - #warning "Not using atomics" - #endif -#endif -typedef volatile __pyx_atomic_int_type __pyx_atomic_int; -#if CYTHON_ATOMICS - #define __pyx_add_acquisition_count(memview)\ - __pyx_atomic_incr_aligned(__pyx_get_slice_count_pointer(memview)) - #define __pyx_sub_acquisition_count(memview)\ - __pyx_atomic_decr_aligned(__pyx_get_slice_count_pointer(memview)) -#else - #define __pyx_add_acquisition_count(memview)\ - __pyx_add_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) - #define __pyx_sub_acquisition_count(memview)\ - 
__pyx_sub_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) -#endif - -/* ForceInitThreads.proto */ -#ifndef __PYX_FORCE_INIT_THREADS - #define __PYX_FORCE_INIT_THREADS 0 -#endif - -/* BufferFormatStructs.proto */ -#define IS_UNSIGNED(type) (((type) -1) > 0) -struct __Pyx_StructField_; -#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0) -typedef struct { - const char* name; - struct __Pyx_StructField_* fields; - size_t size; - size_t arraysize[8]; - int ndim; - char typegroup; - char is_unsigned; - int flags; -} __Pyx_TypeInfo; -typedef struct __Pyx_StructField_ { - __Pyx_TypeInfo* type; - const char* name; - size_t offset; -} __Pyx_StructField; -typedef struct { - __Pyx_StructField* field; - size_t parent_offset; -} __Pyx_BufFmt_StackElem; -typedef struct { - __Pyx_StructField root; - __Pyx_BufFmt_StackElem* head; - size_t fmt_offset; - size_t new_count, enc_count; - size_t struct_alignment; - int is_complex; - char enc_type; - char new_packmode; - char enc_packmode; - char is_valid_array; -} __Pyx_BufFmt_Context; - - -/*--- Type declarations ---*/ -struct __pyx_array_obj; -struct __pyx_MemviewEnum_obj; -struct __pyx_memoryview_obj; -struct __pyx_memoryviewslice_obj; -struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each; - -/* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ -struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each { - int __pyx_n; - float max_neg_val; -}; - -/* "View.MemoryView":106 - * - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ -struct __pyx_array_obj { - PyObject_HEAD - struct __pyx_vtabstruct_array *__pyx_vtab; - char *data; - Py_ssize_t len; - char *format; - int ndim; - Py_ssize_t *_shape; - Py_ssize_t *_strides; - Py_ssize_t itemsize; - PyObject *mode; - PyObject *_format; - void (*callback_free_data)(void *); - int free_data; - int dtype_is_object; -}; - - -/* "View.MemoryView":280 - * - * @cname('__pyx_MemviewEnum') - * cdef class Enum(object): # <<<<<<<<<<<<<< - * cdef object name - * def __init__(self, name): - */ -struct __pyx_MemviewEnum_obj { - PyObject_HEAD - PyObject *name; -}; - - -/* "View.MemoryView":331 - * - * @cname('__pyx_memoryview') - * cdef class memoryview(object): # <<<<<<<<<<<<<< - * - * cdef object obj - */ -struct __pyx_memoryview_obj { - PyObject_HEAD - struct __pyx_vtabstruct_memoryview *__pyx_vtab; - PyObject *obj; - PyObject *_size; - PyObject *_array_interface; - PyThread_type_lock lock; - __pyx_atomic_int acquisition_count[2]; - __pyx_atomic_int *acquisition_count_aligned_p; - Py_buffer view; - int flags; - int dtype_is_object; - __Pyx_TypeInfo *typeinfo; -}; - - -/* "View.MemoryView":967 - * - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ -struct __pyx_memoryviewslice_obj { - struct __pyx_memoryview_obj __pyx_base; - __Pyx_memviewslice from_slice; - PyObject *from_object; - PyObject *(*to_object_func)(char *); - int (*to_dtype_func)(char *, PyObject *); -}; - - - -/* "View.MemoryView":106 - * - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ - -struct __pyx_vtabstruct_array { - PyObject *(*get_memview)(struct __pyx_array_obj *); -}; -static struct __pyx_vtabstruct_array *__pyx_vtabptr_array; - 
- -/* "View.MemoryView":331 - * - * @cname('__pyx_memoryview') - * cdef class memoryview(object): # <<<<<<<<<<<<<< - * - * cdef object obj - */ - -struct __pyx_vtabstruct_memoryview { - char *(*get_item_pointer)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*is_slice)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_slice_assignment)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject *(*setitem_slice_assign_scalar)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_indexed)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject *(*convert_item_to_object)(struct __pyx_memoryview_obj *, char *); - PyObject *(*assign_item_from_object)(struct __pyx_memoryview_obj *, char *, PyObject *); -}; -static struct __pyx_vtabstruct_memoryview *__pyx_vtabptr_memoryview; - - -/* "View.MemoryView":967 - * - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ - -struct __pyx_vtabstruct__memoryviewslice { - struct __pyx_vtabstruct_memoryview __pyx_base; -}; -static struct __pyx_vtabstruct__memoryviewslice *__pyx_vtabptr__memoryviewslice; - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, int); - void (*DECREF)(void*, PyObject*, int); - void (*GOTREF)(void*, PyObject*, int); - void (*GIVEREF)(void*, PyObject*, int); - void* (*SetupContext)(const char*, int, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) -#endif - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) 
Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* MemviewSliceInit.proto */ -#define __Pyx_BUF_MAX_NDIMS %(BUF_MAX_NDIMS)d -#define __Pyx_MEMVIEW_DIRECT 1 -#define __Pyx_MEMVIEW_PTR 2 -#define __Pyx_MEMVIEW_FULL 4 -#define __Pyx_MEMVIEW_CONTIG 8 -#define __Pyx_MEMVIEW_STRIDED 16 -#define __Pyx_MEMVIEW_FOLLOW 32 -#define __Pyx_IS_C_CONTIG 1 -#define __Pyx_IS_F_CONTIG 2 -static int __Pyx_init_memviewslice( - struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference); -static CYTHON_INLINE int __pyx_add_acquisition_count_locked( - __pyx_atomic_int *acquisition_count, PyThread_type_lock lock); -static CYTHON_INLINE int __pyx_sub_acquisition_count_locked( - __pyx_atomic_int *acquisition_count, PyThread_type_lock lock); -#define __pyx_get_slice_count_pointer(memview) (memview->acquisition_count_aligned_p) -#define __pyx_get_slice_count(memview) (*__pyx_get_slice_count_pointer(memview)) -#define __PYX_INC_MEMVIEW(slice, have_gil) __Pyx_INC_MEMVIEW(slice, have_gil, __LINE__) -#define __PYX_XDEC_MEMVIEW(slice, have_gil) __Pyx_XDEC_MEMVIEW(slice, have_gil, __LINE__) -static CYTHON_INLINE void __Pyx_INC_MEMVIEW(__Pyx_memviewslice *, int, int); -static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *, int, int); - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ - const char* function_name); - -/* None.proto */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname); - -/* ArgTypeTest.proto */ -#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\ - ((likely((Py_TYPE(obj) == type) | (none_allowed && (obj == Py_None)))) ? 
1 :\ - __Pyx__ArgTypeTest(obj, type, name, exact)) -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact); - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* PyCFunctionFastCall.proto */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs); -#else -#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL) -#endif - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#else -#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs) -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif -#if CYTHON_FAST_PYCALL - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - 
#define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif - #define __Pxy_PyFrame_Initialize_Offsets()\ - ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif // CYTHON_FAST_PYCALL -#endif - -/* PyObjectCall2Args.proto */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2); - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* IncludeStringH.proto */ -#include <string.h> - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* StrEquals.proto */ -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals -#else -#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals -#endif - -/* DivInt[Py_ssize_t].proto */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t); - -/* UnaryNegOverflows.proto */ -#define UNARY_NEG_WOULD_OVERFLOW(x)\ - (((x) < 0) & ((unsigned long)(x) == 0-(unsigned long)(x))) - -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *); /*proto*/ -/* GetAttr.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *); - -/* GetItemInt.proto */ -#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ - (is_list ? 
(PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ - __Pyx_GetItemInt_Generic(o, to_py_func(i)))) -#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound, int boundscheck); - -/* ObjectGetItem.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key); -#else -#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key) -#endif - -/* decode_c_string_utf16.proto */ -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = 0; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = -1; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = 1; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} - -/* decode_c_string.proto */ -static CYTHON_INLINE PyObject* __Pyx_decode_c_string( - const char* cstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)); - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* GetAttr3.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *, PyObject *, PyObject *); - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static 
CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -#define __Pyx_GetModuleGlobalNameUncached(var, name) {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* RaiseNoneIterError.proto */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void); - -/* ExtTypeTest.proto */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) -#endif - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* SwapException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static CYTHON_INLINE void 
__Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -/* ListCompAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len)) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x) -#endif - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2)) -#endif - -/* ListExtend.proto */ -static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) { -#if CYTHON_COMPILING_IN_CPYTHON - PyObject* none = _PyList_Extend((PyListObject*)L, v); - if (unlikely(!none)) - return -1; - Py_DECREF(none); - return 0; -#else - return PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v); -#endif -} - -/* ListAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_PyList_Append(L,x) PyList_Append(L,x) -#endif - -/* DivInt[long].proto */ -static CYTHON_INLINE long __Pyx_div_long(long, long); - -/* PySequenceContains.proto */ -static CYTHON_INLINE int __Pyx_PySequence_ContainsTF(PyObject* item, PyObject* seq, int eq) { - int result = PySequence_Contains(seq, item); - return unlikely(result < 0) ? 
result : (result == (eq == Py_EQ)); -} - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* HasAttr.proto */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *, PyObject *); - -/* PyObject_GenericGetAttrNoDict.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr -#endif - -/* PyObject_GenericGetAttr.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr -#endif - -/* SetVTable.proto */ -static int __Pyx_SetVtable(PyObject *dict, void *vtable); - -/* PyObjectGetAttrStrNoError.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name); - -/* SetupReduce.proto */ -static int __Pyx_setup_reduce(PyObject* type_obj); - -/* CLineInTraceback.proto */ -#ifdef CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -#if PY_MAJOR_VERSION < 3 - static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags); - static void __Pyx_ReleaseBuffer(Py_buffer *view); -#else - #define __Pyx_GetBuffer PyObject_GetBuffer - #define __Pyx_ReleaseBuffer PyBuffer_Release -#endif - - -/* BufferStructDeclare.proto */ -typedef struct { - Py_ssize_t shape, strides, suboffsets; -} __Pyx_Buf_DimInfo; -typedef struct { - size_t refcount; - Py_buffer pybuffer; -} __Pyx_Buffer; -typedef struct { - __Pyx_Buffer *rcbuffer; - char *data; - __Pyx_Buf_DimInfo diminfo[8]; -} __Pyx_LocalBuf_ND; - -/* MemviewSliceIsContig.proto */ -static int __pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim); - -/* OverlappingSlices.proto */ -static int __pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize); - -/* Capsule.proto */ -static CYTHON_INLINE PyObject *__pyx_capsule_create(void *p, const char *sig); - -/* IsLittleEndian.proto */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void); - -/* BufferFormatCheck.proto */ -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts); -static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type); - -/* TypeInfoCompare.proto */ -static int __pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b); - -/* MemviewSliceValidateAndInit.proto */ -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int 
c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *, int writable_flag); - -/* GCCDiagnostics.proto */ -#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) -#define __Pyx_HAS_GCC_DIAGNOSTIC -#endif - -/* MemviewSliceCopyTemplate.proto */ -static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *); - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(void); - -/* InitStrings.proto */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); - -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self); /* proto*/ -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto*/ -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ - -/* Module declarations from 'cython.view' */ - -/* Module declarations from 'cython' */ - -/* Module declarations from 'monotonic_align.core' */ -static PyTypeObject *__pyx_array_type = 0; -static PyTypeObject *__pyx_MemviewEnum_type = 0; -static PyTypeObject *__pyx_memoryview_type = 0; -static PyTypeObject *__pyx_memoryviewslice_type = 0; -static PyObject *generic = 0; -static PyObject *strided = 0; -static PyObject *indirect = 0; -static PyObject *contiguous = 0; -static PyObject 
*indirect_contiguous = 0; -static int __pyx_memoryview_thread_locks_used; -static PyThread_type_lock __pyx_memoryview_thread_locks[8]; -static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice, __Pyx_memviewslice, int, int, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args); /*proto*/ -static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, int __pyx_skip_dispatch); /*proto*/ -static struct __pyx_array_obj *__pyx_array_new(PyObject *, Py_ssize_t, char *, char *, char *); /*proto*/ -static void *__pyx_align_pointer(void *, size_t); /*proto*/ -static PyObject *__pyx_memoryview_new(PyObject *, int, int, __Pyx_TypeInfo *); /*proto*/ -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *); /*proto*/ -static PyObject *_unellipsify(PyObject *, int); /*proto*/ -static PyObject *assert_direct_dimensions(Py_ssize_t *, int); /*proto*/ -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *, PyObject *); /*proto*/ -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int, int); /*proto*/ -static char *__pyx_pybuffer_index(Py_buffer *, char *, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memslice_transpose(__Pyx_memviewslice *); /*proto*/ -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice, int, PyObject *(*)(char *), int (*)(char *, PyObject *), int); /*proto*/ -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static Py_ssize_t abs_py_ssize_t(Py_ssize_t); /*proto*/ -static char __pyx_get_best_slice_order(__Pyx_memviewslice *, int); /*proto*/ -static void _copy_strided_to_strided(char *, Py_ssize_t *, char *, Py_ssize_t *, Py_ssize_t *, Py_ssize_t *, int, size_t); /*proto*/ -static void copy_strided_to_strided(__Pyx_memviewslice *, __Pyx_memviewslice *, int, size_t); /*proto*/ -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *, int); /*proto*/ -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *, Py_ssize_t *, Py_ssize_t, int, char); /*proto*/ -static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *, __Pyx_memviewslice *, char, int); /*proto*/ -static int __pyx_memoryview_err_extents(int, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memoryview_err_dim(PyObject *, char *, int); /*proto*/ -static int __pyx_memoryview_err(PyObject *, char *); /*proto*/ -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice, __Pyx_memviewslice, int, int, int); /*proto*/ -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *, int, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *, int, 
size_t, void *, int); /*proto*/ -static void __pyx_memoryview__slice_assign_scalar(char *, Py_ssize_t *, Py_ssize_t *, int, size_t, void *); /*proto*/ -static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *, PyObject *); /*proto*/ -static __Pyx_TypeInfo __Pyx_TypeInfo_int = { "int", NULL, sizeof(int), { 0 }, 0, IS_UNSIGNED(int) ? 'U' : 'I', IS_UNSIGNED(int), 0 }; -static __Pyx_TypeInfo __Pyx_TypeInfo_float = { "float", NULL, sizeof(float), { 0 }, 0, 'R', 0, 0 }; -#define __Pyx_MODULE_NAME "monotonic_align.core" -extern int __pyx_module_is_main_monotonic_align__core; -int __pyx_module_is_main_monotonic_align__core = 0; - -/* Implementation of 'monotonic_align.core' */ -static PyObject *__pyx_builtin_range; -static PyObject *__pyx_builtin_ValueError; -static PyObject *__pyx_builtin_MemoryError; -static PyObject *__pyx_builtin_enumerate; -static PyObject *__pyx_builtin_TypeError; -static PyObject *__pyx_builtin_Ellipsis; -static PyObject *__pyx_builtin_id; -static PyObject *__pyx_builtin_IndexError; -static const char __pyx_k_O[] = "O"; -static const char __pyx_k_c[] = "c"; -static const char __pyx_k_id[] = "id"; -static const char __pyx_k_new[] = "__new__"; -static const char __pyx_k_obj[] = "obj"; -static const char __pyx_k_base[] = "base"; -static const char __pyx_k_dict[] = "__dict__"; -static const char __pyx_k_main[] = "__main__"; -static const char __pyx_k_mode[] = "mode"; -static const char __pyx_k_name[] = "name"; -static const char __pyx_k_ndim[] = "ndim"; -static const char __pyx_k_pack[] = "pack"; -static const char __pyx_k_size[] = "size"; -static const char __pyx_k_step[] = "step"; -static const char __pyx_k_stop[] = "stop"; -static const char __pyx_k_t_xs[] = "t_xs"; -static const char __pyx_k_t_ys[] = "t_ys"; -static const char __pyx_k_test[] = "__test__"; -static const char __pyx_k_ASCII[] = "ASCII"; -static const char __pyx_k_class[] = "__class__"; -static const char __pyx_k_error[] = "error"; -static const char __pyx_k_flags[] = "flags"; -static const char __pyx_k_paths[] = "paths"; -static const char __pyx_k_range[] = "range"; -static const char __pyx_k_shape[] = "shape"; -static const char __pyx_k_start[] = "start"; -static const char __pyx_k_encode[] = "encode"; -static const char __pyx_k_format[] = "format"; -static const char __pyx_k_import[] = "__import__"; -static const char __pyx_k_name_2[] = "__name__"; -static const char __pyx_k_pickle[] = "pickle"; -static const char __pyx_k_reduce[] = "__reduce__"; -static const char __pyx_k_struct[] = "struct"; -static const char __pyx_k_unpack[] = "unpack"; -static const char __pyx_k_update[] = "update"; -static const char __pyx_k_values[] = "values"; -static const char __pyx_k_fortran[] = "fortran"; -static const char __pyx_k_memview[] = "memview"; -static const char __pyx_k_Ellipsis[] = "Ellipsis"; -static const char __pyx_k_getstate[] = "__getstate__"; -static const char __pyx_k_itemsize[] = "itemsize"; -static const char __pyx_k_pyx_type[] = "__pyx_type"; -static const char __pyx_k_setstate[] = "__setstate__"; -static const char __pyx_k_TypeError[] = "TypeError"; -static const char __pyx_k_enumerate[] = "enumerate"; -static const char __pyx_k_pyx_state[] = "__pyx_state"; -static const char __pyx_k_reduce_ex[] = "__reduce_ex__"; -static const char __pyx_k_IndexError[] = "IndexError"; -static const char __pyx_k_ValueError[] = "ValueError"; -static const char __pyx_k_pyx_result[] = "__pyx_result"; -static const char __pyx_k_pyx_vtable[] = "__pyx_vtable__"; -static const char __pyx_k_MemoryError[] = 
"MemoryError"; -static const char __pyx_k_PickleError[] = "PickleError"; -static const char __pyx_k_pyx_checksum[] = "__pyx_checksum"; -static const char __pyx_k_stringsource[] = "stringsource"; -static const char __pyx_k_pyx_getbuffer[] = "__pyx_getbuffer"; -static const char __pyx_k_reduce_cython[] = "__reduce_cython__"; -static const char __pyx_k_View_MemoryView[] = "View.MemoryView"; -static const char __pyx_k_allocate_buffer[] = "allocate_buffer"; -static const char __pyx_k_dtype_is_object[] = "dtype_is_object"; -static const char __pyx_k_pyx_PickleError[] = "__pyx_PickleError"; -static const char __pyx_k_setstate_cython[] = "__setstate_cython__"; -static const char __pyx_k_pyx_unpickle_Enum[] = "__pyx_unpickle_Enum"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_strided_and_direct[] = ""; -static const char __pyx_k_strided_and_indirect[] = ""; -static const char __pyx_k_contiguous_and_direct[] = ""; -static const char __pyx_k_MemoryView_of_r_object[] = ""; -static const char __pyx_k_MemoryView_of_r_at_0x_x[] = ""; -static const char __pyx_k_contiguous_and_indirect[] = ""; -static const char __pyx_k_Cannot_index_with_type_s[] = "Cannot index with type '%s'"; -static const char __pyx_k_Invalid_shape_in_axis_d_d[] = "Invalid shape in axis %d: %d."; -static const char __pyx_k_itemsize_0_for_cython_array[] = "itemsize <= 0 for cython.array"; -static const char __pyx_k_unable_to_allocate_array_data[] = "unable to allocate array data."; -static const char __pyx_k_strided_and_direct_or_indirect[] = ""; -static const char __pyx_k_Buffer_view_does_not_expose_stri[] = "Buffer view does not expose strides"; -static const char __pyx_k_Can_only_create_a_buffer_that_is[] = "Can only create a buffer that is contiguous in memory."; -static const char __pyx_k_Cannot_assign_to_read_only_memor[] = "Cannot assign to read-only memoryview"; -static const char __pyx_k_Cannot_create_writable_memory_vi[] = "Cannot create writable memory view from read-only memoryview"; -static const char __pyx_k_Empty_shape_tuple_for_cython_arr[] = "Empty shape tuple for cython.array"; -static const char __pyx_k_Incompatible_checksums_0x_x_vs_0[] = "Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))"; -static const char __pyx_k_Indirect_dimensions_not_supporte[] = "Indirect dimensions not supported"; -static const char __pyx_k_Invalid_mode_expected_c_or_fortr[] = "Invalid mode, expected 'c' or 'fortran', got %s"; -static const char __pyx_k_Out_of_bounds_on_buffer_access_a[] = "Out of bounds on buffer access (axis %d)"; -static const char __pyx_k_Unable_to_convert_item_to_object[] = "Unable to convert item to object"; -static const char __pyx_k_got_differing_extents_in_dimensi[] = "got differing extents in dimension %d (got %d and %d)"; -static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__"; -static const char __pyx_k_unable_to_allocate_shape_and_str[] = "unable to allocate shape and strides."; -static PyObject *__pyx_n_s_ASCII; -static PyObject *__pyx_kp_s_Buffer_view_does_not_expose_stri; -static PyObject *__pyx_kp_s_Can_only_create_a_buffer_that_is; -static PyObject *__pyx_kp_s_Cannot_assign_to_read_only_memor; -static PyObject *__pyx_kp_s_Cannot_create_writable_memory_vi; -static PyObject *__pyx_kp_s_Cannot_index_with_type_s; -static PyObject *__pyx_n_s_Ellipsis; -static PyObject *__pyx_kp_s_Empty_shape_tuple_for_cython_arr; -static PyObject 
*__pyx_kp_s_Incompatible_checksums_0x_x_vs_0; -static PyObject *__pyx_n_s_IndexError; -static PyObject *__pyx_kp_s_Indirect_dimensions_not_supporte; -static PyObject *__pyx_kp_s_Invalid_mode_expected_c_or_fortr; -static PyObject *__pyx_kp_s_Invalid_shape_in_axis_d_d; -static PyObject *__pyx_n_s_MemoryError; -static PyObject *__pyx_kp_s_MemoryView_of_r_at_0x_x; -static PyObject *__pyx_kp_s_MemoryView_of_r_object; -static PyObject *__pyx_n_b_O; -static PyObject *__pyx_kp_s_Out_of_bounds_on_buffer_access_a; -static PyObject *__pyx_n_s_PickleError; -static PyObject *__pyx_n_s_TypeError; -static PyObject *__pyx_kp_s_Unable_to_convert_item_to_object; -static PyObject *__pyx_n_s_ValueError; -static PyObject *__pyx_n_s_View_MemoryView; -static PyObject *__pyx_n_s_allocate_buffer; -static PyObject *__pyx_n_s_base; -static PyObject *__pyx_n_s_c; -static PyObject *__pyx_n_u_c; -static PyObject *__pyx_n_s_class; -static PyObject *__pyx_n_s_cline_in_traceback; -static PyObject *__pyx_kp_s_contiguous_and_direct; -static PyObject *__pyx_kp_s_contiguous_and_indirect; -static PyObject *__pyx_n_s_dict; -static PyObject *__pyx_n_s_dtype_is_object; -static PyObject *__pyx_n_s_encode; -static PyObject *__pyx_n_s_enumerate; -static PyObject *__pyx_n_s_error; -static PyObject *__pyx_n_s_flags; -static PyObject *__pyx_n_s_format; -static PyObject *__pyx_n_s_fortran; -static PyObject *__pyx_n_u_fortran; -static PyObject *__pyx_n_s_getstate; -static PyObject *__pyx_kp_s_got_differing_extents_in_dimensi; -static PyObject *__pyx_n_s_id; -static PyObject *__pyx_n_s_import; -static PyObject *__pyx_n_s_itemsize; -static PyObject *__pyx_kp_s_itemsize_0_for_cython_array; -static PyObject *__pyx_n_s_main; -static PyObject *__pyx_n_s_memview; -static PyObject *__pyx_n_s_mode; -static PyObject *__pyx_n_s_name; -static PyObject *__pyx_n_s_name_2; -static PyObject *__pyx_n_s_ndim; -static PyObject *__pyx_n_s_new; -static PyObject *__pyx_kp_s_no_default___reduce___due_to_non; -static PyObject *__pyx_n_s_obj; -static PyObject *__pyx_n_s_pack; -static PyObject *__pyx_n_s_paths; -static PyObject *__pyx_n_s_pickle; -static PyObject *__pyx_n_s_pyx_PickleError; -static PyObject *__pyx_n_s_pyx_checksum; -static PyObject *__pyx_n_s_pyx_getbuffer; -static PyObject *__pyx_n_s_pyx_result; -static PyObject *__pyx_n_s_pyx_state; -static PyObject *__pyx_n_s_pyx_type; -static PyObject *__pyx_n_s_pyx_unpickle_Enum; -static PyObject *__pyx_n_s_pyx_vtable; -static PyObject *__pyx_n_s_range; -static PyObject *__pyx_n_s_reduce; -static PyObject *__pyx_n_s_reduce_cython; -static PyObject *__pyx_n_s_reduce_ex; -static PyObject *__pyx_n_s_setstate; -static PyObject *__pyx_n_s_setstate_cython; -static PyObject *__pyx_n_s_shape; -static PyObject *__pyx_n_s_size; -static PyObject *__pyx_n_s_start; -static PyObject *__pyx_n_s_step; -static PyObject *__pyx_n_s_stop; -static PyObject *__pyx_kp_s_strided_and_direct; -static PyObject *__pyx_kp_s_strided_and_direct_or_indirect; -static PyObject *__pyx_kp_s_strided_and_indirect; -static PyObject *__pyx_kp_s_stringsource; -static PyObject *__pyx_n_s_struct; -static PyObject *__pyx_n_s_t_xs; -static PyObject *__pyx_n_s_t_ys; -static PyObject *__pyx_n_s_test; -static PyObject *__pyx_kp_s_unable_to_allocate_array_data; -static PyObject *__pyx_kp_s_unable_to_allocate_shape_and_str; -static PyObject *__pyx_n_s_unpack; -static PyObject *__pyx_n_s_update; -static PyObject *__pyx_n_s_values; -static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice 
__pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /* proto */ -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name); /* proto */ -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object); /* proto */ -static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct 
__pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_int_0; -static PyObject *__pyx_int_1; -static PyObject *__pyx_int_112105877; -static PyObject 
*__pyx_int_136983863; -static PyObject *__pyx_int_184977713; -static PyObject *__pyx_int_neg_1; -static float __pyx_k_; -static PyObject *__pyx_tuple__2; -static PyObject *__pyx_tuple__3; -static PyObject *__pyx_tuple__4; -static PyObject *__pyx_tuple__5; -static PyObject *__pyx_tuple__6; -static PyObject *__pyx_tuple__7; -static PyObject *__pyx_tuple__8; -static PyObject *__pyx_tuple__9; -static PyObject *__pyx_slice__16; -static PyObject *__pyx_tuple__10; -static PyObject *__pyx_tuple__11; -static PyObject *__pyx_tuple__12; -static PyObject *__pyx_tuple__13; -static PyObject *__pyx_tuple__14; -static PyObject *__pyx_tuple__15; -static PyObject *__pyx_tuple__17; -static PyObject *__pyx_tuple__18; -static PyObject *__pyx_tuple__19; -static PyObject *__pyx_tuple__20; -static PyObject *__pyx_tuple__21; -static PyObject *__pyx_tuple__22; -static PyObject *__pyx_tuple__23; -static PyObject *__pyx_tuple__24; -static PyObject *__pyx_tuple__25; -static PyObject *__pyx_tuple__26; -static PyObject *__pyx_codeobj__27; -/* Late includes */ - -/* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - -static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice __pyx_v_path, __Pyx_memviewslice __pyx_v_value, int __pyx_v_t_y, int __pyx_v_t_x, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args) { - float __pyx_v_max_neg_val = __pyx_k_; - int __pyx_v_x; - int __pyx_v_y; - float __pyx_v_v_prev; - float __pyx_v_v_cur; - int __pyx_v_index; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - long __pyx_t_4; - int __pyx_t_5; - long __pyx_t_6; - long __pyx_t_7; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - Py_ssize_t __pyx_t_10; - float __pyx_t_11; - float __pyx_t_12; - float __pyx_t_13; - int __pyx_t_14; - Py_ssize_t __pyx_t_15; - Py_ssize_t __pyx_t_16; - if (__pyx_optional_args) { - if (__pyx_optional_args->__pyx_n > 0) { - __pyx_v_max_neg_val = __pyx_optional_args->max_neg_val; - } - } - - /* "monotonic_align/core.pyx":13 - * cdef float v_cur - * cdef float tmp - * cdef int index = t_x - 1 # <<<<<<<<<<<<<< - * - * for y in range(t_y): - */ - __pyx_v_index = (__pyx_v_t_x - 1); - - /* "monotonic_align/core.pyx":15 - * cdef int index = t_x - 1 - * - * for y in range(t_y): # <<<<<<<<<<<<<< - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - */ - __pyx_t_1 = __pyx_v_t_y; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_y = __pyx_t_3; - - /* "monotonic_align/core.pyx":16 - * - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): # <<<<<<<<<<<<<< - * if x == y: - * v_cur = max_neg_val - */ - __pyx_t_4 = (__pyx_v_y + 1); - __pyx_t_5 = __pyx_v_t_x; - if (((__pyx_t_4 < __pyx_t_5) != 0)) { - __pyx_t_6 = __pyx_t_4; - } else { - __pyx_t_6 = __pyx_t_5; - } - __pyx_t_4 = __pyx_t_6; - __pyx_t_5 = ((__pyx_v_t_x + __pyx_v_y) - __pyx_v_t_y); - __pyx_t_6 = 0; - if (((__pyx_t_5 > __pyx_t_6) != 0)) { - __pyx_t_7 = __pyx_t_5; - } else { - __pyx_t_7 = __pyx_t_6; - } - __pyx_t_6 = __pyx_t_4; - for (__pyx_t_5 = __pyx_t_7; __pyx_t_5 < __pyx_t_6; __pyx_t_5+=1) { - __pyx_v_x = __pyx_t_5; - - /* "monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # <<<<<<<<<<<<<< - * v_cur = max_neg_val - * else: - */ - 
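- /* Editor's note: the branches generated below mirror core.pyx lines
-  * 17-28. The x == y cell is the rightmost cell a monotonic path can
-  * reach in row y, so its incoming score from directly above would lie
-  * outside the reachable band; it is therefore seeded with max_neg_val
-  * (default -1e9) instead of reading value[y-1, x]. */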
__pyx_t_8 = ((__pyx_v_x == __pyx_v_y) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":18 - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - * v_cur = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_cur = value[y-1, x] - */ - __pyx_v_v_cur = __pyx_v_max_neg_val; - - /* "monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # <<<<<<<<<<<<<< - * v_cur = max_neg_val - * else: - */ - goto __pyx_L7; - } - - /* "monotonic_align/core.pyx":20 - * v_cur = max_neg_val - * else: - * v_cur = value[y-1, x] # <<<<<<<<<<<<<< - * if x == 0: - * if y == 0: - */ - /*else*/ { - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_x; - __pyx_v_v_cur = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))); - } - __pyx_L7:; - - /* "monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. - */ - __pyx_t_8 = ((__pyx_v_x == 0) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - __pyx_t_8 = ((__pyx_v_y == 0) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":23 - * if x == 0: - * if y == 0: - * v_prev = 0. # <<<<<<<<<<<<<< - * else: - * v_prev = max_neg_val - */ - __pyx_v_v_prev = 0.; - - /* "monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - goto __pyx_L9; - } - - /* "monotonic_align/core.pyx":25 - * v_prev = 0. - * else: - * v_prev = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_prev = value[y-1, x-1] - */ - /*else*/ { - __pyx_v_v_prev = __pyx_v_max_neg_val; - } - __pyx_L9:; - - /* "monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. 
- */ - goto __pyx_L8; - } - - /* "monotonic_align/core.pyx":27 - * v_prev = max_neg_val - * else: - * v_prev = value[y-1, x-1] # <<<<<<<<<<<<<< - * value[y, x] += max(v_prev, v_cur) - * - */ - /*else*/ { - __pyx_t_10 = (__pyx_v_y - 1); - __pyx_t_9 = (__pyx_v_x - 1); - __pyx_v_v_prev = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_10 * __pyx_v_value.strides[0]) )) + __pyx_t_9)) ))); - } - __pyx_L8:; - - /* "monotonic_align/core.pyx":28 - * else: - * v_prev = value[y-1, x-1] - * value[y, x] += max(v_prev, v_cur) # <<<<<<<<<<<<<< - * - * for y in range(t_y - 1, -1, -1): - */ - __pyx_t_11 = __pyx_v_v_cur; - __pyx_t_12 = __pyx_v_v_prev; - if (((__pyx_t_11 > __pyx_t_12) != 0)) { - __pyx_t_13 = __pyx_t_11; - } else { - __pyx_t_13 = __pyx_t_12; - } - __pyx_t_9 = __pyx_v_y; - __pyx_t_10 = __pyx_v_x; - *((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) )) += __pyx_t_13; - } - } - - /* "monotonic_align/core.pyx":30 - * value[y, x] += max(v_prev, v_cur) - * - * for y in range(t_y - 1, -1, -1): # <<<<<<<<<<<<<< - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - */ - for (__pyx_t_1 = (__pyx_v_t_y - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_y = __pyx_t_1; - - /* "monotonic_align/core.pyx":31 - * - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 # <<<<<<<<<<<<<< - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 - */ - __pyx_t_10 = __pyx_v_y; - __pyx_t_9 = __pyx_v_index; - *((int *) ( /* dim=1 */ ((char *) (((int *) ( /* dim=0 */ (__pyx_v_path.data + __pyx_t_10 * __pyx_v_path.strides[0]) )) + __pyx_t_9)) )) = 1; - - /* "monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - __pyx_t_14 = ((__pyx_v_index != 0) != 0); - if (__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_14 = ((__pyx_v_index == __pyx_v_y) != 0); - if (!__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_index; - __pyx_t_15 = (__pyx_v_y - 1); - __pyx_t_16 = (__pyx_v_index - 1); - __pyx_t_14 = (((*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))) < (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_15 * __pyx_v_value.strides[0]) )) + __pyx_t_16)) )))) != 0); - __pyx_t_8 = __pyx_t_14; - __pyx_L13_bool_binop_done:; - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":33 - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_index = (__pyx_v_index - 1); - - /* "monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - } - } - - /* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - - /* function 
exit code */ -} - -/* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs, CYTHON_UNUSED int __pyx_skip_dispatch) { - CYTHON_UNUSED int __pyx_v_b; - int __pyx_v_i; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - __Pyx_memviewslice __pyx_t_4 = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_t_5 = { 0, 0, { 0 }, { 0 }, { 0 } }; - Py_ssize_t __pyx_t_6; - Py_ssize_t __pyx_t_7; - - /* "monotonic_align/core.pyx":39 - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: - * cdef int b = paths.shape[0] # <<<<<<<<<<<<<< - * cdef int i - * for i in prange(b, nogil=True): - */ - __pyx_v_b = (__pyx_v_paths.shape[0]); - - /* "monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - { - #ifdef WITH_THREAD - PyThreadState *_save; - Py_UNBLOCK_THREADS - __Pyx_FastGIL_Remember(); - #endif - /*try:*/ { - __pyx_t_1 = __pyx_v_b; - if ((1 == 0)) abort(); - { - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) (x) - #define unlikely(x) (x) - #endif - __pyx_t_3 = (__pyx_t_1 - 0 + 1 - 1/abs(1)) / 1; - if (__pyx_t_3 > 0) - { - #ifdef _OPENMP - #pragma omp parallel private(__pyx_t_6, __pyx_t_7) firstprivate(__pyx_t_4, __pyx_t_5) - #endif /* _OPENMP */ - { - #ifdef _OPENMP - #pragma omp for firstprivate(__pyx_v_i) lastprivate(__pyx_v_i) - #endif /* _OPENMP */ - for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_3; __pyx_t_2++){ - { - __pyx_v_i = (int)(0 + 1 * __pyx_t_2); - - /* "monotonic_align/core.pyx":42 - * cdef int i - * for i in prange(b, nogil=True): - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) # <<<<<<<<<<<<<< - */ - __pyx_t_4.data = __pyx_v_paths.data; - __pyx_t_4.memview = __pyx_v_paths.memview; - __PYX_INC_MEMVIEW(&__pyx_t_4, 0); - { - Py_ssize_t __pyx_tmp_idx = __pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_paths.strides[0]; - __pyx_t_4.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_4.shape[0] = __pyx_v_paths.shape[1]; -__pyx_t_4.strides[0] = __pyx_v_paths.strides[1]; - __pyx_t_4.suboffsets[0] = -1; - -__pyx_t_4.shape[1] = __pyx_v_paths.shape[2]; -__pyx_t_4.strides[1] = __pyx_v_paths.strides[2]; - __pyx_t_4.suboffsets[1] = -1; - -__pyx_t_5.data = __pyx_v_values.data; - __pyx_t_5.memview = __pyx_v_values.memview; - __PYX_INC_MEMVIEW(&__pyx_t_5, 0); - { - Py_ssize_t __pyx_tmp_idx = __pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_values.strides[0]; - __pyx_t_5.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_5.shape[0] = __pyx_v_values.shape[1]; -__pyx_t_5.strides[0] = __pyx_v_values.strides[1]; - __pyx_t_5.suboffsets[0] = -1; - -__pyx_t_5.shape[1] = __pyx_v_values.shape[2]; -__pyx_t_5.strides[1] = __pyx_v_values.strides[2]; - __pyx_t_5.suboffsets[1] = -1; - 
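- /* Editor's note: __pyx_t_4 and __pyx_t_5 built above are the 2-D views
-  * paths[i] and values[i], formed by dropping the leading batch axis.
-  * Under OpenMP the surrounding "#pragma omp for" runs each batch
-  * element's alignment search on its own thread, which is what
-  * "for i in prange(b, nogil=True)" in core.pyx:41 requests. */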
-__pyx_t_6 = __pyx_v_i; - __pyx_t_7 = __pyx_v_i; - __pyx_f_15monotonic_align_4core_maximum_path_each(__pyx_t_4, __pyx_t_5, (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_ys.data) + __pyx_t_6)) ))), (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_xs.data) + __pyx_t_7)) ))), NULL); - __PYX_XDEC_MEMVIEW(&__pyx_t_4, 0); - __pyx_t_4.memview = NULL; - __pyx_t_4.data = NULL; - __PYX_XDEC_MEMVIEW(&__pyx_t_5, 0); - __pyx_t_5.memview = NULL; - __pyx_t_5.data = NULL; - } - } - } - } - } - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) - #endif - } - - /* "monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - /*finally:*/ { - /*normal exit:*/{ - #ifdef WITH_THREAD - __Pyx_FastGIL_Forget(); - Py_BLOCK_THREADS - #endif - goto __pyx_L5; - } - __pyx_L5:; - } - } - - /* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - - /* function exit code */ -} - -/* Python wrapper */ -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - __Pyx_memviewslice __pyx_v_paths = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_values = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_ys = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_xs = { 0, 0, { 0 }, { 0 }, { 0 } }; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("maximum_path_c (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_paths,&__pyx_n_s_values,&__pyx_n_s_t_ys,&__pyx_n_s_t_xs,0}; - PyObject* values[4] = {0,0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_paths)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_values)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 1); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_ys)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 2); __PYX_ERR(0, 38, __pyx_L3_error) - } - 
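-      /* Keyword-argument parsing: each case matches one parameter name and
-       * falls through to the next, so arguments not passed positionally can
-       * still be supplied as keywords in any combination. */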
CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_xs)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 3); __PYX_ERR(0, 38, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "maximum_path_c") < 0)) __PYX_ERR(0, 38, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 4) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - } - __pyx_v_paths = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(values[0], PyBUF_WRITABLE); if (unlikely(!__pyx_v_paths.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_values = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(values[1], PyBUF_WRITABLE); if (unlikely(!__pyx_v_values.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_ys = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[2], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_ys.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_xs = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[3], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_xs.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15monotonic_align_4core_maximum_path_c(__pyx_self, __pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("maximum_path_c", 0); - __Pyx_XDECREF(__pyx_r); - if (unlikely(!__pyx_v_paths.memview)) { __Pyx_RaiseUnboundLocalError("paths"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_values.memview)) { __Pyx_RaiseUnboundLocalError("values"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_ys.memview)) { __Pyx_RaiseUnboundLocalError("t_ys"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_xs.memview)) { __Pyx_RaiseUnboundLocalError("t_xs"); __PYX_ERR(0, 38, __pyx_L1_error) } - __pyx_t_1 = __Pyx_void_to_None(__pyx_f_15monotonic_align_4core_maximum_path_c(__pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs, 0)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __PYX_XDEC_MEMVIEW(&__pyx_v_paths, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_values, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_t_ys, 1); - 
__PYX_XDEC_MEMVIEW(&__pyx_v_t_xs, 1); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":123 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - -/* Python wrapper */ -static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_shape = 0; - Py_ssize_t __pyx_v_itemsize; - PyObject *__pyx_v_format = 0; - PyObject *__pyx_v_mode = 0; - int __pyx_v_allocate_buffer; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_shape,&__pyx_n_s_itemsize,&__pyx_n_s_format,&__pyx_n_s_mode,&__pyx_n_s_allocate_buffer,0}; - PyObject* values[5] = {0,0,0,0,0}; - values[3] = ((PyObject *)__pyx_n_s_c); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_shape)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_itemsize)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 1); __PYX_ERR(1, 123, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_format)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 2); __PYX_ERR(1, 123, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_mode); - if (value) { values[3] = value; kw_args--; } - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_allocate_buffer); - if (value) { values[4] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 123, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_shape = ((PyObject*)values[0]); - __pyx_v_itemsize = __Pyx_PyIndex_AsSsize_t(values[1]); if (unlikely((__pyx_v_itemsize == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 123, 
__pyx_L3_error) - __pyx_v_format = values[2]; - __pyx_v_mode = values[3]; - if (values[4]) { - __pyx_v_allocate_buffer = __Pyx_PyObject_IsTrue(values[4]); if (unlikely((__pyx_v_allocate_buffer == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 124, __pyx_L3_error) - } else { - - /* "View.MemoryView":124 - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, - * mode="c", bint allocate_buffer=True): # <<<<<<<<<<<<<< - * - * cdef int idx - */ - __pyx_v_allocate_buffer = ((int)1); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 123, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_shape), (&PyTuple_Type), 1, "shape", 1))) __PYX_ERR(1, 123, __pyx_L1_error) - if (unlikely(((PyObject *)__pyx_v_format) == Py_None)) { - PyErr_Format(PyExc_TypeError, "Argument '%.200s' must not be None", "format"); __PYX_ERR(1, 123, __pyx_L1_error) - } - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(((struct __pyx_array_obj *)__pyx_v_self), __pyx_v_shape, __pyx_v_itemsize, __pyx_v_format, __pyx_v_mode, __pyx_v_allocate_buffer); - - /* "View.MemoryView":123 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - goto __pyx_L0; - __pyx_L1_error:; - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer) { - int __pyx_v_idx; - Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_dim; - PyObject **__pyx_v_p; - char __pyx_v_order; - int __pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - char *__pyx_t_7; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - Py_ssize_t __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - __Pyx_INCREF(__pyx_v_format); - - /* "View.MemoryView":130 - * cdef PyObject **p - * - * self.ndim = len(shape) # <<<<<<<<<<<<<< - * self.itemsize = itemsize - * - */ - if (unlikely(__pyx_v_shape == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 130, __pyx_L1_error) - } - __pyx_t_1 = PyTuple_GET_SIZE(__pyx_v_shape); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(1, 130, __pyx_L1_error) - __pyx_v_self->ndim = ((int)__pyx_t_1); - - /* "View.MemoryView":131 - * - * self.ndim = len(shape) - * self.itemsize = itemsize # <<<<<<<<<<<<<< - * - * if not self.ndim: - */ - __pyx_v_self->itemsize = __pyx_v_itemsize; - - /* "View.MemoryView":133 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError("Empty shape tuple for cython.array") - * - */ - __pyx_t_2 = ((!(__pyx_v_self->ndim != 0)) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":134 - * - * if not 
self.ndim: - * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<< - * - * if itemsize <= 0: - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 134, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 134, __pyx_L1_error) - - /* "View.MemoryView":133 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError("Empty shape tuple for cython.array") - * - */ - } - - /* "View.MemoryView":136 - * raise ValueError("Empty shape tuple for cython.array") - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError("itemsize <= 0 for cython.array") - * - */ - __pyx_t_2 = ((__pyx_v_itemsize <= 0) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":137 - * - * if itemsize <= 0: - * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<< - * - * if not isinstance(format, bytes): - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 137, __pyx_L1_error) - - /* "View.MemoryView":136 - * raise ValueError("Empty shape tuple for cython.array") - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError("itemsize <= 0 for cython.array") - * - */ - } - - /* "View.MemoryView":139 - * raise ValueError("itemsize <= 0 for cython.array") - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - __pyx_t_2 = PyBytes_Check(__pyx_v_format); - __pyx_t_4 = ((!(__pyx_t_2 != 0)) != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":140 - * - * if not isinstance(format, bytes): - * format = format.encode('ASCII') # <<<<<<<<<<<<<< - * self._format = format # keep a reference to the byte string - * self.format = self._format - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_format, __pyx_n_s_encode); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 140, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - } - } - __pyx_t_3 = (__pyx_t_6) ? 
__Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_6, __pyx_n_s_ASCII) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_n_s_ASCII); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 140, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF_SET(__pyx_v_format, __pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":139 - * raise ValueError("itemsize <= 0 for cython.array") - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - } - - /* "View.MemoryView":141 - * if not isinstance(format, bytes): - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string # <<<<<<<<<<<<<< - * self.format = self._format - * - */ - if (!(likely(PyBytes_CheckExact(__pyx_v_format))||((__pyx_v_format) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_v_format)->tp_name), 0))) __PYX_ERR(1, 141, __pyx_L1_error) - __pyx_t_3 = __pyx_v_format; - __Pyx_INCREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __Pyx_GOTREF(__pyx_v_self->_format); - __Pyx_DECREF(__pyx_v_self->_format); - __pyx_v_self->_format = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":142 - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - * self.format = self._format # <<<<<<<<<<<<<< - * - * - */ - if (unlikely(__pyx_v_self->_format == Py_None)) { - PyErr_SetString(PyExc_TypeError, "expected bytes, NoneType found"); - __PYX_ERR(1, 142, __pyx_L1_error) - } - __pyx_t_7 = __Pyx_PyBytes_AsWritableString(__pyx_v_self->_format); if (unlikely((!__pyx_t_7) && PyErr_Occurred())) __PYX_ERR(1, 142, __pyx_L1_error) - __pyx_v_self->format = __pyx_t_7; - - /* "View.MemoryView":145 - * - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) # <<<<<<<<<<<<<< - * self._strides = self._shape + self.ndim - * - */ - __pyx_v_self->_shape = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * __pyx_v_self->ndim) * 2))); - - /* "View.MemoryView":146 - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) - * self._strides = self._shape + self.ndim # <<<<<<<<<<<<<< - * - * if not self._shape: - */ - __pyx_v_self->_strides = (__pyx_v_self->_shape + __pyx_v_self->ndim); - - /* "View.MemoryView":148 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate shape and strides.") - * - */ - __pyx_t_4 = ((!(__pyx_v_self->_shape != 0)) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":149 - * - * if not self._shape: - * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 149, __pyx_L1_error) - - /* "View.MemoryView":148 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate shape and strides.") - * - */ - } - - /* "View.MemoryView":152 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." 
% (idx, dim)) - */ - __pyx_t_8 = 0; - __pyx_t_3 = __pyx_v_shape; __Pyx_INCREF(__pyx_t_3); __pyx_t_1 = 0; - for (;;) { - if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(1, 152, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_5); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 152, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_9; - __pyx_v_idx = __pyx_t_8; - __pyx_t_8 = (__pyx_t_8 + 1); - - /* "View.MemoryView":153 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim - */ - __pyx_t_4 = ((__pyx_v_dim <= 0) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":154 - * for idx, dim in enumerate(shape): - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) # <<<<<<<<<<<<<< - * self._shape[idx] = dim - * - */ - __pyx_t_5 = __Pyx_PyInt_From_int(__pyx_v_idx); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 154, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 154, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_10 = PyTuple_New(2); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 154, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_t_6); - __pyx_t_5 = 0; - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyString_Format(__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_t_10); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 154, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_6); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 154, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 154, __pyx_L1_error) - - /* "View.MemoryView":153 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim - */ - } - - /* "View.MemoryView":155 - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim # <<<<<<<<<<<<<< - * - * cdef char order - */ - (__pyx_v_self->_shape[__pyx_v_idx]) = __pyx_v_dim; - - /* "View.MemoryView":152 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." 
% (idx, dim)) - */ - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":158 - * - * cdef char order - * if mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_fortran, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 158, __pyx_L1_error) - if (__pyx_t_4) { - - /* "View.MemoryView":159 - * cdef char order - * if mode == 'fortran': - * order = b'F' # <<<<<<<<<<<<<< - * self.mode = u'fortran' - * elif mode == 'c': - */ - __pyx_v_order = 'F'; - - /* "View.MemoryView":160 - * if mode == 'fortran': - * order = b'F' - * self.mode = u'fortran' # <<<<<<<<<<<<<< - * elif mode == 'c': - * order = b'C' - */ - __Pyx_INCREF(__pyx_n_u_fortran); - __Pyx_GIVEREF(__pyx_n_u_fortran); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_fortran; - - /* "View.MemoryView":158 - * - * cdef char order - * if mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - goto __pyx_L10; - } - - /* "View.MemoryView":161 - * order = b'F' - * self.mode = u'fortran' - * elif mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_c, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 161, __pyx_L1_error) - if (likely(__pyx_t_4)) { - - /* "View.MemoryView":162 - * self.mode = u'fortran' - * elif mode == 'c': - * order = b'C' # <<<<<<<<<<<<<< - * self.mode = u'c' - * else: - */ - __pyx_v_order = 'C'; - - /* "View.MemoryView":163 - * elif mode == 'c': - * order = b'C' - * self.mode = u'c' # <<<<<<<<<<<<<< - * else: - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) - */ - __Pyx_INCREF(__pyx_n_u_c); - __Pyx_GIVEREF(__pyx_n_u_c); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_c; - - /* "View.MemoryView":161 - * order = b'F' - * self.mode = u'fortran' - * elif mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - goto __pyx_L10; - } - - /* "View.MemoryView":165 - * self.mode = u'c' - * else: - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) # <<<<<<<<<<<<<< - * - * self.len = fill_contig_strides_array(self._shape, self._strides, - */ - /*else*/ { - __pyx_t_3 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_v_mode); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 165, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 165, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 165, __pyx_L1_error) - } - __pyx_L10:; - - /* "View.MemoryView":167 - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) - * - * self.len = fill_contig_strides_array(self._shape, self._strides, # <<<<<<<<<<<<<< - * itemsize, self.ndim, order) - * - */ - __pyx_v_self->len = __pyx_fill_contig_strides_array(__pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_itemsize, __pyx_v_self->ndim, __pyx_v_order); - - /* "View.MemoryView":170 - * itemsize, self.ndim, order) - * - * self.free_data = allocate_buffer # <<<<<<<<<<<<<< - * self.dtype_is_object = format == b'O' - * if allocate_buffer: - */ - __pyx_v_self->free_data = __pyx_v_allocate_buffer; - - /* "View.MemoryView":171 - * - * 
self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' # <<<<<<<<<<<<<< - * if allocate_buffer: - * - */ - __pyx_t_10 = PyObject_RichCompare(__pyx_v_format, __pyx_n_b_O, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 171, __pyx_L1_error) - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 171, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_v_self->dtype_is_object = __pyx_t_4; - - /* "View.MemoryView":172 - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' - * if allocate_buffer: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_4 = (__pyx_v_allocate_buffer != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":175 - * - * - * self.data = malloc(self.len) # <<<<<<<<<<<<<< - * if not self.data: - * raise MemoryError("unable to allocate array data.") - */ - __pyx_v_self->data = ((char *)malloc(__pyx_v_self->len)); - - /* "View.MemoryView":176 - * - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate array data.") - * - */ - __pyx_t_4 = ((!(__pyx_v_self->data != 0)) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":177 - * self.data = malloc(self.len) - * if not self.data: - * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<< - * - * if self.dtype_is_object: - */ - __pyx_t_10 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 177, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 177, __pyx_L1_error) - - /* "View.MemoryView":176 - * - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate array data.") - * - */ - } - - /* "View.MemoryView":179 - * raise MemoryError("unable to allocate array data.") - * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len / itemsize): - */ - __pyx_t_4 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":180 - * - * if self.dtype_is_object: - * p = self.data # <<<<<<<<<<<<<< - * for i in range(self.len / itemsize): - * p[i] = Py_None - */ - __pyx_v_p = ((PyObject **)__pyx_v_self->data); - - /* "View.MemoryView":181 - * if self.dtype_is_object: - * p = self.data - * for i in range(self.len / itemsize): # <<<<<<<<<<<<<< - * p[i] = Py_None - * Py_INCREF(Py_None) - */ - if (unlikely(__pyx_v_itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 181, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_self->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 181, __pyx_L1_error) - } - __pyx_t_1 = __Pyx_div_Py_ssize_t(__pyx_v_self->len, __pyx_v_itemsize); - __pyx_t_9 = __pyx_t_1; - for (__pyx_t_11 = 0; __pyx_t_11 < __pyx_t_9; __pyx_t_11+=1) { - __pyx_v_i = __pyx_t_11; - - /* "View.MemoryView":182 - * p = self.data - * for i in range(self.len / itemsize): - * p[i] = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - (__pyx_v_p[__pyx_v_i]) = Py_None; - - /* "View.MemoryView":183 - * for i in range(self.len / itemsize): - * p[i] = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - 
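-        /* Each slot of an object-typed buffer was pointed at the shared
-         * Py_None above; its refcount is bumped once per stored pointer so
-         * the per-slot DECREF done at deallocation time stays balanced. */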
Py_INCREF(Py_None); - } - - /* "View.MemoryView":179 - * raise MemoryError("unable to allocate array data.") - * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len / itemsize): - */ - } - - /* "View.MemoryView":172 - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' - * if allocate_buffer: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":123 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_format); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":186 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * cdef int bufmode = -1 - * if self.mode == u"c": - */ - -/* Python wrapper */ -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(((struct __pyx_array_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_v_bufmode; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - char *__pyx_t_4; - Py_ssize_t __pyx_t_5; - int __pyx_t_6; - Py_ssize_t *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (__pyx_v_info == NULL) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":187 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 # <<<<<<<<<<<<<< - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_v_bufmode = -1; - - /* "View.MemoryView":188 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - __pyx_t_1 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_c, Py_EQ)); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 188, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":189 - * cdef int bufmode = -1 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_v_bufmode = 
(PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":188 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - goto __pyx_L3; - } - - /* "View.MemoryView":190 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - __pyx_t_2 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_fortran, Py_EQ)); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 190, __pyx_L1_error) - __pyx_t_1 = (__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":191 - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") - */ - __pyx_v_bufmode = (PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":190 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - } - __pyx_L3:; - - /* "View.MemoryView":192 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - */ - __pyx_t_1 = ((!((__pyx_v_flags & __pyx_v_bufmode) != 0)) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":193 - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<< - * info.buf = self.data - * info.len = self.len - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 193, __pyx_L1_error) - - /* "View.MemoryView":192 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - */ - } - - /* "View.MemoryView":194 - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data # <<<<<<<<<<<<<< - * info.len = self.len - * info.ndim = self.ndim - */ - __pyx_t_4 = __pyx_v_self->data; - __pyx_v_info->buf = __pyx_t_4; - - /* "View.MemoryView":195 - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - * info.len = self.len # <<<<<<<<<<<<<< - * info.ndim = self.ndim - * info.shape = self._shape - */ - __pyx_t_5 = __pyx_v_self->len; - __pyx_v_info->len = __pyx_t_5; - - /* "View.MemoryView":196 - * info.buf = self.data - * info.len = self.len - * info.ndim = self.ndim # <<<<<<<<<<<<<< - * info.shape = self._shape - * info.strides = self._strides - */ - __pyx_t_6 = __pyx_v_self->ndim; - __pyx_v_info->ndim = __pyx_t_6; - - /* "View.MemoryView":197 - * info.len = self.len - * info.ndim = self.ndim - * 
info.shape = self._shape # <<<<<<<<<<<<<< - * info.strides = self._strides - * info.suboffsets = NULL - */ - __pyx_t_7 = __pyx_v_self->_shape; - __pyx_v_info->shape = __pyx_t_7; - - /* "View.MemoryView":198 - * info.ndim = self.ndim - * info.shape = self._shape - * info.strides = self._strides # <<<<<<<<<<<<<< - * info.suboffsets = NULL - * info.itemsize = self.itemsize - */ - __pyx_t_7 = __pyx_v_self->_strides; - __pyx_v_info->strides = __pyx_t_7; - - /* "View.MemoryView":199 - * info.shape = self._shape - * info.strides = self._strides - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * info.itemsize = self.itemsize - * info.readonly = 0 - */ - __pyx_v_info->suboffsets = NULL; - - /* "View.MemoryView":200 - * info.strides = self._strides - * info.suboffsets = NULL - * info.itemsize = self.itemsize # <<<<<<<<<<<<<< - * info.readonly = 0 - * - */ - __pyx_t_5 = __pyx_v_self->itemsize; - __pyx_v_info->itemsize = __pyx_t_5; - - /* "View.MemoryView":201 - * info.suboffsets = NULL - * info.itemsize = self.itemsize - * info.readonly = 0 # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - __pyx_v_info->readonly = 0; - - /* "View.MemoryView":203 - * info.readonly = 0 - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.format - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":204 - * - * if flags & PyBUF_FORMAT: - * info.format = self.format # <<<<<<<<<<<<<< - * else: - * info.format = NULL - */ - __pyx_t_4 = __pyx_v_self->format; - __pyx_v_info->format = __pyx_t_4; - - /* "View.MemoryView":203 - * info.readonly = 0 - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.format - * else: - */ - goto __pyx_L5; - } - - /* "View.MemoryView":206 - * info.format = self.format - * else: - * info.format = NULL # <<<<<<<<<<<<<< - * - * info.obj = self - */ - /*else*/ { - __pyx_v_info->format = NULL; - } - __pyx_L5:; - - /* "View.MemoryView":208 - * info.format = NULL - * - * info.obj = self # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":186 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * cdef int bufmode = -1 - * if self.mode == u"c": - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.array.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":212 - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - -/* Python wrapper */ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - 
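-  /* tp_dealloc slot wrapper: forwards to the implementation below, which
-   * prefers a user-supplied callback_free_data, otherwise (when free_data
-   * is set) clears per-object refcounts and free()s the buffer, and in all
-   * cases releases the shape/strides allocation with PyObject_Free. */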
__Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":213 - * - * def __dealloc__(array self): - * if self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data: - */ - __pyx_t_1 = ((__pyx_v_self->callback_free_data != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":214 - * def __dealloc__(array self): - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) # <<<<<<<<<<<<<< - * elif self.free_data: - * if self.dtype_is_object: - */ - __pyx_v_self->callback_free_data(__pyx_v_self->data); - - /* "View.MemoryView":213 - * - * def __dealloc__(array self): - * if self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":215 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, - */ - __pyx_t_1 = (__pyx_v_self->free_data != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":216 - * self.callback_free_data(self.data) - * elif self.free_data: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - */ - __pyx_t_1 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":217 - * elif self.free_data: - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, # <<<<<<<<<<<<<< - * self._strides, self.ndim, False) - * free(self.data) - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_self->data, __pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_self->ndim, 0); - - /* "View.MemoryView":216 - * self.callback_free_data(self.data) - * elif self.free_data: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - */ - } - - /* "View.MemoryView":219 - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - * free(self.data) # <<<<<<<<<<<<<< - * PyObject_Free(self._shape) - * - */ - free(__pyx_v_self->data); - - /* "View.MemoryView":215 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, - */ - } - __pyx_L3:; - - /* "View.MemoryView":220 - * self._strides, self.ndim, False) - * free(self.data) - * PyObject_Free(self._shape) # <<<<<<<<<<<<<< - * - * @property - */ - PyObject_Free(__pyx_v_self->_shape); - - /* "View.MemoryView":212 - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":223 - * - * @property - * def memview(self): # <<<<<<<<<<<<<< - * return self.get_memview() - * - */ - -/* Python wrapper */ -static 
PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_5array_7memview___get__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":224 - * @property - * def memview(self): - * return self.get_memview() # <<<<<<<<<<<<<< - * - * @cname('get_memview') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = ((struct __pyx_vtabstruct_array *)__pyx_v_self->__pyx_vtab)->get_memview(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 224, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":223 - * - * @property - * def memview(self): # <<<<<<<<<<<<<< - * return self.get_memview() - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.memview.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":227 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self) { - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_memview", 0); - - /* "View.MemoryView":228 - * @cname('get_memview') - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE # <<<<<<<<<<<<<< - * return memoryview(self, flags, self.dtype_is_object) - * - */ - __pyx_v_flags = ((PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT) | PyBUF_WRITABLE); - - /* "View.MemoryView":229 - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - 
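-  /* The argument tuple (self, flags, dtype_is_object) is complete; calling
-   * the memoryview type with it below makes get_memview() return a
-   * writable, format-carrying view over this array. */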
__pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":227 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.array.get_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":231 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - -/* Python wrapper */ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":232 - * - * def __len__(self): - * return self._shape[0] # <<<<<<<<<<<<<< - * - * def __getattr__(self, attr): - */ - __pyx_r = (__pyx_v_self->_shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":231 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":234 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr); /*proto*/ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getattr__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_attr)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getattr__", 0); - - /* "View.MemoryView":235 - * - * def __getattr__(self, attr): - * return getattr(self.memview, attr) # <<<<<<<<<<<<<< - * - * def __getitem__(self, item): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), 
__pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetAttr(__pyx_t_1, __pyx_v_attr); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":234 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getattr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":237 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item); /*proto*/ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":238 - * - * def __getitem__(self, item): - * return self.memview[item] # <<<<<<<<<<<<<< - * - * def __setitem__(self, item, value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_item); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":237 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":240 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - -/* Python wrapper */ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - 
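-  /* Mapping-protocol slot wrapper: delegates to the implementation below,
-   * which performs `self.memview[item] = value`, so every write goes
-   * through a freshly obtained memoryview of this array. */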
__pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - - /* "View.MemoryView":241 - * - * def __setitem__(self, item, value): - * self.memview[item] = value # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (unlikely(PyObject_SetItem(__pyx_t_1, __pyx_v_item, __pyx_v_value) < 0)) __PYX_ERR(1, 241, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":240 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_array___reduce_cython__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__reduce_cython__", 
__pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_array_2__setstate_cython__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":245 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<< - * char *mode, char *buf): - * cdef array result - */ - -static struct __pyx_array_obj *__pyx_array_new(PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, char *__pyx_v_format, char *__pyx_v_mode, char *__pyx_v_buf) { - struct __pyx_array_obj *__pyx_v_result = 0; - struct __pyx_array_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("array_cwrapper", 0); - - /* "View.MemoryView":249 - * cdef array result - * - * if buf == NULL: # <<<<<<<<<<<<<< - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - */ - __pyx_t_1 = ((__pyx_v_buf == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":250 - * - * if buf == 
NULL: - * result = array(shape, itemsize, format, mode.decode('ASCII')) # <<<<<<<<<<<<<< - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), - */ - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 250, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 250, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 250, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 250, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4); - __pyx_t_2 = 0; - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 250, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":249 - * cdef array result - * - * if buf == NULL: # <<<<<<<<<<<<<< - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":252 - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<< - * allocate_buffer=False) - * result.data = buf - */ - /*else*/ { - __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyTuple_New(4); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_t_3); - __pyx_t_4 = 0; - __pyx_t_5 = 0; - __pyx_t_3 = 0; - - /* "View.MemoryView":253 - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), - * allocate_buffer=False) # <<<<<<<<<<<<<< - * result.data = buf - * - */ - __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_allocate_buffer, Py_False) < 0) __PYX_ERR(1, 253, __pyx_L1_error) - - /* "View.MemoryView":252 - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<< - * allocate_buffer=False) - * result.data = buf - */ - __pyx_t_5 = 
__Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_5); - __pyx_t_5 = 0; - - /* "View.MemoryView":254 - * result = array(shape, itemsize, format, mode.decode('ASCII'), - * allocate_buffer=False) - * result.data = buf # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->data = __pyx_v_buf; - } - __pyx_L3:; - - /* "View.MemoryView":256 - * result.data = buf - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(((PyObject *)__pyx_r)); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":245 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<< - * char *mode, char *buf): - * cdef array result - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.array_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":282 - * cdef class Enum(object): - * cdef object name - * def __init__(self, name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - -/* Python wrapper */ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_name = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_name,0}; - PyObject* values[1] = {0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_name)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(1, 282, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 1) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - } - __pyx_v_name = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__init__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 282, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.Enum.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), __pyx_v_name); - - /* function 
exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__", 0); - - /* "View.MemoryView":283 - * cdef object name - * def __init__(self, name): - * self.name = name # <<<<<<<<<<<<<< - * def __repr__(self): - * return self.name - */ - __Pyx_INCREF(__pyx_v_name); - __Pyx_GIVEREF(__pyx_v_name); - __Pyx_GOTREF(__pyx_v_self->name); - __Pyx_DECREF(__pyx_v_self->name); - __pyx_v_self->name = __pyx_v_name; - - /* "View.MemoryView":282 - * cdef class Enum(object): - * cdef object name - * def __init__(self, name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - - /* function exit code */ - __pyx_r = 0; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":284 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - -/* Python wrapper */ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":285 - * self.name = name - * def __repr__(self): - * return self.name # <<<<<<<<<<<<<< - * - * cdef generic = Enum("") - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->name); - __pyx_r = __pyx_v_self->name; - goto __pyx_L0; - - /* "View.MemoryView":284 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_MemviewEnum___reduce_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_v_state = 0; - PyObject *__pyx_v__dict = 0; - int __pyx_v_use_setstate; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 
0); - - /* "(tree fragment)":5 - * cdef object _dict - * cdef bint use_setstate - * state = (self.name,) # <<<<<<<<<<<<<< - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_self->name); - __Pyx_GIVEREF(__pyx_v_self->name); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_self->name); - __pyx_v_state = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "(tree fragment)":6 - * cdef bint use_setstate - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<< - * if _dict is not None: - * state += (_dict,) - */ - __pyx_t_1 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v__dict = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - __pyx_t_2 = (__pyx_v__dict != Py_None); - __pyx_t_3 = (__pyx_t_2 != 0); - if (__pyx_t_3) { - - /* "(tree fragment)":8 - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - * state += (_dict,) # <<<<<<<<<<<<<< - * use_setstate = True - * else: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v__dict); - __Pyx_GIVEREF(__pyx_v__dict); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v__dict); - __pyx_t_4 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_4)); - __pyx_t_4 = 0; - - /* "(tree fragment)":9 - * if _dict is not None: - * state += (_dict,) - * use_setstate = True # <<<<<<<<<<<<<< - * else: - * use_setstate = self.name is not None - */ - __pyx_v_use_setstate = 1; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - goto __pyx_L3; - } - - /* "(tree fragment)":11 - * use_setstate = True - * else: - * use_setstate = self.name is not None # <<<<<<<<<<<<<< - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - */ - /*else*/ { - __pyx_t_3 = (__pyx_v_self->name != Py_None); - __pyx_v_use_setstate = __pyx_t_3; - } - __pyx_L3:; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - */ - __pyx_t_3 = (__pyx_v_use_setstate != 0); - if (__pyx_t_3) { - - /* "(tree fragment)":13 - * use_setstate = self.name is not None - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state # <<<<<<<<<<<<<< - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - 
PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_184977713); - __Pyx_GIVEREF(__pyx_int_184977713); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_1, 2, Py_None); - __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_1); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_v_state); - __pyx_t_4 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - */ - } - - /* "(tree fragment)":15 - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_184977713); - __Pyx_GIVEREF(__pyx_int_184977713); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1); - __pyx_t_5 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - } - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.Enum.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_state); - __Pyx_XDECREF(__pyx_v__dict); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ 
(wrapper)", 0); - __pyx_r = __pyx_pf___pyx_MemviewEnum_2__setstate_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":17 - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) # <<<<<<<<<<<<<< - */ - if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 17, __pyx_L1_error) - __pyx_t_1 = __pyx_unpickle_Enum__set_state(__pyx_v_self, ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.Enum.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":299 - * - * @cname('__pyx_align_pointer') - * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<< - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory - */ - -static void *__pyx_align_pointer(void *__pyx_v_memory, size_t __pyx_v_alignment) { - Py_intptr_t __pyx_v_aligned_p; - size_t __pyx_v_offset; - void *__pyx_r; - int __pyx_t_1; - - /* "View.MemoryView":301 - * cdef void *align_pointer(void *memory, size_t alignment) nogil: - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory # <<<<<<<<<<<<<< - * cdef size_t offset - * - */ - __pyx_v_aligned_p = ((Py_intptr_t)__pyx_v_memory); - - /* "View.MemoryView":305 - * - * with cython.cdivision(True): - * offset = aligned_p % alignment # <<<<<<<<<<<<<< - * - * if offset > 0: - */ - __pyx_v_offset = (__pyx_v_aligned_p % __pyx_v_alignment); - - /* "View.MemoryView":307 - * offset = aligned_p % alignment - * - * if offset > 0: # <<<<<<<<<<<<<< - * aligned_p += alignment - offset - * - */ - __pyx_t_1 = ((__pyx_v_offset > 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":308 - * - * if offset > 0: - * aligned_p += alignment - offset # <<<<<<<<<<<<<< - * - * return aligned_p - */ - __pyx_v_aligned_p = (__pyx_v_aligned_p + (__pyx_v_alignment - __pyx_v_offset)); - - /* "View.MemoryView":307 - * offset = aligned_p % alignment - * - * if offset > 0: # <<<<<<<<<<<<<< - * aligned_p += alignment - offset - * - */ - } - - /* "View.MemoryView":310 - * aligned_p += alignment - offset - * - * return aligned_p # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = ((void *)__pyx_v_aligned_p); - goto __pyx_L0; - 
- /* "View.MemoryView":299 - * - * @cname('__pyx_align_pointer') - * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<< - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":346 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - -/* Python wrapper */ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_obj = 0; - int __pyx_v_flags; - int __pyx_v_dtype_is_object; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_obj,&__pyx_n_s_flags,&__pyx_n_s_dtype_is_object,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_obj)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_flags)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, 1); __PYX_ERR(1, 346, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dtype_is_object); - if (value) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 346, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_obj = values[0]; - __pyx_v_flags = __Pyx_PyInt_As_int(values[1]); if (unlikely((__pyx_v_flags == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 346, __pyx_L3_error) - if (values[2]) { - __pyx_v_dtype_is_object = __Pyx_PyObject_IsTrue(values[2]); if (unlikely((__pyx_v_dtype_is_object == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 346, __pyx_L3_error) - } else { - __pyx_v_dtype_is_object = ((int)0); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 346, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = 
__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_obj, __pyx_v_flags, __pyx_v_dtype_is_object); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - - /* "View.MemoryView":347 - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj # <<<<<<<<<<<<<< - * self.flags = flags - * if type(self) is memoryview or obj is not None: - */ - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - __Pyx_GOTREF(__pyx_v_self->obj); - __Pyx_DECREF(__pyx_v_self->obj); - __pyx_v_self->obj = __pyx_v_obj; - - /* "View.MemoryView":348 - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj - * self.flags = flags # <<<<<<<<<<<<<< - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - */ - __pyx_v_self->flags = __pyx_v_flags; - - /* "View.MemoryView":349 - * self.obj = obj - * self.flags = flags - * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - __pyx_t_2 = (((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))) == ((PyObject *)__pyx_memoryview_type)); - __pyx_t_3 = (__pyx_t_2 != 0); - if (!__pyx_t_3) { - } else { - __pyx_t_1 = __pyx_t_3; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_3 = (__pyx_v_obj != Py_None); - __pyx_t_2 = (__pyx_t_3 != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (__pyx_t_1) { - - /* "View.MemoryView":350 - * self.flags = flags - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) # <<<<<<<<<<<<<< - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None - */ - __pyx_t_4 = __Pyx_GetBuffer(__pyx_v_obj, (&__pyx_v_self->view), __pyx_v_flags); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 350, __pyx_L1_error) - - /* "View.MemoryView":351 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_t_1 = ((((PyObject *)__pyx_v_self->view.obj) == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":352 - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = Py_None; - - /* "View.MemoryView":353 - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * if not __PYX_CYTHON_ATOMICS_ENABLED(): - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":351 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - } - - /* "View.MemoryView":349 - * self.obj = obj - * self.flags = flags - * if 
type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - } - - /* "View.MemoryView":355 - * Py_INCREF(Py_None) - * - * if not __PYX_CYTHON_ATOMICS_ENABLED(): # <<<<<<<<<<<<<< - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - */ - __pyx_t_1 = ((!(__PYX_CYTHON_ATOMICS_ENABLED() != 0)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":357 - * if not __PYX_CYTHON_ATOMICS_ENABLED(): - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - __pyx_t_1 = ((__pyx_memoryview_thread_locks_used < 8) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":358 - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - */ - __pyx_v_self->lock = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - - /* "View.MemoryView":359 - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 # <<<<<<<<<<<<<< - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used + 1); - - /* "View.MemoryView":357 - * if not __PYX_CYTHON_ATOMICS_ENABLED(): - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - } - - /* "View.MemoryView":360 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":361 - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() # <<<<<<<<<<<<<< - * if self.lock is NULL: - * raise MemoryError - */ - __pyx_v_self->lock = PyThread_allocate_lock(); - - /* "View.MemoryView":362 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":363 - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - PyErr_NoMemory(); __PYX_ERR(1, 363, __pyx_L1_error) - - /* "View.MemoryView":362 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - } - - /* "View.MemoryView":360 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - 
} - - /* "View.MemoryView":355 - * Py_INCREF(Py_None) - * - * if not __PYX_CYTHON_ATOMICS_ENABLED(): # <<<<<<<<<<<<<< - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - */ - } - - /* "View.MemoryView":365 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":366 - * - * if flags & PyBUF_FORMAT: - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') # <<<<<<<<<<<<<< - * else: - * self.dtype_is_object = dtype_is_object - */ - __pyx_t_2 = (((__pyx_v_self->view.format[0]) == 'O') != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L12_bool_binop_done; - } - __pyx_t_2 = (((__pyx_v_self->view.format[1]) == '\x00') != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L12_bool_binop_done:; - __pyx_v_self->dtype_is_object = __pyx_t_1; - - /* "View.MemoryView":365 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - goto __pyx_L11; - } - - /* "View.MemoryView":368 - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - * self.dtype_is_object = dtype_is_object # <<<<<<<<<<<<<< - * - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( - */ - /*else*/ { - __pyx_v_self->dtype_is_object = __pyx_v_dtype_is_object; - } - __pyx_L11:; - - /* "View.MemoryView":370 - * self.dtype_is_object = dtype_is_object - * - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( # <<<<<<<<<<<<<< - * &self.acquisition_count[0], sizeof(__pyx_atomic_int)) - * self.typeinfo = NULL - */ - __pyx_v_self->acquisition_count_aligned_p = ((__pyx_atomic_int *)__pyx_align_pointer(((void *)(&(__pyx_v_self->acquisition_count[0]))), (sizeof(__pyx_atomic_int)))); - - /* "View.MemoryView":372 - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( - * &self.acquisition_count[0], sizeof(__pyx_atomic_int)) - * self.typeinfo = NULL # <<<<<<<<<<<<<< - * - * def __dealloc__(memoryview self): - */ - __pyx_v_self->typeinfo = NULL; - - /* "View.MemoryView":346 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":374 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - -/* Python wrapper */ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void 
__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self) { - int __pyx_v_i; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - PyThread_type_lock __pyx_t_6; - PyThread_type_lock __pyx_t_7; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":375 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - __pyx_t_1 = (__pyx_v_self->obj != Py_None); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":376 - * def __dealloc__(memoryview self): - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) # <<<<<<<<<<<<<< - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - */ - __Pyx_ReleaseBuffer((&__pyx_v_self->view)); - - /* "View.MemoryView":375 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":377 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - __pyx_t_2 = ((((Py_buffer *)(&__pyx_v_self->view))->obj == Py_None) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":379 - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - * (<__pyx_buffer *> &self.view).obj = NULL # <<<<<<<<<<<<<< - * Py_DECREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = NULL; - - /* "View.MemoryView":380 - * - * (<__pyx_buffer *> &self.view).obj = NULL - * Py_DECREF(Py_None) # <<<<<<<<<<<<<< - * - * cdef int i - */ - Py_DECREF(Py_None); - - /* "View.MemoryView":377 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - } - __pyx_L3:; - - /* "View.MemoryView":384 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - __pyx_t_2 = ((__pyx_v_self->lock != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":385 - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): # <<<<<<<<<<<<<< - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - */ - __pyx_t_3 = __pyx_memoryview_thread_locks_used; - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { - __pyx_v_i = __pyx_t_5; - - /* "View.MemoryView":386 - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - __pyx_t_2 = (((__pyx_memoryview_thread_locks[__pyx_v_i]) == __pyx_v_self->lock) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":387 - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 # <<<<<<<<<<<<<< - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], 
__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used - 1); - - /* "View.MemoryView":388 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - __pyx_t_2 = ((__pyx_v_i != __pyx_memoryview_thread_locks_used) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":390 - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) # <<<<<<<<<<<<<< - * break - * else: - */ - __pyx_t_6 = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - __pyx_t_7 = (__pyx_memoryview_thread_locks[__pyx_v_i]); - - /* "View.MemoryView":389 - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break - */ - (__pyx_memoryview_thread_locks[__pyx_v_i]) = __pyx_t_6; - (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]) = __pyx_t_7; - - /* "View.MemoryView":388 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - } - - /* "View.MemoryView":391 - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break # <<<<<<<<<<<<<< - * else: - * PyThread_free_lock(self.lock) - */ - goto __pyx_L6_break; - - /* "View.MemoryView":386 - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - } - } - /*else*/ { - - /* "View.MemoryView":393 - * break - * else: - * PyThread_free_lock(self.lock) # <<<<<<<<<<<<<< - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - */ - PyThread_free_lock(__pyx_v_self->lock); - } - __pyx_L6_break:; - - /* "View.MemoryView":384 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - } - - /* "View.MemoryView":374 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":395 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview 
self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - Py_ssize_t __pyx_v_dim; - char *__pyx_v_itemp; - PyObject *__pyx_v_idx = NULL; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - Py_ssize_t __pyx_t_6; - char *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_item_pointer", 0); - - /* "View.MemoryView":397 - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf # <<<<<<<<<<<<<< - * - * for dim, idx in enumerate(index): - */ - __pyx_v_itemp = ((char *)__pyx_v_self->view.buf); - - /* "View.MemoryView":399 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - __pyx_t_1 = 0; - if (likely(PyList_CheckExact(__pyx_v_index)) || PyTuple_CheckExact(__pyx_v_index)) { - __pyx_t_2 = __pyx_v_index; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 399, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 399, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 399, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 399, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 399, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 399, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } - } else { - __pyx_t_5 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_5)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 399, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_5); - } - __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_5); - __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_1; - __pyx_t_1 = (__pyx_t_1 + 1); - - /* "View.MemoryView":400 - * - * for dim, idx in enumerate(index): - * itemp = pybuffer_index(&self.view, itemp, idx, dim) # <<<<<<<<<<<<<< - * - * return itemp - */ - __pyx_t_6 = __Pyx_PyIndex_AsSsize_t(__pyx_v_idx); if (unlikely((__pyx_t_6 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 400, __pyx_L1_error) - __pyx_t_7 = __pyx_pybuffer_index((&__pyx_v_self->view), __pyx_v_itemp, __pyx_t_6, __pyx_v_dim); if (unlikely(__pyx_t_7 == ((char *)NULL))) __PYX_ERR(1, 400, __pyx_L1_error) - __pyx_v_itemp = 
__pyx_t_7; - - /* "View.MemoryView":399 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":402 - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - * return itemp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_itemp; - goto __pyx_L0; - - /* "View.MemoryView":395 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.get_item_pointer", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_idx); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":405 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index); /*proto*/ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_indices = NULL; - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - char *__pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":406 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - __pyx_t_1 = (__pyx_v_index == __pyx_builtin_Ellipsis); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":407 - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: - * return self # <<<<<<<<<<<<<< - * - * have_slices, indices = _unellipsify(index, self.view.ndim) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __pyx_r = ((PyObject *)__pyx_v_self); - goto __pyx_L0; - - /* "View.MemoryView":406 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - } - - /* "View.MemoryView":409 - * return self - * - * have_slices, indices = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * cdef char *itemp - */ - __pyx_t_3 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 409, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (likely(__pyx_t_3 != Py_None)) { - PyObject* sequence = __pyx_t_3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if 
(size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 409, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - #else - __pyx_t_4 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 409, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 409, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } else { - __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 409, __pyx_L1_error) - } - __pyx_v_have_slices = __pyx_t_4; - __pyx_t_4 = 0; - __pyx_v_indices = __pyx_t_5; - __pyx_t_5 = 0; - - /* "View.MemoryView":412 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 412, __pyx_L1_error) - if (__pyx_t_2) { - - /* "View.MemoryView":413 - * cdef char *itemp - * if have_slices: - * return memview_slice(self, indices) # <<<<<<<<<<<<<< - * else: - * itemp = self.get_item_pointer(indices) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = ((PyObject *)__pyx_memview_slice(__pyx_v_self, __pyx_v_indices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 413, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":412 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - } - - /* "View.MemoryView":415 - * return memview_slice(self, indices) - * else: - * itemp = self.get_item_pointer(indices) # <<<<<<<<<<<<<< - * return self.convert_item_to_object(itemp) - * - */ - /*else*/ { - __pyx_t_6 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_indices); if (unlikely(__pyx_t_6 == ((char *)NULL))) __PYX_ERR(1, 415, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_6; - - /* "View.MemoryView":416 - * else: - * itemp = self.get_item_pointer(indices) - * return self.convert_item_to_object(itemp) # <<<<<<<<<<<<<< - * - * def __setitem__(memoryview self, object index, object value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->convert_item_to_object(__pyx_v_self, __pyx_v_itemp); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 416, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":405 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_indices); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":418 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") - */ - -/* Python wrapper */ 
-static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_obj = NULL; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - __Pyx_INCREF(__pyx_v_index); - - /* "View.MemoryView":419 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError("Cannot assign to read-only memoryview") - * - */ - __pyx_t_1 = (__pyx_v_self->view.readonly != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":420 - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<< - * - * have_slices, index = _unellipsify(index, self.view.ndim) - */ - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 420, __pyx_L1_error) - - /* "View.MemoryView":419 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError("Cannot assign to read-only memoryview") - * - */ - } - - /* "View.MemoryView":422 - * raise TypeError("Cannot assign to read-only memoryview") - * - * have_slices, index = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * if have_slices: - */ - __pyx_t_2 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 422, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (likely(__pyx_t_2 != Py_None)) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 422, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 422, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 422, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 422, __pyx_L1_error) - } - __pyx_v_have_slices = 
__pyx_t_3; - __pyx_t_3 = 0; - __Pyx_DECREF_SET(__pyx_v_index, __pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":424 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 424, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":425 - * - * if have_slices: - * obj = self.is_slice(value) # <<<<<<<<<<<<<< - * if obj: - * self.setitem_slice_assignment(self[index], obj) - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->is_slice(__pyx_v_self, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 425, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_obj = __pyx_t_2; - __pyx_t_2 = 0; - - /* "View.MemoryView":426 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_obj); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 426, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":427 - * obj = self.is_slice(value) - * if obj: - * self.setitem_slice_assignment(self[index], obj) # <<<<<<<<<<<<<< - * else: - * self.setitem_slice_assign_scalar(self[index], value) - */ - __pyx_t_2 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assignment(__pyx_v_self, __pyx_t_2, __pyx_v_obj); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "View.MemoryView":426 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - goto __pyx_L5; - } - - /* "View.MemoryView":429 - * self.setitem_slice_assignment(self[index], obj) - * else: - * self.setitem_slice_assign_scalar(self[index], value) # <<<<<<<<<<<<<< - * else: - * self.setitem_indexed(index, value) - */ - /*else*/ { - __pyx_t_4 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_memoryview_type))))) __PYX_ERR(1, 429, __pyx_L1_error) - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assign_scalar(__pyx_v_self, ((struct __pyx_memoryview_obj *)__pyx_t_4), __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_L5:; - - /* "View.MemoryView":424 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":431 - * self.setitem_slice_assign_scalar(self[index], value) - * else: - * self.setitem_indexed(index, value) # <<<<<<<<<<<<<< - * - * cdef is_slice(self, obj): - */ - /*else*/ { - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_indexed(__pyx_v_self, __pyx_v_index, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 431, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_L4:; - - /* "View.MemoryView":418 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.memoryview.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_obj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":433 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_slice", 0); - __Pyx_INCREF(__pyx_v_obj); - - /* "View.MemoryView":434 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_obj, __pyx_memoryview_type); - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":435 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_4, &__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_5); - /*try:*/ { - - /* "View.MemoryView":436 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_6 = __Pyx_PyInt_From_int(((__pyx_v_self->flags & (~PyBUF_WRITABLE)) | PyBUF_ANY_CONTIGUOUS)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 436, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_6); - - /* "View.MemoryView":437 - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) # <<<<<<<<<<<<<< - * except TypeError: - * return None - */ - __pyx_t_7 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 437, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - - /* "View.MemoryView":436 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_8 = PyTuple_New(3); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 436, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_obj); - 
__Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_t_6); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_t_7); - __pyx_t_6 = 0; - __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_8, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 436, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF_SET(__pyx_v_obj, __pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":435 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - goto __pyx_L9_try_end; - __pyx_L4_error:; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "View.MemoryView":438 - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - * except TypeError: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_9 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_TypeError); - if (__pyx_t_9) { - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_6) < 0) __PYX_ERR(1, 438, __pyx_L6_except_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GOTREF(__pyx_t_8); - __Pyx_GOTREF(__pyx_t_6); - - /* "View.MemoryView":439 - * self.dtype_is_object) - * except TypeError: - * return None # <<<<<<<<<<<<<< - * - * return obj - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L7_except_return; - } - goto __pyx_L6_except_error; - __pyx_L6_except_error:; - - /* "View.MemoryView":435 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L1_error; - __pyx_L7_except_return:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L0; - __pyx_L9_try_end:; - } - - /* "View.MemoryView":434 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - } - - /* "View.MemoryView":441 - * return None - * - * return obj # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assignment(self, dst, src): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_obj); - __pyx_r = __pyx_v_obj; - goto __pyx_L0; - - /* "View.MemoryView":433 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_obj); - 
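-/* Summary of is_slice above: a non-memoryview value is coerced through the
- * buffer protocol with writability masked off (self.flags & ~PyBUF_WRITABLE |
- * PyBUF_ANY_CONTIGUOUS), and a TypeError is reported as None, i.e. "treat the
- * value as a scalar, not a slice". A rough Python sketch of the same control
- * flow (is_slice_sketch is a hypothetical helper for illustration; the real
- * coercion also forwards dtype_is_object):
- *
- *     def is_slice_sketch(obj):
- *         if not isinstance(obj, memoryview):
- *             try:
- *                 obj = memoryview(obj)  # read-only coercion in the real code
- *             except TypeError:
- *                 return None
- *         return obj
- */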
__Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":443 - * return obj - * - * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src) { - __Pyx_memviewslice __pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_src_slice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - __Pyx_memviewslice *__pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assignment", 0); - - /* "View.MemoryView":447 - * cdef __Pyx_memviewslice src_slice - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<< - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) - */ - if (!(likely(((__pyx_v_src) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_src, __pyx_memoryview_type))))) __PYX_ERR(1, 447, __pyx_L1_error) - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_src), (&__pyx_v_src_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 447, __pyx_L1_error) - - /* "View.MemoryView":448 - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], - * get_slice_from_memview(dst, &dst_slice)[0], # <<<<<<<<<<<<<< - * src.ndim, dst.ndim, self.dtype_is_object) - * - */ - if (!(likely(((__pyx_v_dst) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_dst, __pyx_memoryview_type))))) __PYX_ERR(1, 448, __pyx_L1_error) - __pyx_t_2 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_dst), (&__pyx_v_dst_slice)); if (unlikely(__pyx_t_2 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 448, __pyx_L1_error) - - /* "View.MemoryView":449 - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_src, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 449, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 449, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_dst, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 449, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 449, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":447 - * cdef __Pyx_memviewslice src_slice - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<< - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) - */ - __pyx_t_6 = __pyx_memoryview_copy_contents((__pyx_t_1[0]), (__pyx_t_2[0]), __pyx_t_4, __pyx_t_5, __pyx_v_self->dtype_is_object); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 447, __pyx_L1_error) - - /* "View.MemoryView":443 - * return obj - * - * cdef 
setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assignment", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":451 - * src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value) { - int __pyx_v_array[0x80]; - void *__pyx_v_tmp; - void *__pyx_v_item; - __Pyx_memviewslice *__pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_tmp_slice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - char const *__pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assign_scalar", 0); - - /* "View.MemoryView":453 - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - * cdef int array[128] - * cdef void *tmp = NULL # <<<<<<<<<<<<<< - * cdef void *item - * - */ - __pyx_v_tmp = NULL; - - /* "View.MemoryView":458 - * cdef __Pyx_memviewslice *dst_slice - * cdef __Pyx_memviewslice tmp_slice - * dst_slice = get_slice_from_memview(dst, &tmp_slice) # <<<<<<<<<<<<<< - * - * if <size_t>self.view.itemsize > sizeof(array): - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_dst, (&__pyx_v_tmp_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 458, __pyx_L1_error) - __pyx_v_dst_slice = __pyx_t_1; - - /* "View.MemoryView":460 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if <size_t>self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - __pyx_t_2 = ((((size_t)__pyx_v_self->view.itemsize) > (sizeof(__pyx_v_array))) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":461 - * - * if <size_t>self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) # <<<<<<<<<<<<<< - * if tmp == NULL: - * raise MemoryError - */ - __pyx_v_tmp = PyMem_Malloc(__pyx_v_self->view.itemsize); - - /* "View.MemoryView":462 - * if <size_t>self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - __pyx_t_2 = ((__pyx_v_tmp == NULL) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":463 - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * item = tmp - * else: - */ - PyErr_NoMemory(); __PYX_ERR(1, 463, __pyx_L1_error) - - /* "View.MemoryView":462 - * if <size_t>self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - } - - /* "View.MemoryView":464 - * if tmp == NULL: - * raise MemoryError - * item = tmp # <<<<<<<<<<<<<< - * else: - * item = <void *> array - */ - __pyx_v_item = __pyx_v_tmp; - - /* "View.MemoryView":460 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if <size_t>self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":466 - * item = tmp - * else: - * item = <void *> array # <<<<<<<<<<<<<< - * - * try: - */ - /*else*/ { - __pyx_v_item = ((void *)__pyx_v_array); - } - __pyx_L3:; - - /* "View.MemoryView":468 - * item = <void *> array - * - * try: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * (<PyObject *> item)[0] = <PyObject *> value - */ - /*try:*/ { - - /* "View.MemoryView":469 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * (<PyObject *> item)[0] = <PyObject *> value - * else: - */ - __pyx_t_2 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":470 - * try: - * if self.dtype_is_object: - * (<PyObject *> item)[0] = <PyObject *> value # <<<<<<<<<<<<<< - * else: - * self.assign_item_from_object(<char *> item, value) - */ - (((PyObject **)__pyx_v_item)[0]) = ((PyObject *)__pyx_v_value); - - /* "View.MemoryView":469 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * (<PyObject *> item)[0] = <PyObject *> value - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":472 - * (<PyObject *> item)[0] = <PyObject *> value - * else: - * self.assign_item_from_object(<char *> item, value) # <<<<<<<<<<<<<< - * - * - */ - /*else*/ { - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, ((char *)__pyx_v_item), __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 472, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L8:; - - /* "View.MemoryView":476 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - __pyx_t_2 = ((__pyx_v_self->view.suboffsets != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":477 - * - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) # <<<<<<<<<<<<<< - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - * item, self.dtype_is_object) - */ - __pyx_t_3 = assert_direct_dimensions(__pyx_v_self->view.suboffsets, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 477, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":476 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - } - - /* "View.MemoryView":478 - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, # <<<<<<<<<<<<<< - * item, self.dtype_is_object) - * finally: - */ - __pyx_memoryview_slice_assign_scalar(__pyx_v_dst_slice, __pyx_v_dst->view.ndim, __pyx_v_self->view.itemsize, __pyx_v_item, __pyx_v_self->dtype_is_object); - } - - /* "View.MemoryView":481 - * item, self.dtype_is_object) - * finally: - * PyMem_Free(tmp) # <<<<<<<<<<<<<< - * - * cdef setitem_indexed(self, index, value): - */ - /*finally:*/ { - /*normal exit:*/{ - PyMem_Free(__pyx_v_tmp); - goto __pyx_L7; - } - __pyx_L6_error:; - /*exception exit:*/{ - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __pyx_t_7 = 0; __pyx_t_8 = 0;
__pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12); - if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0)) __Pyx_ErrFetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_7); - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_10); - __Pyx_XGOTREF(__pyx_t_11); - __Pyx_XGOTREF(__pyx_t_12); - __pyx_t_4 = __pyx_lineno; __pyx_t_5 = __pyx_clineno; __pyx_t_6 = __pyx_filename; - { - PyMem_Free(__pyx_v_tmp); - } - if (PY_MAJOR_VERSION >= 3) { - __Pyx_XGIVEREF(__pyx_t_10); - __Pyx_XGIVEREF(__pyx_t_11); - __Pyx_XGIVEREF(__pyx_t_12); - __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12); - } - __Pyx_XGIVEREF(__pyx_t_7); - __Pyx_XGIVEREF(__pyx_t_8); - __Pyx_XGIVEREF(__pyx_t_9); - __Pyx_ErrRestore(__pyx_t_7, __pyx_t_8, __pyx_t_9); - __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __pyx_lineno = __pyx_t_4; __pyx_clineno = __pyx_t_5; __pyx_filename = __pyx_t_6; - goto __pyx_L1_error; - } - __pyx_L7:; - } - - /* "View.MemoryView":451 - * src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assign_scalar", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":483 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) - */ - -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - char *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_indexed", 0); - - /* "View.MemoryView":484 - * - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) # <<<<<<<<<<<<<< - * self.assign_item_from_object(itemp, value) - * - */ - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_index); if (unlikely(__pyx_t_1 == ((char *)NULL))) __PYX_ERR(1, 484, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_1; - - /* "View.MemoryView":485 - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 485, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":483 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * 
self.assign_item_from_object(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_indexed", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":487 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_v_struct = NULL; - PyObject *__pyx_v_bytesitem = 0; - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - PyObject *__pyx_t_9 = NULL; - size_t __pyx_t_10; - int __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":490 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef bytes bytesitem - * - */ - __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 490, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":493 - * cdef bytes bytesitem - * - * bytesitem = itemp[:self.view.itemsize] # <<<<<<<<<<<<<< - * try: - * result = struct.unpack(self.view.format, bytesitem) - */ - __pyx_t_1 = __Pyx_PyBytes_FromStringAndSize(__pyx_v_itemp + 0, __pyx_v_self->view.itemsize - 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_bytesitem = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":494 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_2, &__pyx_t_3, &__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - /*try:*/ { - - /* "View.MemoryView":495 - * bytesitem = itemp[:self.view.itemsize] - * try: - * result = struct.unpack(self.view.format, bytesitem) # <<<<<<<<<<<<<< - * except struct.error: - * raise ValueError("Unable to convert item to object") - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_unpack); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 495, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 495, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_8 = 1; - } - } - 
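-/* Context for convert_item_to_object above: the fallback path copies the raw
- * item bytes (itemp[:self.view.itemsize]) and decodes them with struct.unpack()
- * using the buffer's format string; a one-character format unwraps the 1-tuple.
- * Equivalent Python (convert_item_sketch is a hypothetical helper for
- * illustration only):
- *
- *     import struct
- *     def convert_item_sketch(fmt, raw):
- *         result = struct.unpack(fmt, raw)
- *         return result[0] if len(fmt) == 1 else result
- *
- *     convert_item_sketch("i", (42).to_bytes(4, "little"))  # -> 42 on a
- *     # native little-endian build
- */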
#if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_5)) { - PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem}; - __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 495, __pyx_L3_error) - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) { - PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem}; - __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 495, __pyx_L3_error) - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else - #endif - { - __pyx_t_9 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 495, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_9); - if (__pyx_t_7) { - __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_7); __pyx_t_7 = NULL; - } - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_9, 0+__pyx_t_8, __pyx_t_6); - __Pyx_INCREF(__pyx_v_bytesitem); - __Pyx_GIVEREF(__pyx_v_bytesitem); - PyTuple_SET_ITEM(__pyx_t_9, 1+__pyx_t_8, __pyx_v_bytesitem); - __pyx_t_6 = 0; - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_9, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 495, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_result = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":494 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - } - - /* "View.MemoryView":499 - * raise ValueError("Unable to convert item to object") - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - /*else:*/ { - __pyx_t_10 = strlen(__pyx_v_self->view.format); - __pyx_t_11 = ((__pyx_t_10 == 1) != 0); - if (__pyx_t_11) { - - /* "View.MemoryView":500 - * else: - * if len(self.view.format) == 1: - * return result[0] # <<<<<<<<<<<<<< - * return result - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_result, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 500, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L6_except_return; - - /* "View.MemoryView":499 - * raise ValueError("Unable to convert item to object") - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - } - - /* "View.MemoryView":501 - * if len(self.view.format) == 1: - * return result[0] - * return result # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_result); - __pyx_r = __pyx_v_result; - goto __pyx_L6_except_return; - } - __pyx_L3_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "View.MemoryView":496 - * try: - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: # <<<<<<<<<<<<<< - * raise ValueError("Unable to convert item to object") - * else: - */ - __Pyx_ErrFetch(&__pyx_t_1, &__pyx_t_5, &__pyx_t_9); - __pyx_t_6 = 
__Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_error); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 496, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = __Pyx_PyErr_GivenExceptionMatches(__pyx_t_1, __pyx_t_6); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_ErrRestore(__pyx_t_1, __pyx_t_5, __pyx_t_9); - __pyx_t_1 = 0; __pyx_t_5 = 0; __pyx_t_9 = 0; - if (__pyx_t_8) { - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_9, &__pyx_t_5, &__pyx_t_1) < 0) __PYX_ERR(1, 496, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GOTREF(__pyx_t_1); - - /* "View.MemoryView":497 - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - * raise ValueError("Unable to convert item to object") # <<<<<<<<<<<<<< - * else: - * if len(self.view.format) == 1: - */ - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 497, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_Raise(__pyx_t_6, 0, 0, 0); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __PYX_ERR(1, 497, __pyx_L5_except_error) - } - goto __pyx_L5_except_error; - __pyx_L5_except_error:; - - /* "View.MemoryView":494 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L1_error; - __pyx_L6_except_return:; - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L0; - } - - /* "View.MemoryView":487 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesitem); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":503 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_v_struct = NULL; - char __pyx_v_c; - PyObject *__pyx_v_bytesvalue = 0; - Py_ssize_t __pyx_v_i; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - Py_ssize_t __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - char *__pyx_t_11; - char *__pyx_t_12; - char *__pyx_t_13; - char *__pyx_t_14; - int __pyx_lineno = 0; - 
const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":506 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef char c - * cdef bytes bytesvalue - */ - __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 506, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":511 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - __pyx_t_2 = PyTuple_Check(__pyx_v_value); - __pyx_t_3 = (__pyx_t_2 != 0); - if (__pyx_t_3) { - - /* "View.MemoryView":512 - * - * if isinstance(value, tuple): - * bytesvalue = struct.pack(self.view.format, *value) # <<<<<<<<<<<<<< - * else: - * bytesvalue = struct.pack(self.view.format, value) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PySequence_Tuple(__pyx_v_value); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = PyNumber_Add(__pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_6, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 512, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":511 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":514 - * bytesvalue = struct.pack(self.view.format, *value) - * else: - * bytesvalue = struct.pack(self.view.format, value) # <<<<<<<<<<<<<< - * - * for i, c in enumerate(bytesvalue): - */ - /*else*/ { - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_7 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_6)) { - 
PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value}; - __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_6)) { - PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value}; - __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else - #endif - { - __pyx_t_8 = PyTuple_New(2+__pyx_t_7); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - if (__pyx_t_5) { - __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_5); __pyx_t_5 = NULL; - } - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_8, 0+__pyx_t_7, __pyx_t_1); - __Pyx_INCREF(__pyx_v_value); - __Pyx_GIVEREF(__pyx_v_value); - PyTuple_SET_ITEM(__pyx_t_8, 1+__pyx_t_7, __pyx_v_value); - __pyx_t_1 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_8, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 514, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":516 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_9 = 0; - if (unlikely(__pyx_v_bytesvalue == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' is not iterable"); - __PYX_ERR(1, 516, __pyx_L1_error) - } - __Pyx_INCREF(__pyx_v_bytesvalue); - __pyx_t_10 = __pyx_v_bytesvalue; - __pyx_t_12 = PyBytes_AS_STRING(__pyx_t_10); - __pyx_t_13 = (__pyx_t_12 + PyBytes_GET_SIZE(__pyx_t_10)); - for (__pyx_t_14 = __pyx_t_12; __pyx_t_14 < __pyx_t_13; __pyx_t_14++) { - __pyx_t_11 = __pyx_t_14; - __pyx_v_c = (__pyx_t_11[0]); - - /* "View.MemoryView":517 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - __pyx_v_i = __pyx_t_9; - - /* "View.MemoryView":516 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_9 = (__pyx_t_9 + 1); - - /* "View.MemoryView":517 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - (__pyx_v_itemp[__pyx_v_i]) = __pyx_v_c; - } - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "View.MemoryView":503 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_10); - 
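-/* Context for assign_item_from_object above: the inverse operation. The value
- * is packed with struct.pack() (splatted when it is a tuple, i.e. a multi-field
- * format), and the enumerate loop compiled above then copies the packed bytes
- * into itemp one char at a time. Python sketch (assign_item_sketch is a
- * hypothetical helper for illustration only):
- *
- *     import struct
- *     def assign_item_sketch(fmt, value):
- *         if isinstance(value, tuple):
- *             return struct.pack(fmt, *value)
- *         return struct.pack(fmt, value)
- *
- *     assert assign_item_sketch("<i", 42) == (42).to_bytes(4, "little")
- */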
__Pyx_AddTraceback("View.MemoryView.memoryview.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesvalue); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":520 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") - */ - -/* Python wrapper */ -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - Py_ssize_t *__pyx_t_4; - char *__pyx_t_5; - void *__pyx_t_6; - int __pyx_t_7; - Py_ssize_t __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (__pyx_v_info == NULL) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":521 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - */ - __pyx_t_2 = ((__pyx_v_flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_2 = (__pyx_v_self->view.readonly != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":522 - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<< - * - * if flags & PyBUF_ND: - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__11, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 522, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 522, __pyx_L1_error) - - /* "View.MemoryView":521 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - */ - } - - /* "View.MemoryView":524 - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = 
self.view.shape - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_ND) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":525 - * - * if flags & PyBUF_ND: - * info.shape = self.view.shape # <<<<<<<<<<<<<< - * else: - * info.shape = NULL - */ - __pyx_t_4 = __pyx_v_self->view.shape; - __pyx_v_info->shape = __pyx_t_4; - - /* "View.MemoryView":524 - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = self.view.shape - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":527 - * info.shape = self.view.shape - * else: - * info.shape = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_STRIDES: - */ - /*else*/ { - __pyx_v_info->shape = NULL; - } - __pyx_L6:; - - /* "View.MemoryView":529 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_STRIDES) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":530 - * - * if flags & PyBUF_STRIDES: - * info.strides = self.view.strides # <<<<<<<<<<<<<< - * else: - * info.strides = NULL - */ - __pyx_t_4 = __pyx_v_self->view.strides; - __pyx_v_info->strides = __pyx_t_4; - - /* "View.MemoryView":529 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - goto __pyx_L7; - } - - /* "View.MemoryView":532 - * info.strides = self.view.strides - * else: - * info.strides = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_INDIRECT: - */ - /*else*/ { - __pyx_v_info->strides = NULL; - } - __pyx_L7:; - - /* "View.MemoryView":534 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_INDIRECT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":535 - * - * if flags & PyBUF_INDIRECT: - * info.suboffsets = self.view.suboffsets # <<<<<<<<<<<<<< - * else: - * info.suboffsets = NULL - */ - __pyx_t_4 = __pyx_v_self->view.suboffsets; - __pyx_v_info->suboffsets = __pyx_t_4; - - /* "View.MemoryView":534 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":537 - * info.suboffsets = self.view.suboffsets - * else: - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - /*else*/ { - __pyx_v_info->suboffsets = NULL; - } - __pyx_L8:; - - /* "View.MemoryView":539 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":540 - * - * if flags & PyBUF_FORMAT: - * info.format = self.view.format # <<<<<<<<<<<<<< - * else: - * info.format = NULL - */ - __pyx_t_5 = __pyx_v_self->view.format; - __pyx_v_info->format = __pyx_t_5; - - /* "View.MemoryView":539 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":542 - * info.format = self.view.format - * else: - * info.format = NULL # <<<<<<<<<<<<<< - * - * info.buf = self.view.buf - */ - /*else*/ { - __pyx_v_info->format = NULL; - } - __pyx_L9:; - - /* "View.MemoryView":544 - * info.format = NULL - * - * info.buf = self.view.buf # <<<<<<<<<<<<<< - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - */ - __pyx_t_6 = 
__pyx_v_self->view.buf; - __pyx_v_info->buf = __pyx_t_6; - - /* "View.MemoryView":545 - * - * info.buf = self.view.buf - * info.ndim = self.view.ndim # <<<<<<<<<<<<<< - * info.itemsize = self.view.itemsize - * info.len = self.view.len - */ - __pyx_t_7 = __pyx_v_self->view.ndim; - __pyx_v_info->ndim = __pyx_t_7; - - /* "View.MemoryView":546 - * info.buf = self.view.buf - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize # <<<<<<<<<<<<<< - * info.len = self.view.len - * info.readonly = self.view.readonly - */ - __pyx_t_8 = __pyx_v_self->view.itemsize; - __pyx_v_info->itemsize = __pyx_t_8; - - /* "View.MemoryView":547 - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - * info.len = self.view.len # <<<<<<<<<<<<<< - * info.readonly = self.view.readonly - * info.obj = self - */ - __pyx_t_8 = __pyx_v_self->view.len; - __pyx_v_info->len = __pyx_t_8; - - /* "View.MemoryView":548 - * info.itemsize = self.view.itemsize - * info.len = self.view.len - * info.readonly = self.view.readonly # <<<<<<<<<<<<<< - * info.obj = self - * - */ - __pyx_t_1 = __pyx_v_self->view.readonly; - __pyx_v_info->readonly = __pyx_t_1; - - /* "View.MemoryView":549 - * info.len = self.view.len - * info.readonly = self.view.readonly - * info.obj = self # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":520 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":555 - * - * @property - * def T(self): # <<<<<<<<<<<<<< - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int 
__pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":556 - * @property - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) # <<<<<<<<<<<<<< - * transpose_memslice(&result.from_slice) - * return result - */ - __pyx_t_1 = __pyx_memoryview_copy_object(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 556, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_memoryviewslice_type))))) __PYX_ERR(1, 556, __pyx_L1_error) - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":557 - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_t_2 = __pyx_memslice_transpose((&__pyx_v_result->from_slice)); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 557, __pyx_L1_error) - - /* "View.MemoryView":558 - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - * return result # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":555 - * - * @property - * def T(self): # <<<<<<<<<<<<<< - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.T.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":561 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":562 - * @property - * def base(self): - * return self.obj # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->obj); - __pyx_r = __pyx_v_self->obj; - goto __pyx_L0; - - /* "View.MemoryView":561 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":565 - * - * @property - * def shape(self): # <<<<<<<<<<<<<< - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject 
*__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_length; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":566 - * @property - * def shape(self): - * return tuple([length for length in self.view.shape[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 566, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_4 = __pyx_v_self->view.shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_v_length = (__pyx_t_2[0]); - __pyx_t_5 = PyInt_FromSsize_t(__pyx_v_length); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 566, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(1, 566, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_t_5 = PyList_AsTuple(((PyObject*)__pyx_t_1)); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 566, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "View.MemoryView":565 - * - * @property - * def shape(self): # <<<<<<<<<<<<<< - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.shape.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":569 - * - * @property - * def strides(self): # <<<<<<<<<<<<<< - * if self.view.strides == NULL: - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_stride; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* 
"View.MemoryView":570 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError("Buffer view does not expose strides") - */ - __pyx_t_1 = ((__pyx_v_self->view.strides == NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":572 - * if self.view.strides == NULL: - * - * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<< - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - */ - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 572, __pyx_L1_error) - - /* "View.MemoryView":570 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError("Buffer view does not expose strides") - */ - } - - /* "View.MemoryView":574 - * raise ValueError("Buffer view does not expose strides") - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 574, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = (__pyx_v_self->view.strides + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.strides; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_v_stride = (__pyx_t_3[0]); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_stride); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 574, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(1, 574, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = PyList_AsTuple(((PyObject*)__pyx_t_2)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 574, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "View.MemoryView":569 - * - * @property - * def strides(self): # <<<<<<<<<<<<<< - * if self.view.strides == NULL: - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.strides.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":577 - * - * @property - * def suboffsets(self): # <<<<<<<<<<<<<< - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - 
Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - Py_ssize_t *__pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":578 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - __pyx_t_1 = ((__pyx_v_self->view.suboffsets == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":579 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_tuple__13, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":578 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - } - - /* "View.MemoryView":581 - * return (-1,) * self.view.ndim - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 581, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = (__pyx_v_self->view.suboffsets + __pyx_v_self->view.ndim); - for (__pyx_t_6 = __pyx_v_self->view.suboffsets; __pyx_t_6 < __pyx_t_5; __pyx_t_6++) { - __pyx_t_4 = __pyx_t_6; - __pyx_v_suboffset = (__pyx_t_4[0]); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_suboffset); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 581, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_3, (PyObject*)__pyx_t_2))) __PYX_ERR(1, 581, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = PyList_AsTuple(((PyObject*)__pyx_t_3)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 581, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":577 - * - * @property - * def suboffsets(self): # <<<<<<<<<<<<<< - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.suboffsets.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":584 - * - * @property - * def ndim(self): # <<<<<<<<<<<<<< - * return self.view.ndim - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":585 - * @property - * def ndim(self): - * return self.view.ndim # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 585, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":584 - * - * @property - * def ndim(self): # <<<<<<<<<<<<<< - * return self.view.ndim - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.ndim.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":588 - * - * @property - * def itemsize(self): # <<<<<<<<<<<<<< - * return self.view.itemsize - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":589 - * @property - * def itemsize(self): - * return self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 589, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":588 - * - * @property - * def itemsize(self): # <<<<<<<<<<<<<< - * return self.view.itemsize - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.itemsize.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":592 - * - * @property - * def nbytes(self): # <<<<<<<<<<<<<< - * return self.size * self.view.itemsize - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit 
code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":593 - * @property - * def nbytes(self): - * return self.size * self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 593, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 593, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 593, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":592 - * - * @property - * def nbytes(self): # <<<<<<<<<<<<<< - * return self.size * self.view.itemsize - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.nbytes.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":596 - * - * @property - * def size(self): # <<<<<<<<<<<<<< - * if self._size is None: - * result = 1 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":597 - * @property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - __pyx_t_1 = (__pyx_v_self->_size == Py_None); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":598 - * def size(self): - * if self._size is None: - * result = 1 # <<<<<<<<<<<<<< - * - * for length in self.view.shape[:self.view.ndim]: - */ - __Pyx_INCREF(__pyx_int_1); - __pyx_v_result = __pyx_int_1; - - /* "View.MemoryView":600 - * result = 1 - * - * for length in self.view.shape[:self.view.ndim]: # <<<<<<<<<<<<<< - * result *= length - * - */ - __pyx_t_4 = 
(__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.shape; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_t_6 = PyInt_FromSsize_t((__pyx_t_3[0])); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 600, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_6); - __pyx_t_6 = 0; - - /* "View.MemoryView":601 - * - * for length in self.view.shape[:self.view.ndim]: - * result *= length # <<<<<<<<<<<<<< - * - * self._size = result - */ - __pyx_t_6 = PyNumber_InPlaceMultiply(__pyx_v_result, __pyx_v_length); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 601, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF_SET(__pyx_v_result, __pyx_t_6); - __pyx_t_6 = 0; - } - - /* "View.MemoryView":603 - * result *= length - * - * self._size = result # <<<<<<<<<<<<<< - * - * return self._size - */ - __Pyx_INCREF(__pyx_v_result); - __Pyx_GIVEREF(__pyx_v_result); - __Pyx_GOTREF(__pyx_v_self->_size); - __Pyx_DECREF(__pyx_v_self->_size); - __pyx_v_self->_size = __pyx_v_result; - - /* "View.MemoryView":597 - * @property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - } - - /* "View.MemoryView":605 - * self._size = result - * - * return self._size # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->_size); - __pyx_r = __pyx_v_self->_size; - goto __pyx_L0; - - /* "View.MemoryView":596 - * - * @property - * def size(self): # <<<<<<<<<<<<<< - * if self._size is None: - * result = 1 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.size.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":607 - * return self._size - * - * def __len__(self): # <<<<<<<<<<<<<< - * if self.view.ndim >= 1: - * return self.view.shape[0] - */ - -/* Python wrapper */ -static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":608 - * - * def __len__(self): - * if self.view.ndim >= 1: # <<<<<<<<<<<<<< - * return self.view.shape[0] - * - */ - __pyx_t_1 = ((__pyx_v_self->view.ndim >= 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":609 - * def __len__(self): - * if self.view.ndim >= 1: - * return self.view.shape[0] # <<<<<<<<<<<<<< - * - * return 0 - */ - __pyx_r = (__pyx_v_self->view.shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":608 - * - * def __len__(self): - * if self.view.ndim >= 1: # <<<<<<<<<<<<<< - * return self.view.shape[0] - * - */ - } - - /* "View.MemoryView":611 - * return self.view.shape[0] - * - * return 0 # <<<<<<<<<<<<<< - * - * def __repr__(self): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* 
"View.MemoryView":607 - * return self._size - * - * def __len__(self): # <<<<<<<<<<<<<< - * if self.view.ndim >= 1: - * return self.view.shape[0] - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":613 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__, - * id(self)) - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":614 - * - * def __repr__(self): - * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 614, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 614, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 614, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":615 - * def __repr__(self): - * return "" % (self.base.__class__.__name__, - * id(self)) # <<<<<<<<<<<<<< - * - * def __str__(self): - */ - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_id, ((PyObject *)__pyx_v_self)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 615, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "View.MemoryView":614 - * - * def __repr__(self): - * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 614, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 614, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":613 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__, - * id(self)) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__repr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - 
__Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":617 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "<MemoryView of %r object>" % (self.base.__class__.__name__,) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__str__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__str__", 0); - - /* "View.MemoryView":618 - * - * def __str__(self): - * return "<MemoryView of %r object>" % (self.base.__class__.__name__,) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 618, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 618, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 618, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 618, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_object, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 618, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":617 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "<MemoryView of %r object>" % (self.base.__class__.__name__,) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.__str__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":621 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_c_contig (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice *__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_c_contig", 0); - - /* "View.MemoryView":624 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 624, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":625 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'C', self.view.ndim) # <<<<<<<<<<<<<< - * - * def is_f_contig(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'C', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 625, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":621 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_c_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":627 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_f_contig (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice *__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_f_contig", 0); - - /* "View.MemoryView":630 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 630, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* 
"View.MemoryView":631 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'F', self.view.ndim) # <<<<<<<<<<<<<< - * - * def copy(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'F', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 631, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":627 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_f_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":633 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_mslice; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy", 0); - - /* "View.MemoryView":635 - * def copy(self): - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &mslice) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_F_CONTIGUOUS)); - - /* "View.MemoryView":637 - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - * - * slice_copy(self, &mslice) # <<<<<<<<<<<<<< - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_mslice)); - - /* "View.MemoryView":638 - * - * slice_copy(self, &mslice) - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_C_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_mslice), ((char *)"c"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_C_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 638, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":643 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &mslice) # <<<<<<<<<<<<<< - * - * def copy_fortran(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_mslice)); if 
(unlikely(!__pyx_t_2)) __PYX_ERR(1, 643, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":633 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":645 - * return memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy_fortran (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy_fortran", 0); - - /* "View.MemoryView":647 - * def copy_fortran(self): - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &src) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_C_CONTIGUOUS)); - - /* "View.MemoryView":649 - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - * - * slice_copy(self, &src) # <<<<<<<<<<<<<< - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_src)); - - /* "View.MemoryView":650 - * - * slice_copy(self, &src) - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_F_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_src), ((char *)"fortran"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_F_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 650, __pyx_L1_error) - __pyx_v_dst = __pyx_t_1; - - /* "View.MemoryView":655 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &dst) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_dst)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 655, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":645 - * return memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * 
cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy_fortran", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryview___reduce_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryview_2__setstate_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj 
*__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":659 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - -static PyObject *__pyx_memoryview_new(PyObject *__pyx_v_o, int __pyx_v_flags, int __pyx_v_dtype_is_object, __Pyx_TypeInfo *__pyx_v_typeinfo) { - struct __pyx_memoryview_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_cwrapper", 0); - - /* "View.MemoryView":660 - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) # <<<<<<<<<<<<<< - * result.typeinfo = typeinfo - * return result - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 660, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 660, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 660, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_o); - __Pyx_GIVEREF(__pyx_v_o); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_o); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 660, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryview_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":661 - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, 
__Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_v_result->typeinfo = __pyx_v_typeinfo; - - /* "View.MemoryView":662 - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_check') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":659 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":665 - * - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *__pyx_v_o) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("memoryview_check", 0); - - /* "View.MemoryView":666 - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): - * return isinstance(o, memoryview) # <<<<<<<<<<<<<< - * - * cdef tuple _unellipsify(object index, int ndim): - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_o, __pyx_memoryview_type); - __pyx_r = __pyx_t_1; - goto __pyx_L0; - - /* "View.MemoryView":665 - * - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":668 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - -static PyObject *_unellipsify(PyObject *__pyx_v_index, int __pyx_v_ndim) { - PyObject *__pyx_v_tup = NULL; - PyObject *__pyx_v_result = NULL; - int __pyx_v_have_slices; - int __pyx_v_seen_ellipsis; - CYTHON_UNUSED PyObject *__pyx_v_idx = NULL; - PyObject *__pyx_v_item = NULL; - Py_ssize_t __pyx_v_nslices; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - Py_ssize_t __pyx_t_5; - PyObject *(*__pyx_t_6)(PyObject *); - PyObject *__pyx_t_7 = NULL; - Py_ssize_t __pyx_t_8; - int __pyx_t_9; - int __pyx_t_10; - PyObject *__pyx_t_11 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_unellipsify", 0); - - /* "View.MemoryView":673 - * full slices. 
- * """ - * if not isinstance(index, tuple): # <<<<<<<<<<<<<< - * tup = (index,) - * else: - */ - __pyx_t_1 = PyTuple_Check(__pyx_v_index); - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":674 - * """ - * if not isinstance(index, tuple): - * tup = (index,) # <<<<<<<<<<<<<< - * else: - * tup = index - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 674, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_index); - __Pyx_GIVEREF(__pyx_v_index); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_index); - __pyx_v_tup = __pyx_t_3; - __pyx_t_3 = 0; - - /* "View.MemoryView":673 - * full slices. - * """ - * if not isinstance(index, tuple): # <<<<<<<<<<<<<< - * tup = (index,) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":676 - * tup = (index,) - * else: - * tup = index # <<<<<<<<<<<<<< - * - * result = [] - */ - /*else*/ { - __Pyx_INCREF(__pyx_v_index); - __pyx_v_tup = __pyx_v_index; - } - __pyx_L3:; - - /* "View.MemoryView":678 - * tup = index - * - * result = [] # <<<<<<<<<<<<<< - * have_slices = False - * seen_ellipsis = False - */ - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 678, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_result = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":679 - * - * result = [] - * have_slices = False # <<<<<<<<<<<<<< - * seen_ellipsis = False - * for idx, item in enumerate(tup): - */ - __pyx_v_have_slices = 0; - - /* "View.MemoryView":680 - * result = [] - * have_slices = False - * seen_ellipsis = False # <<<<<<<<<<<<<< - * for idx, item in enumerate(tup): - * if item is Ellipsis: - */ - __pyx_v_seen_ellipsis = 0; - - /* "View.MemoryView":681 - * have_slices = False - * seen_ellipsis = False - * for idx, item in enumerate(tup): # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - __Pyx_INCREF(__pyx_int_0); - __pyx_t_3 = __pyx_int_0; - if (likely(PyList_CheckExact(__pyx_v_tup)) || PyTuple_CheckExact(__pyx_v_tup)) { - __pyx_t_4 = __pyx_v_tup; __Pyx_INCREF(__pyx_t_4); __pyx_t_5 = 0; - __pyx_t_6 = NULL; - } else { - __pyx_t_5 = -1; __pyx_t_4 = PyObject_GetIter(__pyx_v_tup); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 681, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = Py_TYPE(__pyx_t_4)->tp_iternext; if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 681, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_6)) { - if (likely(PyList_CheckExact(__pyx_t_4))) { - if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_7 = PyList_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 681, __pyx_L1_error) - #else - __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 681, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - } else { - if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_7 = PyTuple_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 681, __pyx_L1_error) - #else - __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 681, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - } - } else { - __pyx_t_7 = __pyx_t_6(__pyx_t_4); - if (unlikely(!__pyx_t_7)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, 
PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 681, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_7); - } - __Pyx_XDECREF_SET(__pyx_v_item, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_INCREF(__pyx_t_3); - __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_3); - __pyx_t_7 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 681, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); - __pyx_t_3 = __pyx_t_7; - __pyx_t_7 = 0; - - /* "View.MemoryView":682 - * seen_ellipsis = False - * for idx, item in enumerate(tup): - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - */ - __pyx_t_2 = (__pyx_v_item == __pyx_builtin_Ellipsis); - __pyx_t_1 = (__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":683 - * for idx, item in enumerate(tup): - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True - */ - __pyx_t_1 = ((!(__pyx_v_seen_ellipsis != 0)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":684 - * if item is Ellipsis: - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<< - * seen_ellipsis = True - * else: - */ - __pyx_t_8 = PyObject_Length(__pyx_v_tup); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(1, 684, __pyx_L1_error) - __pyx_t_7 = PyList_New(1 * ((((__pyx_v_ndim - __pyx_t_8) + 1)<0) ? 0:((__pyx_v_ndim - __pyx_t_8) + 1))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 684, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < ((__pyx_v_ndim - __pyx_t_8) + 1); __pyx_temp++) { - __Pyx_INCREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - PyList_SET_ITEM(__pyx_t_7, __pyx_temp, __pyx_slice__16); - } - } - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_7); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 684, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "View.MemoryView":685 - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True # <<<<<<<<<<<<<< - * else: - * result.append(slice(None)) - */ - __pyx_v_seen_ellipsis = 1; - - /* "View.MemoryView":683 - * for idx, item in enumerate(tup): - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True - */ - goto __pyx_L7; - } - - /* "View.MemoryView":687 - * seen_ellipsis = True - * else: - * result.append(slice(None)) # <<<<<<<<<<<<<< - * have_slices = True - * else: - */ - /*else*/ { - __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_slice__16); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 687, __pyx_L1_error) - } - __pyx_L7:; - - /* "View.MemoryView":688 - * else: - * result.append(slice(None)) - * have_slices = True # <<<<<<<<<<<<<< - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): - */ - __pyx_v_have_slices = 1; - - /* "View.MemoryView":682 - * seen_ellipsis = False - * for idx, item in enumerate(tup): - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - */ - goto __pyx_L6; - } - - /* "View.MemoryView":690 - * have_slices = True - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - */ - /*else*/ { - __pyx_t_2 = 
PySlice_Check(__pyx_v_item); - __pyx_t_10 = ((!(__pyx_t_2 != 0)) != 0); - if (__pyx_t_10) { - } else { - __pyx_t_1 = __pyx_t_10; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_10 = ((!(PyIndex_Check(__pyx_v_item) != 0)) != 0); - __pyx_t_1 = __pyx_t_10; - __pyx_L9_bool_binop_done:; - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":691 - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): - * raise TypeError("Cannot index with type '%s'" % type(item)) # <<<<<<<<<<<<<< - * - * have_slices = have_slices or isinstance(item, slice) - */ - __pyx_t_7 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Cannot_index_with_type_s, ((PyObject *)Py_TYPE(__pyx_v_item))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 691, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_builtin_TypeError, __pyx_t_7); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 691, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_Raise(__pyx_t_11, 0, 0, 0); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __PYX_ERR(1, 691, __pyx_L1_error) - - /* "View.MemoryView":690 - * have_slices = True - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - */ - } - - /* "View.MemoryView":693 - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - * have_slices = have_slices or isinstance(item, slice) # <<<<<<<<<<<<<< - * result.append(item) - * - */ - __pyx_t_10 = (__pyx_v_have_slices != 0); - if (!__pyx_t_10) { - } else { - __pyx_t_1 = __pyx_t_10; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_10 = PySlice_Check(__pyx_v_item); - __pyx_t_2 = (__pyx_t_10 != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L11_bool_binop_done:; - __pyx_v_have_slices = __pyx_t_1; - - /* "View.MemoryView":694 - * - * have_slices = have_slices or isinstance(item, slice) - * result.append(item) # <<<<<<<<<<<<<< - * - * nslices = ndim - len(result) - */ - __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_v_item); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 694, __pyx_L1_error) - } - __pyx_L6:; - - /* "View.MemoryView":681 - * have_slices = False - * seen_ellipsis = False - * for idx, item in enumerate(tup): # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":696 - * result.append(item) - * - * nslices = ndim - len(result) # <<<<<<<<<<<<<< - * if nslices: - * result.extend([slice(None)] * nslices) - */ - __pyx_t_5 = PyList_GET_SIZE(__pyx_v_result); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(1, 696, __pyx_L1_error) - __pyx_v_nslices = (__pyx_v_ndim - __pyx_t_5); - - /* "View.MemoryView":697 - * - * nslices = ndim - len(result) - * if nslices: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * nslices) - * - */ - __pyx_t_1 = (__pyx_v_nslices != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":698 - * nslices = ndim - len(result) - * if nslices: - * result.extend([slice(None)] * nslices) # <<<<<<<<<<<<<< - * - * return have_slices or nslices, tuple(result) - */ - __pyx_t_3 = PyList_New(1 * ((__pyx_v_nslices<0) ? 
0:__pyx_v_nslices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < __pyx_v_nslices; __pyx_temp++) { - __Pyx_INCREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - PyList_SET_ITEM(__pyx_t_3, __pyx_temp, __pyx_slice__16); - } - } - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_3); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":697 - * - * nslices = ndim - len(result) - * if nslices: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * nslices) - * - */ - } - - /* "View.MemoryView":700 - * result.extend([slice(None)] * nslices) - * - * return have_slices or nslices, tuple(result) # <<<<<<<<<<<<<< - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - */ - __Pyx_XDECREF(__pyx_r); - if (!__pyx_v_have_slices) { - } else { - __pyx_t_4 = __Pyx_PyBool_FromLong(__pyx_v_have_slices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 700, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L14_bool_binop_done; - } - __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_nslices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 700, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __pyx_t_4; - __pyx_t_4 = 0; - __pyx_L14_bool_binop_done:; - __pyx_t_4 = PyList_AsTuple(__pyx_v_result); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 700, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 700, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_t_4); - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_r = ((PyObject*)__pyx_t_11); - __pyx_t_11 = 0; - goto __pyx_L0; - - /* "View.MemoryView":668 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("View.MemoryView._unellipsify", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_tup); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_idx); - __Pyx_XDECREF(__pyx_v_item); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":702 - * return have_slices or nslices, tuple(result) - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - -static PyObject *assert_direct_dimensions(Py_ssize_t *__pyx_v_suboffsets, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assert_direct_dimensions", 0); - - /* "View.MemoryView":703 - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") - */ - 
__pyx_t_2 = (__pyx_v_suboffsets + __pyx_v_ndim); - for (__pyx_t_3 = __pyx_v_suboffsets; __pyx_t_3 < __pyx_t_2; __pyx_t_3++) { - __pyx_t_1 = __pyx_t_3; - __pyx_v_suboffset = (__pyx_t_1[0]); - - /* "View.MemoryView":704 - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError("Indirect dimensions not supported") - * - */ - __pyx_t_4 = ((__pyx_v_suboffset >= 0) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":705 - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__17, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 705, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_Raise(__pyx_t_5, 0, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __PYX_ERR(1, 705, __pyx_L1_error) - - /* "View.MemoryView":704 - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError("Indirect dimensions not supported") - * - */ - } - } - - /* "View.MemoryView":702 - * return have_slices or nslices, tuple(result) - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.assert_direct_dimensions", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":712 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *__pyx_v_memview, PyObject *__pyx_v_indices) { - int __pyx_v_new_ndim; - int __pyx_v_suboffset_dim; - int __pyx_v_dim; - __Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - __Pyx_memviewslice *__pyx_v_p_src; - struct __pyx_memoryviewslice_obj *__pyx_v_memviewsliceobj = 0; - __Pyx_memviewslice *__pyx_v_p_dst; - int *__pyx_v_p_suboffset_dim; - Py_ssize_t __pyx_v_start; - Py_ssize_t __pyx_v_stop; - Py_ssize_t __pyx_v_step; - int __pyx_v_have_start; - int __pyx_v_have_stop; - int __pyx_v_have_step; - PyObject *__pyx_v_index = NULL; - struct __pyx_memoryview_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - struct __pyx_memoryview_obj *__pyx_t_4; - char *__pyx_t_5; - int __pyx_t_6; - Py_ssize_t __pyx_t_7; - PyObject *(*__pyx_t_8)(PyObject *); - PyObject *__pyx_t_9 = NULL; - Py_ssize_t __pyx_t_10; - int __pyx_t_11; - Py_ssize_t __pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memview_slice", 0); - - /* "View.MemoryView":713 - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): - * cdef int new_ndim = 0, suboffset_dim = -1, dim # <<<<<<<<<<<<<< - * cdef bint negative_step - * cdef __Pyx_memviewslice src, dst - */ - __pyx_v_new_ndim = 0; - __pyx_v_suboffset_dim = -1; - - /* 
"View.MemoryView":720 - * - * - * memset(&dst, 0, sizeof(dst)) # <<<<<<<<<<<<<< - * - * cdef _memoryviewslice memviewsliceobj - */ - (void)(memset((&__pyx_v_dst), 0, (sizeof(__pyx_v_dst)))); - - /* "View.MemoryView":724 - * cdef _memoryviewslice memviewsliceobj - * - * assert memview.view.ndim > 0 # <<<<<<<<<<<<<< - * - * if isinstance(memview, _memoryviewslice): - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(!Py_OptimizeFlag)) { - if (unlikely(!((__pyx_v_memview->view.ndim > 0) != 0))) { - PyErr_SetNone(PyExc_AssertionError); - __PYX_ERR(1, 724, __pyx_L1_error) - } - } - #endif - - /* "View.MemoryView":726 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":727 - * - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview # <<<<<<<<<<<<<< - * p_src = &memviewsliceobj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 727, __pyx_L1_error) - __pyx_t_3 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_3); - __pyx_v_memviewsliceobj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":728 - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, &src) - */ - __pyx_v_p_src = (&__pyx_v_memviewsliceobj->from_slice); - - /* "View.MemoryView":726 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - goto __pyx_L3; - } - - /* "View.MemoryView":730 - * p_src = &memviewsliceobj.from_slice - * else: - * slice_copy(memview, &src) # <<<<<<<<<<<<<< - * p_src = &src - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_src)); - - /* "View.MemoryView":731 - * else: - * slice_copy(memview, &src) - * p_src = &src # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_p_src = (&__pyx_v_src); - } - __pyx_L3:; - - /* "View.MemoryView":737 - * - * - * dst.memview = p_src.memview # <<<<<<<<<<<<<< - * dst.data = p_src.data - * - */ - __pyx_t_4 = __pyx_v_p_src->memview; - __pyx_v_dst.memview = __pyx_t_4; - - /* "View.MemoryView":738 - * - * dst.memview = p_src.memview - * dst.data = p_src.data # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __pyx_v_p_src->data; - __pyx_v_dst.data = __pyx_t_5; - - /* "View.MemoryView":743 - * - * - * cdef __Pyx_memviewslice *p_dst = &dst # <<<<<<<<<<<<<< - * cdef int *p_suboffset_dim = &suboffset_dim - * cdef Py_ssize_t start, stop, step - */ - __pyx_v_p_dst = (&__pyx_v_dst); - - /* "View.MemoryView":744 - * - * cdef __Pyx_memviewslice *p_dst = &dst - * cdef int *p_suboffset_dim = &suboffset_dim # <<<<<<<<<<<<<< - * cdef Py_ssize_t start, stop, step - * cdef bint have_start, have_stop, have_step - */ - __pyx_v_p_suboffset_dim = (&__pyx_v_suboffset_dim); - - /* "View.MemoryView":748 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * slice_memviewslice( - */ - __pyx_t_6 = 0; - if (likely(PyList_CheckExact(__pyx_v_indices)) || PyTuple_CheckExact(__pyx_v_indices)) { - 
__pyx_t_3 = __pyx_v_indices; __Pyx_INCREF(__pyx_t_3); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - } else { - __pyx_t_7 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_v_indices); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 748, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 748, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_8)) { - if (likely(PyList_CheckExact(__pyx_t_3))) { - if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_9 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 748, __pyx_L1_error) - #else - __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 748, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - } else { - if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_9 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 748, __pyx_L1_error) - #else - __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 748, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - } - } else { - __pyx_t_9 = __pyx_t_8(__pyx_t_3); - if (unlikely(!__pyx_t_9)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 748, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_9); - } - __Pyx_XDECREF_SET(__pyx_v_index, __pyx_t_9); - __pyx_t_9 = 0; - __pyx_v_dim = __pyx_t_6; - __pyx_t_6 = (__pyx_t_6 + 1); - - /* "View.MemoryView":749 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * slice_memviewslice( - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - */ - __pyx_t_2 = (PyIndex_Check(__pyx_v_index) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":753 - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - * index, 0, 0, # start, stop, step # <<<<<<<<<<<<<< - * 0, 0, 0, # have_{start,stop,step} - * False) - */ - __pyx_t_10 = __Pyx_PyIndex_AsSsize_t(__pyx_v_index); if (unlikely((__pyx_t_10 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 753, __pyx_L1_error) - - /* "View.MemoryView":750 - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_t_10, 0, 0, 0, 0, 0, 0); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 750, __pyx_L1_error) - - /* "View.MemoryView":749 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * slice_memviewslice( - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - */ - goto __pyx_L6; - } - - /* "View.MemoryView":756 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - __pyx_t_2 = (__pyx_v_index == Py_None); - __pyx_t_1 = 
(__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":757 - * False) - * elif index is None: - * p_dst.shape[new_ndim] = 1 # <<<<<<<<<<<<<< - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - */ - (__pyx_v_p_dst->shape[__pyx_v_new_ndim]) = 1; - - /* "View.MemoryView":758 - * elif index is None: - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 # <<<<<<<<<<<<<< - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 - */ - (__pyx_v_p_dst->strides[__pyx_v_new_ndim]) = 0; - - /* "View.MemoryView":759 - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 # <<<<<<<<<<<<<< - * new_ndim += 1 - * else: - */ - (__pyx_v_p_dst->suboffsets[__pyx_v_new_ndim]) = -1L; - - /* "View.MemoryView":760 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 # <<<<<<<<<<<<<< - * else: - * start = index.start or 0 - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - - /* "View.MemoryView":756 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - goto __pyx_L6; - } - - /* "View.MemoryView":762 - * new_ndim += 1 - * else: - * start = index.start or 0 # <<<<<<<<<<<<<< - * stop = index.stop or 0 - * step = index.step or 0 - */ - /*else*/ { - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 762, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 762, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 762, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L7_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L7_bool_binop_done:; - __pyx_v_start = __pyx_t_10; - - /* "View.MemoryView":763 - * else: - * start = index.start or 0 - * stop = index.stop or 0 # <<<<<<<<<<<<<< - * step = index.step or 0 - * - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 763, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 763, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 763, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L9_bool_binop_done:; - __pyx_v_stop = __pyx_t_10; - - /* "View.MemoryView":764 - * start = index.start or 0 - * stop = index.stop or 0 - * step = index.step or 0 # <<<<<<<<<<<<<< - * - * have_start = index.start is not None - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 764, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 764, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 764, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - 
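- /* Note on the pattern being compiled here: `start = index.start or 0` is
-  * generated as a truth test (__Pyx_PyObject_IsTrue) that short-circuits to
-  * 0, so both None and an explicit 0 coerce to 0. Presence is therefore
-  * tracked separately just below:
-  *
-  *     start = index.start or 0               # None and 0 both end up as 0 ...
-  *     have_start = index.start is not None   # ... so record presence here
-  */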
__Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L11_bool_binop_done:; - __pyx_v_step = __pyx_t_10; - - /* "View.MemoryView":766 - * step = index.step or 0 - * - * have_start = index.start is not None # <<<<<<<<<<<<<< - * have_stop = index.stop is not None - * have_step = index.step is not None - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 766, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_start = __pyx_t_1; - - /* "View.MemoryView":767 - * - * have_start = index.start is not None - * have_stop = index.stop is not None # <<<<<<<<<<<<<< - * have_step = index.step is not None - * - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 767, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_stop = __pyx_t_1; - - /* "View.MemoryView":768 - * have_start = index.start is not None - * have_stop = index.stop is not None - * have_step = index.step is not None # <<<<<<<<<<<<<< - * - * slice_memviewslice( - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 768, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_step = __pyx_t_1; - - /* "View.MemoryView":770 - * have_step = index.step is not None - * - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_v_start, __pyx_v_stop, __pyx_v_step, __pyx_v_have_start, __pyx_v_have_stop, __pyx_v_have_step, 1); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 770, __pyx_L1_error) - - /* "View.MemoryView":776 - * have_start, have_stop, have_step, - * True) - * new_ndim += 1 # <<<<<<<<<<<<<< - * - * if isinstance(memview, _memoryviewslice): - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - } - __pyx_L6:; - - /* "View.MemoryView":748 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * slice_memviewslice( - */ - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":778 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":779 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __Pyx_XDECREF(((PyObject *)__pyx_r)); - - /* "View.MemoryView":780 - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, # <<<<<<<<<<<<<< - * memviewsliceobj.to_dtype_func, - * memview.dtype_is_object) - */ - if (unlikely(!__pyx_v_memviewsliceobj)) 
{ __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 780, __pyx_L1_error) } - - /* "View.MemoryView":781 - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * else: - */ - if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 781, __pyx_L1_error) } - - /* "View.MemoryView":779 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, __pyx_v_memviewsliceobj->to_object_func, __pyx_v_memviewsliceobj->to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 779, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 779, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3); - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":778 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - } - - /* "View.MemoryView":784 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - /*else*/ { - __Pyx_XDECREF(((PyObject *)__pyx_r)); - - /* "View.MemoryView":785 - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, NULL, NULL, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 784, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "View.MemoryView":784 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 784, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3); - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":712 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("View.MemoryView.memview_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_memviewsliceobj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":809 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *__pyx_v_dst, Py_ssize_t __pyx_v_shape, Py_ssize_t __pyx_v_stride, Py_ssize_t __pyx_v_suboffset, int __pyx_v_dim, int __pyx_v_new_ndim, int *__pyx_v_suboffset_dim, Py_ssize_t __pyx_v_start, Py_ssize_t __pyx_v_stop, Py_ssize_t __pyx_v_step, int 
__pyx_v_have_start, int __pyx_v_have_stop, int __pyx_v_have_step, int __pyx_v_is_slice) { - Py_ssize_t __pyx_v_new_shape; - int __pyx_v_negative_step; - int __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":829 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - __pyx_t_1 = ((!(__pyx_v_is_slice != 0)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":831 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - __pyx_t_1 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":832 - * - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if not 0 <= start < shape: - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":831 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - } - - /* "View.MemoryView":833 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - __pyx_t_1 = (0 <= __pyx_v_start); - if (__pyx_t_1) { - __pyx_t_1 = (__pyx_v_start < __pyx_v_shape); - } - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":834 - * start += shape - * if not 0 <= start < shape: - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) # <<<<<<<<<<<<<< - * else: - * - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"Index out of bounds (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 834, __pyx_L1_error) - - /* "View.MemoryView":833 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - } - - /* "View.MemoryView":829 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":837 - * else: - * - * negative_step = have_step != 0 and step < 0 # <<<<<<<<<<<<<< - * - * if have_step and step == 0: - */ - /*else*/ { - __pyx_t_1 = ((__pyx_v_have_step != 0) != 0); - if (__pyx_t_1) { - } else { - __pyx_t_2 = __pyx_t_1; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_1 = ((__pyx_v_step < 0) != 0); - __pyx_t_2 = __pyx_t_1; - __pyx_L6_bool_binop_done:; - __pyx_v_negative_step = __pyx_t_2; - - /* "View.MemoryView":839 - * negative_step = have_step != 0 and step < 0 - * - * if have_step and step == 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) - * - */ - __pyx_t_1 = (__pyx_v_have_step != 0); - if (__pyx_t_1) { - } else { - __pyx_t_2 = __pyx_t_1; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_1 = ((__pyx_v_step == 0) != 0); - __pyx_t_2 = __pyx_t_1; - __pyx_L9_bool_binop_done:; - if (__pyx_t_2) { - - /* "View.MemoryView":840 - * - * if have_step and step == 0: - * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Step may not be zero (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 840, __pyx_L1_error) - - /* "View.MemoryView":839 - * negative_step = have_step != 0 and step < 0 - * - * if have_step and step == 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, 
"Step may not be zero (axis %d)", dim) - * - */ - } - - /* "View.MemoryView":843 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - __pyx_t_2 = (__pyx_v_have_start != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":844 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - __pyx_t_2 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":845 - * if have_start: - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if start < 0: - * start = 0 - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":846 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - __pyx_t_2 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":847 - * start += shape - * if start < 0: - * start = 0 # <<<<<<<<<<<<<< - * elif start >= shape: - * if negative_step: - */ - __pyx_v_start = 0; - - /* "View.MemoryView":846 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - } - - /* "View.MemoryView":844 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - goto __pyx_L12; - } - - /* "View.MemoryView":848 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - __pyx_t_2 = ((__pyx_v_start >= __pyx_v_shape) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":849 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":850 - * elif start >= shape: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = shape - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":849 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - goto __pyx_L14; - } - - /* "View.MemoryView":852 - * start = shape - 1 - * else: - * start = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - /*else*/ { - __pyx_v_start = __pyx_v_shape; - } - __pyx_L14:; - - /* "View.MemoryView":848 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - } - __pyx_L12:; - - /* "View.MemoryView":843 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - goto __pyx_L11; - } - - /* "View.MemoryView":854 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - /*else*/ { - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":855 - * else: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = 0 - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":854 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - goto __pyx_L15; - } - - /* "View.MemoryView":857 - * start = shape - 1 - * else: - * start = 0 # <<<<<<<<<<<<<< - * - * if have_stop: - */ - /*else*/ { - __pyx_v_start = 0; - } - __pyx_L15:; - } - __pyx_L11:; - - /* "View.MemoryView":859 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - __pyx_t_2 = (__pyx_v_have_stop != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":860 - * - * if have_stop: - * if stop < 0: # 
<<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - __pyx_t_2 = ((__pyx_v_stop < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":861 - * if have_stop: - * if stop < 0: - * stop += shape # <<<<<<<<<<<<<< - * if stop < 0: - * stop = 0 - */ - __pyx_v_stop = (__pyx_v_stop + __pyx_v_shape); - - /* "View.MemoryView":862 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - __pyx_t_2 = ((__pyx_v_stop < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":863 - * stop += shape - * if stop < 0: - * stop = 0 # <<<<<<<<<<<<<< - * elif stop > shape: - * stop = shape - */ - __pyx_v_stop = 0; - - /* "View.MemoryView":862 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - } - - /* "View.MemoryView":860 - * - * if have_stop: - * if stop < 0: # <<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - goto __pyx_L17; - } - - /* "View.MemoryView":864 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - __pyx_t_2 = ((__pyx_v_stop > __pyx_v_shape) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":865 - * stop = 0 - * elif stop > shape: - * stop = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - __pyx_v_stop = __pyx_v_shape; - - /* "View.MemoryView":864 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - } - __pyx_L17:; - - /* "View.MemoryView":859 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - goto __pyx_L16; - } - - /* "View.MemoryView":867 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - /*else*/ { - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":868 - * else: - * if negative_step: - * stop = -1 # <<<<<<<<<<<<<< - * else: - * stop = shape - */ - __pyx_v_stop = -1L; - - /* "View.MemoryView":867 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - goto __pyx_L19; - } - - /* "View.MemoryView":870 - * stop = -1 - * else: - * stop = shape # <<<<<<<<<<<<<< - * - * if not have_step: - */ - /*else*/ { - __pyx_v_stop = __pyx_v_shape; - } - __pyx_L19:; - } - __pyx_L16:; - - /* "View.MemoryView":872 - * stop = shape - * - * if not have_step: # <<<<<<<<<<<<<< - * step = 1 - * - */ - __pyx_t_2 = ((!(__pyx_v_have_step != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":873 - * - * if not have_step: - * step = 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_step = 1; - - /* "View.MemoryView":872 - * stop = shape - * - * if not have_step: # <<<<<<<<<<<<<< - * step = 1 - * - */ - } - - /* "View.MemoryView":877 - * - * with cython.cdivision(True): - * new_shape = (stop - start) // step # <<<<<<<<<<<<<< - * - * if (stop - start) - step * new_shape: - */ - __pyx_v_new_shape = ((__pyx_v_stop - __pyx_v_start) / __pyx_v_step); - - /* "View.MemoryView":879 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * new_shape += 1 - * - */ - __pyx_t_2 = (((__pyx_v_stop - __pyx_v_start) - (__pyx_v_step * __pyx_v_new_shape)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":880 - * - * if (stop - start) - step * new_shape: - * new_shape += 1 # <<<<<<<<<<<<<< - * - * if new_shape < 0: - */ - __pyx_v_new_shape = (__pyx_v_new_shape + 1); - - /* "View.MemoryView":879 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * 
new_shape += 1 - * - */ - } - - /* "View.MemoryView":882 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - __pyx_t_2 = ((__pyx_v_new_shape < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":883 - * - * if new_shape < 0: - * new_shape = 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_new_shape = 0; - - /* "View.MemoryView":882 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - } - - /* "View.MemoryView":886 - * - * - * dst.strides[new_ndim] = stride * step # <<<<<<<<<<<<<< - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset - */ - (__pyx_v_dst->strides[__pyx_v_new_ndim]) = (__pyx_v_stride * __pyx_v_step); - - /* "View.MemoryView":887 - * - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape # <<<<<<<<<<<<<< - * dst.suboffsets[new_ndim] = suboffset - * - */ - (__pyx_v_dst->shape[__pyx_v_new_ndim]) = __pyx_v_new_shape; - - /* "View.MemoryView":888 - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_dst->suboffsets[__pyx_v_new_ndim]) = __pyx_v_suboffset; - } - __pyx_L3:; - - /* "View.MemoryView":891 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - __pyx_t_2 = (((__pyx_v_suboffset_dim[0]) < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":892 - * - * if suboffset_dim[0] < 0: - * dst.data += start * stride # <<<<<<<<<<<<<< - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride - */ - __pyx_v_dst->data = (__pyx_v_dst->data + (__pyx_v_start * __pyx_v_stride)); - - /* "View.MemoryView":891 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - goto __pyx_L23; - } - - /* "View.MemoryView":894 - * dst.data += start * stride - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride # <<<<<<<<<<<<<< - * - * if suboffset >= 0: - */ - /*else*/ { - __pyx_t_3 = (__pyx_v_suboffset_dim[0]); - (__pyx_v_dst->suboffsets[__pyx_t_3]) = ((__pyx_v_dst->suboffsets[__pyx_t_3]) + (__pyx_v_start * __pyx_v_stride)); - } - __pyx_L23:; - - /* "View.MemoryView":896 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":897 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset - */ - __pyx_t_2 = ((!(__pyx_v_is_slice != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":898 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = ( dst.data)[0] + suboffset - * else: - */ - __pyx_t_2 = ((__pyx_v_new_ndim == 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":899 - * if not is_slice: - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset # <<<<<<<<<<<<<< - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " - */ - __pyx_v_dst->data = ((((char **)__pyx_v_dst->data)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":898 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = ( dst.data)[0] + suboffset - * else: - */ - goto __pyx_L26; - } - - /* "View.MemoryView":901 - * dst.data = ( dst.data)[0] + suboffset - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " # <<<<<<<<<<<<<< - * 
"must be indexed and not sliced", dim) - * else: - */ - /*else*/ { - - /* "View.MemoryView":902 - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " - * "must be indexed and not sliced", dim) # <<<<<<<<<<<<<< - * else: - * suboffset_dim[0] = new_ndim - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"All dimensions preceding dimension %d must be indexed and not sliced"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 901, __pyx_L1_error) - } - __pyx_L26:; - - /* "View.MemoryView":897 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset - */ - goto __pyx_L25; - } - - /* "View.MemoryView":904 - * "must be indexed and not sliced", dim) - * else: - * suboffset_dim[0] = new_ndim # <<<<<<<<<<<<<< - * - * return 0 - */ - /*else*/ { - (__pyx_v_suboffset_dim[0]) = __pyx_v_new_ndim; - } - __pyx_L25:; - - /* "View.MemoryView":896 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - } - - /* "View.MemoryView":906 - * suboffset_dim[0] = new_ndim - * - * return 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":809 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.slice_memviewslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = -1; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":912 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - -static char *__pyx_pybuffer_index(Py_buffer *__pyx_v_view, char *__pyx_v_bufp, Py_ssize_t __pyx_v_index, Py_ssize_t __pyx_v_dim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_suboffset; - Py_ssize_t __pyx_v_itemsize; - char *__pyx_v_resultp; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("pybuffer_index", 0); - - /* "View.MemoryView":914 - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 # <<<<<<<<<<<<<< - * cdef Py_ssize_t itemsize = view.itemsize - * cdef char *resultp - */ - __pyx_v_suboffset = -1L; - - /* "View.MemoryView":915 - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - * cdef Py_ssize_t itemsize = view.itemsize # <<<<<<<<<<<<<< - * cdef char *resultp - * - */ - __pyx_t_1 = __pyx_v_view->itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":918 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len / itemsize - * stride = itemsize - */ - __pyx_t_2 = ((__pyx_v_view->ndim == 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":919 - * - * if view.ndim == 
0: - * shape = view.len / itemsize # <<<<<<<<<<<<<< - * stride = itemsize - * else: - */ - if (unlikely(__pyx_v_itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 919, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_view->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 919, __pyx_L1_error) - } - __pyx_v_shape = __Pyx_div_Py_ssize_t(__pyx_v_view->len, __pyx_v_itemsize); - - /* "View.MemoryView":920 - * if view.ndim == 0: - * shape = view.len / itemsize - * stride = itemsize # <<<<<<<<<<<<<< - * else: - * shape = view.shape[dim] - */ - __pyx_v_stride = __pyx_v_itemsize; - - /* "View.MemoryView":918 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len / itemsize - * stride = itemsize - */ - goto __pyx_L3; - } - - /* "View.MemoryView":922 - * stride = itemsize - * else: - * shape = view.shape[dim] # <<<<<<<<<<<<<< - * stride = view.strides[dim] - * if view.suboffsets != NULL: - */ - /*else*/ { - __pyx_v_shape = (__pyx_v_view->shape[__pyx_v_dim]); - - /* "View.MemoryView":923 - * else: - * shape = view.shape[dim] - * stride = view.strides[dim] # <<<<<<<<<<<<<< - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] - */ - __pyx_v_stride = (__pyx_v_view->strides[__pyx_v_dim]); - - /* "View.MemoryView":924 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - __pyx_t_2 = ((__pyx_v_view->suboffsets != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":925 - * stride = view.strides[dim] - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] # <<<<<<<<<<<<<< - * - * if index < 0: - */ - __pyx_v_suboffset = (__pyx_v_view->suboffsets[__pyx_v_dim]); - - /* "View.MemoryView":924 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - } - } - __pyx_L3:; - - /* "View.MemoryView":927 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - __pyx_t_2 = ((__pyx_v_index < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":928 - * - * if index < 0: - * index += view.shape[dim] # <<<<<<<<<<<<<< - * if index < 0: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - */ - __pyx_v_index = (__pyx_v_index + (__pyx_v_view->shape[__pyx_v_dim])); - - /* "View.MemoryView":929 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - __pyx_t_2 = ((__pyx_v_index < 0) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":930 - * index += view.shape[dim] - * if index < 0: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<< - * - * if index >= shape: - */ - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 930, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 930, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = 
__Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 930, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 930, __pyx_L1_error) - - /* "View.MemoryView":929 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - } - - /* "View.MemoryView":927 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - } - - /* "View.MemoryView":932 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - __pyx_t_2 = ((__pyx_v_index >= __pyx_v_shape) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":933 - * - * if index >= shape: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<< - * - * resultp = bufp + index * stride - */ - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 933, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 933, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 933, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 933, __pyx_L1_error) - - /* "View.MemoryView":932 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - } - - /* "View.MemoryView":935 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * resultp = bufp + index * stride # <<<<<<<<<<<<<< - * if suboffset >= 0: - * resultp = ( resultp)[0] + suboffset - */ - __pyx_v_resultp = (__pyx_v_bufp + (__pyx_v_index * __pyx_v_stride)); - - /* "View.MemoryView":936 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = ( resultp)[0] + suboffset - * - */ - __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":937 - * resultp = bufp + index * stride - * if suboffset >= 0: - * resultp = ( resultp)[0] + suboffset # <<<<<<<<<<<<<< - * - * return resultp - */ - __pyx_v_resultp = ((((char **)__pyx_v_resultp)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":936 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = ( resultp)[0] + suboffset - * - */ - } - - /* "View.MemoryView":939 - * resultp = ( resultp)[0] + suboffset - * - * return resultp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_resultp; - goto __pyx_L0; - - /* "View.MemoryView":912 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - 
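- /* Reader's aid: pybuffer_index, consolidated from the quoted fragments
-  * above -- a bounds-checked pointer computation over one buffer axis. The
-  * <char **> casts, which the quoted comments lost, are restored here from
-  * the C body. A sketch, not part of the generated output:
-  *
-  *     cdef char *pybuffer_index(Py_buffer *view, char *bufp,
-  *                               Py_ssize_t index, Py_ssize_t dim) except NULL:
-  *         cdef Py_ssize_t shape, stride, suboffset = -1
-  *         cdef Py_ssize_t itemsize = view.itemsize
-  *         cdef char *resultp
-  *
-  *         if view.ndim == 0:
-  *             shape = view.len / itemsize
-  *             stride = itemsize
-  *         else:
-  *             shape = view.shape[dim]
-  *             stride = view.strides[dim]
-  *             if view.suboffsets != NULL:
-  *                 suboffset = view.suboffsets[dim]
-  *
-  *         if index < 0:
-  *             index += view.shape[dim]
-  *             if index < 0:
-  *                 raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
-  *
-  *         if index >= shape:
-  *             raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
-  *
-  *         resultp = bufp + index * stride
-  *         if suboffset >= 0:
-  *             resultp = (<char **> resultp)[0] + suboffset
-  *
-  *         return resultp
-  */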
__Pyx_AddTraceback("View.MemoryView.pybuffer_index", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":945 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - -static int __pyx_memslice_transpose(__Pyx_memviewslice *__pyx_v_memslice) { - int __pyx_v_ndim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - int __pyx_v_i; - int __pyx_v_j; - int __pyx_r; - int __pyx_t_1; - Py_ssize_t *__pyx_t_2; - long __pyx_t_3; - long __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_ssize_t __pyx_t_6; - int __pyx_t_7; - int __pyx_t_8; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":946 - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: - * cdef int ndim = memslice.memview.view.ndim # <<<<<<<<<<<<<< - * - * cdef Py_ssize_t *shape = memslice.shape - */ - __pyx_t_1 = __pyx_v_memslice->memview->view.ndim; - __pyx_v_ndim = __pyx_t_1; - - /* "View.MemoryView":948 - * cdef int ndim = memslice.memview.view.ndim - * - * cdef Py_ssize_t *shape = memslice.shape # <<<<<<<<<<<<<< - * cdef Py_ssize_t *strides = memslice.strides - * - */ - __pyx_t_2 = __pyx_v_memslice->shape; - __pyx_v_shape = __pyx_t_2; - - /* "View.MemoryView":949 - * - * cdef Py_ssize_t *shape = memslice.shape - * cdef Py_ssize_t *strides = memslice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = __pyx_v_memslice->strides; - __pyx_v_strides = __pyx_t_2; - - /* "View.MemoryView":953 - * - * cdef int i, j - * for i in range(ndim / 2): # <<<<<<<<<<<<<< - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - */ - __pyx_t_3 = __Pyx_div_long(__pyx_v_ndim, 2); - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_4; __pyx_t_1+=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":954 - * cdef int i, j - * for i in range(ndim / 2): - * j = ndim - 1 - i # <<<<<<<<<<<<<< - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] - */ - __pyx_v_j = ((__pyx_v_ndim - 1) - __pyx_v_i); - - /* "View.MemoryView":955 - * for i in range(ndim / 2): - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] # <<<<<<<<<<<<<< - * shape[i], shape[j] = shape[j], shape[i] - * - */ - __pyx_t_5 = (__pyx_v_strides[__pyx_v_j]); - __pyx_t_6 = (__pyx_v_strides[__pyx_v_i]); - (__pyx_v_strides[__pyx_v_i]) = __pyx_t_5; - (__pyx_v_strides[__pyx_v_j]) = __pyx_t_6; - - /* "View.MemoryView":956 - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] # <<<<<<<<<<<<<< - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - */ - __pyx_t_6 = (__pyx_v_shape[__pyx_v_j]); - __pyx_t_5 = (__pyx_v_shape[__pyx_v_i]); - (__pyx_v_shape[__pyx_v_i]) = __pyx_t_6; - (__pyx_v_shape[__pyx_v_j]) = __pyx_t_5; - - /* "View.MemoryView":958 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - __pyx_t_8 = (((__pyx_v_memslice->suboffsets[__pyx_v_i]) >= 0) != 0); - if (!__pyx_t_8) { - } else { - __pyx_t_7 = __pyx_t_8; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_8 = 
(((__pyx_v_memslice->suboffsets[__pyx_v_j]) >= 0) != 0); - __pyx_t_7 = __pyx_t_8; - __pyx_L6_bool_binop_done:; - if (__pyx_t_7) { - - /* "View.MemoryView":959 - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") # <<<<<<<<<<<<<< - * - * return 1 - */ - __pyx_t_9 = __pyx_memoryview_err(__pyx_builtin_ValueError, ((char *)"Cannot transpose memoryview with indirect dimensions")); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 959, __pyx_L1_error) - - /* "View.MemoryView":958 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - } - } - - /* "View.MemoryView":961 - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - * return 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 1; - goto __pyx_L0; - - /* "View.MemoryView":945 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.transpose_memslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = 0; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":978 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - */ - -/* Python wrapper */ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":979 - * - * def __dealloc__(self): - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __PYX_XDEC_MEMVIEW((&__pyx_v_self->from_slice), 1); - - /* "View.MemoryView":978 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":981 - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) - */ - -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - 
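- /* What follows compiles _memoryviewslice.convert_item_to_object (and,
-  * further down, assign_item_from_object): both dispatch through the
-  * function pointers captured when the slice was created and fall back to
-  * the base memoryview implementation. Per the quoted source below --
-  * roughly:
-  *
-  *     cdef convert_item_to_object(self, char *itemp):
-  *         if self.to_object_func != NULL:
-  *             return self.to_object_func(itemp)
-  *         else:
-  *             return memoryview.convert_item_to_object(self, itemp)
-  */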
__Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":982 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - __pyx_t_1 = ((__pyx_v_self->to_object_func != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":983 - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) # <<<<<<<<<<<<<< - * else: - * return memoryview.convert_item_to_object(self, itemp) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_v_self->to_object_func(__pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 983, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":982 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - } - - /* "View.MemoryView":985 - * return self.to_object_func(itemp) - * else: - * return memoryview.convert_item_to_object(self, itemp) # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_convert_item_to_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 985, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":981 - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":987 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":988 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - __pyx_t_1 = ((__pyx_v_self->to_dtype_func != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":989 - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) # <<<<<<<<<<<<<< - * else: - * memoryview.assign_item_from_object(self, itemp, value) - */ - __pyx_t_2 = __pyx_v_self->to_dtype_func(__pyx_v_itemp, __pyx_v_value); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 989, __pyx_L1_error) - - /* "View.MemoryView":988 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if 
self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":991 - * self.to_dtype_func(itemp, value) - * else: - * memoryview.assign_item_from_object(self, itemp, value) # <<<<<<<<<<<<<< - * - * @property - */ - /*else*/ { - __pyx_t_3 = __pyx_memoryview_assign_item_from_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 991, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":987 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":994 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":995 - * @property - * def base(self): - * return self.from_object # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->from_object); - __pyx_r = __pyx_v_self->from_object; - goto __pyx_L0; - - /* "View.MemoryView":994 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryviewslice___reduce_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryviewslice_2__setstate_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__19, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code 
*/ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1001 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice __pyx_v_memviewslice, int __pyx_v_ndim, PyObject *(*__pyx_v_to_object_func)(char *), int (*__pyx_v_to_dtype_func)(char *, PyObject *), int __pyx_v_dtype_is_object) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - __Pyx_TypeInfo *__pyx_t_4; - Py_buffer __pyx_t_5; - Py_ssize_t *__pyx_t_6; - Py_ssize_t *__pyx_t_7; - Py_ssize_t *__pyx_t_8; - Py_ssize_t __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_fromslice", 0); - - /* "View.MemoryView":1009 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_1 = ((((PyObject *)__pyx_v_memviewslice.memview) == Py_None) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1010 - * - * if memviewslice.memview == Py_None: - * return None # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - - /* "View.MemoryView":1009 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - } - - /* "View.MemoryView":1015 - * - * - * result = _memoryviewslice(None, 0, dtype_is_object) # <<<<<<<<<<<<<< - * - * result.from_slice = memviewslice - */ - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1015, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1015, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_3, 0, Py_None); - __Pyx_INCREF(__pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryviewslice_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1015, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1017 - * result = _memoryviewslice(None, 0, dtype_is_object) - * - * result.from_slice = memviewslice # <<<<<<<<<<<<<< - * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - */ - __pyx_v_result->from_slice = __pyx_v_memviewslice; - - /* "View.MemoryView":1018 - * - * result.from_slice = memviewslice - * __PYX_INC_MEMVIEW(&memviewslice, 1) # <<<<<<<<<<<<<< - * - * result.from_object = ( memviewslice.memview).base - */ - __PYX_INC_MEMVIEW((&__pyx_v_memviewslice), 1); - - /* "View.MemoryView":1020 - * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - * result.from_object = ( memviewslice.memview).base # <<<<<<<<<<<<<< - * result.typeinfo = 
memviewslice.memview.typeinfo - * - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_memviewslice.memview), __pyx_n_s_base); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1020, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __Pyx_GOTREF(__pyx_v_result->from_object); - __Pyx_DECREF(__pyx_v_result->from_object); - __pyx_v_result->from_object = __pyx_t_2; - __pyx_t_2 = 0; - - /* "View.MemoryView":1021 - * - * result.from_object = ( memviewslice.memview).base - * result.typeinfo = memviewslice.memview.typeinfo # <<<<<<<<<<<<<< - * - * result.view = memviewslice.memview.view - */ - __pyx_t_4 = __pyx_v_memviewslice.memview->typeinfo; - __pyx_v_result->__pyx_base.typeinfo = __pyx_t_4; - - /* "View.MemoryView":1023 - * result.typeinfo = memviewslice.memview.typeinfo - * - * result.view = memviewslice.memview.view # <<<<<<<<<<<<<< - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - */ - __pyx_t_5 = __pyx_v_memviewslice.memview->view; - __pyx_v_result->__pyx_base.view = __pyx_t_5; - - /* "View.MemoryView":1024 - * - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data # <<<<<<<<<<<<<< - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - */ - __pyx_v_result->__pyx_base.view.buf = ((void *)__pyx_v_memviewslice.data); - - /* "View.MemoryView":1025 - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data - * result.view.ndim = ndim # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_v_result->__pyx_base.view.ndim = __pyx_v_ndim; - - /* "View.MemoryView":1026 - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_result->__pyx_base.view))->obj = Py_None; - - /* "View.MemoryView":1027 - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":1029 - * Py_INCREF(Py_None) - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - __pyx_t_1 = ((((struct __pyx_memoryview_obj *)__pyx_v_memviewslice.memview)->flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1030 - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: - * result.flags = PyBUF_RECORDS # <<<<<<<<<<<<<< - * else: - * result.flags = PyBUF_RECORDS_RO - */ - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS; - - /* "View.MemoryView":1029 - * Py_INCREF(Py_None) - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1032 - * result.flags = PyBUF_RECORDS - * else: - * result.flags = PyBUF_RECORDS_RO # <<<<<<<<<<<<<< - * - * result.view.shape = result.from_slice.shape - */ - /*else*/ { - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS_RO; - } - __pyx_L4:; - - /* "View.MemoryView":1034 - * result.flags = PyBUF_RECORDS_RO - * - * result.view.shape = result.from_slice.shape # <<<<<<<<<<<<<< - * result.view.strides = result.from_slice.strides - * - */ - __pyx_v_result->__pyx_base.view.shape = ((Py_ssize_t *)__pyx_v_result->from_slice.shape); - - /* "View.MemoryView":1035 - * - * result.view.shape = result.from_slice.shape - * result.view.strides = 
result.from_slice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_result->__pyx_base.view.strides = ((Py_ssize_t *)__pyx_v_result->from_slice.strides); - - /* "View.MemoryView":1038 - * - * - * result.view.suboffsets = NULL # <<<<<<<<<<<<<< - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - */ - __pyx_v_result->__pyx_base.view.suboffsets = NULL; - - /* "View.MemoryView":1039 - * - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - */ - __pyx_t_7 = (__pyx_v_result->from_slice.suboffsets + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->from_slice.suboffsets; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_v_suboffset = (__pyx_t_6[0]); - - /* "View.MemoryView":1040 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - __pyx_t_1 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1041 - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_result->__pyx_base.view.suboffsets = ((Py_ssize_t *)__pyx_v_result->from_slice.suboffsets); - - /* "View.MemoryView":1042 - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - * break # <<<<<<<<<<<<<< - * - * result.view.len = result.view.itemsize - */ - goto __pyx_L6_break; - - /* "View.MemoryView":1040 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - } - } - __pyx_L6_break:; - - /* "View.MemoryView":1044 - * break - * - * result.view.len = result.view.itemsize # <<<<<<<<<<<<<< - * for length in result.view.shape[:ndim]: - * result.view.len *= length - */ - __pyx_t_9 = __pyx_v_result->__pyx_base.view.itemsize; - __pyx_v_result->__pyx_base.view.len = __pyx_t_9; - - /* "View.MemoryView":1045 - * - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: # <<<<<<<<<<<<<< - * result.view.len *= length - * - */ - __pyx_t_7 = (__pyx_v_result->__pyx_base.view.shape + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->__pyx_base.view.shape; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_t_2 = PyInt_FromSsize_t((__pyx_t_6[0])); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1045, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1046 - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: - * result.view.len *= length # <<<<<<<<<<<<<< - * - * result.to_object_func = to_object_func - */ - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_result->__pyx_base.view.len); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1046, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_InPlaceMultiply(__pyx_t_2, __pyx_v_length); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1046, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_3); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 1046, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - 
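/* Hand-added illustrative note, not part of the generated Cython output: this loop implements "result.view.len *= length" (View.MemoryView:1045-1046), so view.len starts at view.itemsize and is multiplied by every extent — e.g. an 8-byte dtype with shape (2, 3) would end up with view.len == 8 * 2 * 3 == 48 bytes. */ -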
__pyx_v_result->__pyx_base.view.len = __pyx_t_9; - } - - /* "View.MemoryView":1048 - * result.view.len *= length - * - * result.to_object_func = to_object_func # <<<<<<<<<<<<<< - * result.to_dtype_func = to_dtype_func - * - */ - __pyx_v_result->to_object_func = __pyx_v_to_object_func; - - /* "View.MemoryView":1049 - * - * result.to_object_func = to_object_func - * result.to_dtype_func = to_dtype_func # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->to_dtype_func = __pyx_v_to_dtype_func; - - /* "View.MemoryView":1051 - * result.to_dtype_func = to_dtype_func - * - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":1001 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_fromslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1054 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_mslice) { - struct __pyx_memoryviewslice_obj *__pyx_v_obj = 0; - __Pyx_memviewslice *__pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_slice_from_memview", 0); - - /* "View.MemoryView":1057 - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1058 - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): - * obj = memview # <<<<<<<<<<<<<< - * return &obj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 1058, __pyx_L1_error) - __pyx_t_3 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_3); - __pyx_v_obj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":1059 - * if isinstance(memview, _memoryviewslice): - * obj = memview - * return &obj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, mslice) - */ - __pyx_r = (&__pyx_v_obj->from_slice); - goto __pyx_L0; - - /* "View.MemoryView":1057 - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - } - - /* "View.MemoryView":1061 - * return 
&obj.from_slice - * else: - * slice_copy(memview, mslice) # <<<<<<<<<<<<<< - * return mslice - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, __pyx_v_mslice); - - /* "View.MemoryView":1062 - * else: - * slice_copy(memview, mslice) - * return mslice # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_slice_copy') - */ - __pyx_r = __pyx_v_mslice; - goto __pyx_L0; - } - - /* "View.MemoryView":1054 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.get_slice_from_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_obj); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1065 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_dst) { - int __pyx_v_dim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - Py_ssize_t *__pyx_v_suboffsets; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - Py_ssize_t __pyx_t_5; - __Pyx_RefNannySetupContext("slice_copy", 0); - - /* "View.MemoryView":1069 - * cdef (Py_ssize_t*) shape, strides, suboffsets - * - * shape = memview.view.shape # <<<<<<<<<<<<<< - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets - */ - __pyx_t_1 = __pyx_v_memview->view.shape; - __pyx_v_shape = __pyx_t_1; - - /* "View.MemoryView":1070 - * - * shape = memview.view.shape - * strides = memview.view.strides # <<<<<<<<<<<<<< - * suboffsets = memview.view.suboffsets - * - */ - __pyx_t_1 = __pyx_v_memview->view.strides; - __pyx_v_strides = __pyx_t_1; - - /* "View.MemoryView":1071 - * shape = memview.view.shape - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets # <<<<<<<<<<<<<< - * - * dst.memview = <__pyx_memoryview *> memview - */ - __pyx_t_1 = __pyx_v_memview->view.suboffsets; - __pyx_v_suboffsets = __pyx_t_1; - - /* "View.MemoryView":1073 - * suboffsets = memview.view.suboffsets - * - * dst.memview = <__pyx_memoryview *> memview # <<<<<<<<<<<<<< - * dst.data = memview.view.buf - * - */ - __pyx_v_dst->memview = ((struct __pyx_memoryview_obj *)__pyx_v_memview); - - /* "View.MemoryView":1074 - * - * dst.memview = <__pyx_memoryview *> memview - * dst.data = memview.view.buf # <<<<<<<<<<<<<< - * - * for dim in range(memview.view.ndim): - */ - __pyx_v_dst->data = ((char *)__pyx_v_memview->view.buf); - - /* "View.MemoryView":1076 - * dst.data = memview.view.buf - * - * for dim in range(memview.view.ndim): # <<<<<<<<<<<<<< - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - */ - __pyx_t_2 = __pyx_v_memview->view.ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_dim = __pyx_t_4; - - /* "View.MemoryView":1077 - * - * for dim in range(memview.view.ndim): - * dst.shape[dim] = shape[dim] # <<<<<<<<<<<<<< - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - */ - (__pyx_v_dst->shape[__pyx_v_dim]) = 
(__pyx_v_shape[__pyx_v_dim]); - - /* "View.MemoryView":1078 - * for dim in range(memview.view.ndim): - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] # <<<<<<<<<<<<<< - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - * - */ - (__pyx_v_dst->strides[__pyx_v_dim]) = (__pyx_v_strides[__pyx_v_dim]); - - /* "View.MemoryView":1079 - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object') - */ - if ((__pyx_v_suboffsets != 0)) { - __pyx_t_5 = (__pyx_v_suboffsets[__pyx_v_dim]); - } else { - __pyx_t_5 = -1L; - } - (__pyx_v_dst->suboffsets[__pyx_v_dim]) = __pyx_t_5; - } - - /* "View.MemoryView":1065 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1082 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *__pyx_v_memview) { - __Pyx_memviewslice __pyx_v_memviewslice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy", 0); - - /* "View.MemoryView":1085 - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) # <<<<<<<<<<<<<< - * return memoryview_copy_from_slice(memview, &memviewslice) - * - */ - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_memviewslice)); - - /* "View.MemoryView":1086 - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) - * return memoryview_copy_from_slice(memview, &memviewslice) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object_from_slice') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_memoryview_copy_object_from_slice(__pyx_v_memview, (&__pyx_v_memviewslice)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1086, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1082 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1089 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview object and slice. 
- */ - -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_memviewslice) { - PyObject *(*__pyx_v_to_object_func)(char *); - int (*__pyx_v_to_dtype_func)(char *, PyObject *); - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *(*__pyx_t_3)(char *); - int (*__pyx_t_4)(char *, PyObject *); - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy_from_slice", 0); - - /* "View.MemoryView":1096 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1097 - * - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func # <<<<<<<<<<<<<< - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - */ - __pyx_t_3 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_object_func; - __pyx_v_to_object_func = __pyx_t_3; - - /* "View.MemoryView":1098 - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func # <<<<<<<<<<<<<< - * else: - * to_object_func = NULL - */ - __pyx_t_4 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_dtype_func; - __pyx_v_to_dtype_func = __pyx_t_4; - - /* "View.MemoryView":1096 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1100 - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - * to_object_func = NULL # <<<<<<<<<<<<<< - * to_dtype_func = NULL - * - */ - /*else*/ { - __pyx_v_to_object_func = NULL; - - /* "View.MemoryView":1101 - * else: - * to_object_func = NULL - * to_dtype_func = NULL # <<<<<<<<<<<<<< - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - */ - __pyx_v_to_dtype_func = NULL; - } - __pyx_L3:; - - /* "View.MemoryView":1103 - * to_dtype_func = NULL - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, # <<<<<<<<<<<<<< - * to_object_func, to_dtype_func, - * memview.dtype_is_object) - */ - __Pyx_XDECREF(__pyx_r); - - /* "View.MemoryView":1105 - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - * to_object_func, to_dtype_func, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __pyx_memoryview_fromslice((__pyx_v_memviewslice[0]), __pyx_v_memview->view.ndim, __pyx_v_to_object_func, __pyx_v_to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 1103, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1089 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview 
object and slice. - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_from_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1111 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<< - * if arg < 0: - * return -arg - */ - -static Py_ssize_t abs_py_ssize_t(Py_ssize_t __pyx_v_arg) { - Py_ssize_t __pyx_r; - int __pyx_t_1; - - /* "View.MemoryView":1112 - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: # <<<<<<<<<<<<<< - * return -arg - * else: - */ - __pyx_t_1 = ((__pyx_v_arg < 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1113 - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: - * return -arg # <<<<<<<<<<<<<< - * else: - * return arg - */ - __pyx_r = (-__pyx_v_arg); - goto __pyx_L0; - - /* "View.MemoryView":1112 - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: # <<<<<<<<<<<<<< - * return -arg - * else: - */ - } - - /* "View.MemoryView":1115 - * return -arg - * else: - * return arg # <<<<<<<<<<<<<< - * - * @cname('__pyx_get_best_slice_order') - */ - /*else*/ { - __pyx_r = __pyx_v_arg; - goto __pyx_L0; - } - - /* "View.MemoryView":1111 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<< - * if arg < 0: - * return -arg - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1118 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. 
- */ - -static char __pyx_get_best_slice_order(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim) { - int __pyx_v_i; - Py_ssize_t __pyx_v_c_stride; - Py_ssize_t __pyx_v_f_stride; - char __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1123 - * """ - * cdef int i - * cdef Py_ssize_t c_stride = 0 # <<<<<<<<<<<<<< - * cdef Py_ssize_t f_stride = 0 - * - */ - __pyx_v_c_stride = 0; - - /* "View.MemoryView":1124 - * cdef int i - * cdef Py_ssize_t c_stride = 0 - * cdef Py_ssize_t f_stride = 0 # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_f_stride = 0; - - /* "View.MemoryView":1126 - * cdef Py_ssize_t f_stride = 0 - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1127 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1128 - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_c_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1129 - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - goto __pyx_L4_break; - - /* "View.MemoryView":1127 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L4_break:; - - /* "View.MemoryView":1131 - * break - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - */ - __pyx_t_1 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_1; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1132 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride = mslice.strides[i] - * break - */ - __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1133 - * for i in range(ndim): - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_f_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1134 - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - */ - goto __pyx_L7_break; - - /* "View.MemoryView":1132 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L7_break:; - - /* "View.MemoryView":1136 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - __pyx_t_2 = ((abs_py_ssize_t(__pyx_v_c_stride) <= abs_py_ssize_t(__pyx_v_f_stride)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1137 - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - * return 'C' # <<<<<<<<<<<<<< - * else: - * return 'F' - */ - __pyx_r = 'C'; - goto __pyx_L0; - - /* "View.MemoryView":1136 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - } - - /* "View.MemoryView":1139 - * return 'C' - * else: - 
* return 'F' # <<<<<<<<<<<<<< - * - * @cython.cdivision(True) - */ - /*else*/ { - __pyx_r = 'F'; - goto __pyx_L0; - } - - /* "View.MemoryView":1118 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1142 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - -static void _copy_strided_to_strided(char *__pyx_v_src_data, Py_ssize_t *__pyx_v_src_strides, char *__pyx_v_dst_data, Py_ssize_t *__pyx_v_dst_strides, Py_ssize_t *__pyx_v_src_shape, Py_ssize_t *__pyx_v_dst_shape, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - CYTHON_UNUSED Py_ssize_t __pyx_v_src_extent; - Py_ssize_t __pyx_v_dst_extent; - Py_ssize_t __pyx_v_src_stride; - Py_ssize_t __pyx_v_dst_stride; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - Py_ssize_t __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_ssize_t __pyx_t_6; - - /* "View.MemoryView":1149 - * - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - */ - __pyx_v_src_extent = (__pyx_v_src_shape[0]); - - /* "View.MemoryView":1150 - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] - */ - __pyx_v_dst_extent = (__pyx_v_dst_shape[0]); - - /* "View.MemoryView":1151 - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - */ - __pyx_v_src_stride = (__pyx_v_src_strides[0]); - - /* "View.MemoryView":1152 - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_dst_stride = (__pyx_v_dst_strides[0]); - - /* "View.MemoryView":1154 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ - __pyx_t_1 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1155 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - __pyx_t_2 = ((__pyx_v_src_stride > 0) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - __pyx_t_2 = ((__pyx_v_dst_stride > 0) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - - /* "View.MemoryView":1156 - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - */ - __pyx_t_2 = (((size_t)__pyx_v_src_stride) == __pyx_v_itemsize); - if (__pyx_t_2) { - __pyx_t_2 = (__pyx_v_itemsize == ((size_t)__pyx_v_dst_stride)); - } - __pyx_t_3 = (__pyx_t_2 != 
0); - __pyx_t_1 = __pyx_t_3; - __pyx_L5_bool_binop_done:; - - /* "View.MemoryView":1155 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - if (__pyx_t_1) { - - /* "View.MemoryView":1157 - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, (__pyx_v_itemsize * __pyx_v_dst_extent))); - - /* "View.MemoryView":1155 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1159 - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - */ - /*else*/ { - __pyx_t_4 = __pyx_v_dst_extent; - __pyx_t_5 = __pyx_t_4; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1160 - * else: - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) # <<<<<<<<<<<<<< - * src_data += src_stride - * dst_data += dst_stride - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, __pyx_v_itemsize)); - - /* "View.MemoryView":1161 - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * else: - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1162 - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L4:; - - /* "View.MemoryView":1154 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1164 - * dst_data += dst_stride - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * _copy_strided_to_strided(src_data, src_strides + 1, - * dst_data, dst_strides + 1, - */ - /*else*/ { - __pyx_t_4 = __pyx_v_dst_extent; - __pyx_t_5 = __pyx_t_4; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1165 - * else: - * for i in range(dst_extent): - * _copy_strided_to_strided(src_data, src_strides + 1, # <<<<<<<<<<<<<< - * dst_data, dst_strides + 1, - * src_shape + 1, dst_shape + 1, - */ - _copy_strided_to_strided(__pyx_v_src_data, (__pyx_v_src_strides + 1), __pyx_v_dst_data, (__pyx_v_dst_strides + 1), (__pyx_v_src_shape + 1), (__pyx_v_dst_shape + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize); - - /* "View.MemoryView":1169 - * src_shape + 1, dst_shape + 1, - * ndim - 1, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1170 - * ndim - 1, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L3:; - - 
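/* Hand-added summary comment, not generated: each call of _copy_strided_to_strided copies one dimension — for ndim == 1 a single memcpy covers the fully contiguous case (src_stride == itemsize == dst_stride), otherwise the dst_extent items are memcpy'd one at a time; for ndim > 1 it recurses with ndim - 1, stepping src_data and dst_data by their outer strides after each sub-copy. */ -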
/* "View.MemoryView":1142 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - - /* function exit code */ -} - -/* "View.MemoryView":1172 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - */ - -static void copy_strided_to_strided(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - - /* "View.MemoryView":1175 - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - * _copy_strided_to_strided(src.data, src.strides, dst.data, dst.strides, # <<<<<<<<<<<<<< - * src.shape, dst.shape, ndim, itemsize) - * - */ - _copy_strided_to_strided(__pyx_v_src->data, __pyx_v_src->strides, __pyx_v_dst->data, __pyx_v_dst->strides, __pyx_v_src->shape, __pyx_v_dst->shape, __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1172 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1179 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *__pyx_v_src, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_size; - Py_ssize_t __pyx_r; - Py_ssize_t __pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - - /* "View.MemoryView":1181 - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize # <<<<<<<<<<<<<< - * - * for shape in src.shape[:ndim]: - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_size = __pyx_t_1; - - /* "View.MemoryView":1183 - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - * - * for shape in src.shape[:ndim]: # <<<<<<<<<<<<<< - * size *= shape - * - */ - __pyx_t_3 = (__pyx_v_src->shape + __pyx_v_ndim); - for (__pyx_t_4 = __pyx_v_src->shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_v_shape = (__pyx_t_2[0]); - - /* "View.MemoryView":1184 - * - * for shape in src.shape[:ndim]: - * size *= shape # <<<<<<<<<<<<<< - * - * return size - */ - __pyx_v_size = (__pyx_v_size * __pyx_v_shape); - } - - /* "View.MemoryView":1186 - * size *= shape - * - * return size # <<<<<<<<<<<<<< - * - * @cname('__pyx_fill_contig_strides_array') - */ - __pyx_r = __pyx_v_size; - goto __pyx_L0; - - /* "View.MemoryView":1179 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1189 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * 
Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) nogil: - */ - -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, Py_ssize_t __pyx_v_stride, int __pyx_v_ndim, char __pyx_v_order) { - int __pyx_v_idx; - Py_ssize_t __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1198 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - __pyx_t_1 = ((__pyx_v_order == 'F') != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1199 - * - * if order == 'F': - * for idx in range(ndim): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - __pyx_t_2 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_idx = __pyx_t_4; - - /* "View.MemoryView":1200 - * if order == 'F': - * for idx in range(ndim): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * else: - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1201 - * for idx in range(ndim): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * else: - * for idx in range(ndim - 1, -1, -1): - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - - /* "View.MemoryView":1198 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1203 - * stride *= shape[idx] - * else: - * for idx in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - /*else*/ { - for (__pyx_t_2 = (__pyx_v_ndim - 1); __pyx_t_2 > -1; __pyx_t_2-=1) { - __pyx_v_idx = __pyx_t_2; - - /* "View.MemoryView":1204 - * else: - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1205 - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * - * return stride - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - } - __pyx_L3:; - - /* "View.MemoryView":1207 - * stride *= shape[idx] - * - * return stride # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_data_to_temp') - */ - __pyx_r = __pyx_v_stride; - goto __pyx_L0; - - /* "View.MemoryView":1189 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) nogil: - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1210 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - -static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_tmpslice, char __pyx_v_order, int __pyx_v_ndim) { - int __pyx_v_i; - void *__pyx_v_result; - size_t __pyx_v_itemsize; - size_t __pyx_v_size; - void *__pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - struct __pyx_memoryview_obj *__pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":1221 - * cdef void *result - * - * cdef 
size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef size_t size = slice_get_size(src, ndim) - * - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1222 - * - * cdef size_t itemsize = src.memview.view.itemsize - * cdef size_t size = slice_get_size(src, ndim) # <<<<<<<<<<<<<< - * - * result = malloc(size) - */ - __pyx_v_size = __pyx_memoryview_slice_get_size(__pyx_v_src, __pyx_v_ndim); - - /* "View.MemoryView":1224 - * cdef size_t size = slice_get_size(src, ndim) - * - * result = malloc(size) # <<<<<<<<<<<<<< - * if not result: - * _err(MemoryError, NULL) - */ - __pyx_v_result = malloc(__pyx_v_size); - - /* "View.MemoryView":1225 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err(MemoryError, NULL) - * - */ - __pyx_t_2 = ((!(__pyx_v_result != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1226 - * result = malloc(size) - * if not result: - * _err(MemoryError, NULL) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_err(__pyx_builtin_MemoryError, NULL); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 1226, __pyx_L1_error) - - /* "View.MemoryView":1225 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err(MemoryError, NULL) - * - */ - } - - /* "View.MemoryView":1229 - * - * - * tmpslice.data = result # <<<<<<<<<<<<<< - * tmpslice.memview = src.memview - * for i in range(ndim): - */ - __pyx_v_tmpslice->data = ((char *)__pyx_v_result); - - /* "View.MemoryView":1230 - * - * tmpslice.data = result - * tmpslice.memview = src.memview # <<<<<<<<<<<<<< - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - */ - __pyx_t_4 = __pyx_v_src->memview; - __pyx_v_tmpslice->memview = __pyx_t_4; - - /* "View.MemoryView":1231 - * tmpslice.data = result - * tmpslice.memview = src.memview - * for i in range(ndim): # <<<<<<<<<<<<<< - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1232 - * tmpslice.memview = src.memview - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] # <<<<<<<<<<<<<< - * tmpslice.suboffsets[i] = -1 - * - */ - (__pyx_v_tmpslice->shape[__pyx_v_i]) = (__pyx_v_src->shape[__pyx_v_i]); - - /* "View.MemoryView":1233 - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, - */ - (__pyx_v_tmpslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1235 - * tmpslice.suboffsets[i] = -1 - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, # <<<<<<<<<<<<<< - * ndim, order) - * - */ - (void)(__pyx_fill_contig_strides_array((&(__pyx_v_tmpslice->shape[0])), (&(__pyx_v_tmpslice->strides[0])), __pyx_v_itemsize, __pyx_v_ndim, __pyx_v_order)); - - /* "View.MemoryView":1239 - * - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1240 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - __pyx_t_2 = (((__pyx_v_tmpslice->shape[__pyx_v_i]) == 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1241 - * for i 
in range(ndim): - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 # <<<<<<<<<<<<<< - * - * if slice_is_contig(src[0], order, ndim): - */ - (__pyx_v_tmpslice->strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1240 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - } - } - - /* "View.MemoryView":1243 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig((__pyx_v_src[0]), __pyx_v_order, __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1244 - * - * if slice_is_contig(src[0], order, ndim): - * memcpy(result, src.data, size) # <<<<<<<<<<<<<< - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - */ - (void)(memcpy(__pyx_v_result, __pyx_v_src->data, __pyx_v_size)); - - /* "View.MemoryView":1243 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":1246 - * memcpy(result, src.data, size) - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) # <<<<<<<<<<<<<< - * - * return result - */ - /*else*/ { - copy_strided_to_strided(__pyx_v_src, __pyx_v_tmpslice, __pyx_v_ndim, __pyx_v_itemsize); - } - __pyx_L9:; - - /* "View.MemoryView":1248 - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":1210 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.copy_data_to_temp", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = NULL; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1253 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - */ - -static int __pyx_memoryview_err_extents(int __pyx_v_i, Py_ssize_t __pyx_v_extent1, Py_ssize_t __pyx_v_extent2) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_extents", 0); - - /* "View.MemoryView":1256 - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - * (i, extent1, extent2)) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err_dim') - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_i); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1256, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_extent1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1256, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_extent2); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(1, 1256, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1256, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_3); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_3 = 0; - - /* "View.MemoryView":1255 - * cdef int _err_extents(int i, Py_ssize_t extent1, - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % # <<<<<<<<<<<<<< - * (i, extent1, extent2)) - * - */ - __pyx_t_3 = __Pyx_PyString_Format(__pyx_kp_s_got_differing_extents_in_dimensi, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __PYX_ERR(1, 1255, __pyx_L1_error) - - /* "View.MemoryView":1253 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView._err_extents", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1259 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii') % dim) - * - */ - -static int __pyx_memoryview_err_dim(PyObject *__pyx_v_error, char *__pyx_v_msg, int __pyx_v_dim) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_dim", 0); - __Pyx_INCREF(__pyx_v_error); - - /* "View.MemoryView":1260 - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: - * raise error(msg.decode('ascii') % dim) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err') - */ - __pyx_t_2 = __Pyx_decode_c_string(__pyx_v_msg, 0, strlen(__pyx_v_msg), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1260, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1260, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyUnicode_Format(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1260, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_v_error); - __pyx_t_3 
= __pyx_v_error; __pyx_t_2 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - } - } - __pyx_t_1 = (__pyx_t_2) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_2, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1260, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 1260, __pyx_L1_error) - - /* "View.MemoryView":1259 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii') % dim) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView._err_dim", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_error); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1263 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: # <<<<<<<<<<<<<< - * if msg != NULL: - * raise error(msg.decode('ascii')) - */ - -static int __pyx_memoryview_err(PyObject *__pyx_v_error, char *__pyx_v_msg) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err", 0); - __Pyx_INCREF(__pyx_v_error); - - /* "View.MemoryView":1264 - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii')) - * else: - */ - __pyx_t_1 = ((__pyx_v_msg != NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":1265 - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: - * raise error(msg.decode('ascii')) # <<<<<<<<<<<<<< - * else: - * raise error - */ - __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_msg, 0, strlen(__pyx_v_msg), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1265, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_error); - __pyx_t_4 = __pyx_v_error; __pyx_t_5 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - } - } - __pyx_t_2 = (__pyx_t_5) ? 
__Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_t_3) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1265, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 1265, __pyx_L1_error) - - /* "View.MemoryView":1264 - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii')) - * else: - */ - } - - /* "View.MemoryView":1267 - * raise error(msg.decode('ascii')) - * else: - * raise error # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_contents') - */ - /*else*/ { - __Pyx_Raise(__pyx_v_error, 0, 0, 0); - __PYX_ERR(1, 1267, __pyx_L1_error) - } - - /* "View.MemoryView":1263 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: # <<<<<<<<<<<<<< - * if msg != NULL: - * raise error(msg.decode('ascii')) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView._err", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_error); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1270 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice __pyx_v_src, __Pyx_memviewslice __pyx_v_dst, int __pyx_v_src_ndim, int __pyx_v_dst_ndim, int __pyx_v_dtype_is_object) { - void *__pyx_v_tmpdata; - size_t __pyx_v_itemsize; - int __pyx_v_i; - char __pyx_v_order; - int __pyx_v_broadcasting; - int __pyx_v_direct_copy; - __Pyx_memviewslice __pyx_v_tmp; - int __pyx_v_ndim; - int __pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - void *__pyx_t_7; - int __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":1278 - * Check for overlapping memory and verify the shapes. 
- * """ - * cdef void *tmpdata = NULL # <<<<<<<<<<<<<< - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - */ - __pyx_v_tmpdata = NULL; - - /* "View.MemoryView":1279 - * """ - * cdef void *tmpdata = NULL - * cdef size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - */ - __pyx_t_1 = __pyx_v_src.memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1281 - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) # <<<<<<<<<<<<<< - * cdef bint broadcasting = False - * cdef bint direct_copy = False - */ - __pyx_v_order = __pyx_get_best_slice_order((&__pyx_v_src), __pyx_v_src_ndim); - - /* "View.MemoryView":1282 - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False # <<<<<<<<<<<<<< - * cdef bint direct_copy = False - * cdef __Pyx_memviewslice tmp - */ - __pyx_v_broadcasting = 0; - - /* "View.MemoryView":1283 - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False - * cdef bint direct_copy = False # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice tmp - * - */ - __pyx_v_direct_copy = 0; - - /* "View.MemoryView":1286 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - __pyx_t_2 = ((__pyx_v_src_ndim < __pyx_v_dst_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1287 - * - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_src), __pyx_v_src_ndim, __pyx_v_dst_ndim); - - /* "View.MemoryView":1286 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1288 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - __pyx_t_2 = ((__pyx_v_dst_ndim < __pyx_v_src_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1289 - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) # <<<<<<<<<<<<<< - * - * cdef int ndim = max(src_ndim, dst_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_dst), __pyx_v_dst_ndim, __pyx_v_src_ndim); - - /* "View.MemoryView":1288 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - } - __pyx_L3:; - - /* "View.MemoryView":1291 - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - * cdef int ndim = max(src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - __pyx_t_3 = __pyx_v_dst_ndim; - __pyx_t_4 = __pyx_v_src_ndim; - if (((__pyx_t_3 > __pyx_t_4) != 0)) { - __pyx_t_5 = __pyx_t_3; - } else { - __pyx_t_5 = __pyx_t_4; - } - __pyx_v_ndim = __pyx_t_5; - - /* "View.MemoryView":1293 - * cdef int ndim = max(src_ndim, dst_ndim) - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - */ - __pyx_t_5 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_5; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - 
__pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1294 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - __pyx_t_2 = (((__pyx_v_src.shape[__pyx_v_i]) != (__pyx_v_dst.shape[__pyx_v_i])) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1295 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - __pyx_t_2 = (((__pyx_v_src.shape[__pyx_v_i]) == 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1296 - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - * broadcasting = True # <<<<<<<<<<<<<< - * src.strides[i] = 0 - * else: - */ - __pyx_v_broadcasting = 1; - - /* "View.MemoryView":1297 - * if src.shape[i] == 1: - * broadcasting = True - * src.strides[i] = 0 # <<<<<<<<<<<<<< - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) - */ - (__pyx_v_src.strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1295 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - goto __pyx_L7; - } - - /* "View.MemoryView":1299 - * src.strides[i] = 0 - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) # <<<<<<<<<<<<<< - * - * if src.suboffsets[i] >= 0: - */ - /*else*/ { - __pyx_t_6 = __pyx_memoryview_err_extents(__pyx_v_i, (__pyx_v_dst.shape[__pyx_v_i]), (__pyx_v_src.shape[__pyx_v_i])); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1299, __pyx_L1_error) - } - __pyx_L7:; - - /* "View.MemoryView":1294 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - } - - /* "View.MemoryView":1301 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - */ - __pyx_t_2 = (((__pyx_v_src.suboffsets[__pyx_v_i]) >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1302 - * - * if src.suboffsets[i] >= 0: - * _err_dim(ValueError, "Dimension %d is not direct", i) # <<<<<<<<<<<<<< - * - * if slices_overlap(&src, &dst, ndim, itemsize): - */ - __pyx_t_6 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Dimension %d is not direct"), __pyx_v_i); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1302, __pyx_L1_error) - - /* "View.MemoryView":1301 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - */ - } - } - - /* "View.MemoryView":1304 - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - __pyx_t_2 = (__pyx_slices_overlap((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1306 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - __pyx_t_2 = ((!(__pyx_memviewslice_is_contig(__pyx_v_src, __pyx_v_order, __pyx_v_ndim) != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1307 - * - * if not slice_is_contig(src, order, ndim): - * order = get_best_order(&dst, ndim) # <<<<<<<<<<<<<< - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - */ - __pyx_v_order = 
__pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim); - - /* "View.MemoryView":1306 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - } - - /* "View.MemoryView":1309 - * order = get_best_order(&dst, ndim) - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) # <<<<<<<<<<<<<< - * src = tmp - * - */ - __pyx_t_7 = __pyx_memoryview_copy_data_to_temp((&__pyx_v_src), (&__pyx_v_tmp), __pyx_v_order, __pyx_v_ndim); if (unlikely(__pyx_t_7 == ((void *)NULL))) __PYX_ERR(1, 1309, __pyx_L1_error) - __pyx_v_tmpdata = __pyx_t_7; - - /* "View.MemoryView":1310 - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - * src = tmp # <<<<<<<<<<<<<< - * - * if not broadcasting: - */ - __pyx_v_src = __pyx_v_tmp; - - /* "View.MemoryView":1304 - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - } - - /* "View.MemoryView":1312 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = ((!(__pyx_v_broadcasting != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1315 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig(__pyx_v_src, 'C', __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1316 - * - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) # <<<<<<<<<<<<<< - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'C', __pyx_v_ndim); - - /* "View.MemoryView":1315 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - goto __pyx_L12; - } - - /* "View.MemoryView":1317 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig(__pyx_v_src, 'F', __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1318 - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) # <<<<<<<<<<<<<< - * - * if direct_copy: - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'F', __pyx_v_ndim); - - /* "View.MemoryView":1317 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - } - __pyx_L12:; - - /* "View.MemoryView":1320 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - __pyx_t_2 = (__pyx_v_direct_copy != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1322 - * if direct_copy: - * - * refcount_copying(&dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - 
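-  /* Direct-copy fast path: src and dst are contiguous in the same memory
-   * order, so the whole block is moved with a single memcpy below; for
-   * object dtypes the destination's refcounts are dropped before the raw
-   * byte copy and re-acquired for the new contents afterwards. */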
/* "View.MemoryView":1323 - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) - */ - (void)(memcpy(__pyx_v_dst.data, __pyx_v_src.data, __pyx_memoryview_slice_get_size((&__pyx_v_src), __pyx_v_ndim))); - - /* "View.MemoryView":1324 - * refcount_copying(&dst, dtype_is_object, ndim, False) - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * free(tmpdata) - * return 0 - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1325 - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ - free(__pyx_v_tmpdata); - - /* "View.MemoryView":1326 - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * if order == 'F' == get_best_order(&dst, ndim): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1320 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - } - - /* "View.MemoryView":1312 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1328 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = (__pyx_v_order == 'F'); - if (__pyx_t_2) { - __pyx_t_2 = ('F' == __pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim)); - } - __pyx_t_8 = (__pyx_t_2 != 0); - if (__pyx_t_8) { - - /* "View.MemoryView":1331 - * - * - * transpose_memslice(&src) # <<<<<<<<<<<<<< - * transpose_memslice(&dst) - * - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_src)); if (unlikely(__pyx_t_5 == ((int)0))) __PYX_ERR(1, 1331, __pyx_L1_error) - - /* "View.MemoryView":1332 - * - * transpose_memslice(&src) - * transpose_memslice(&dst) # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_dst)); if (unlikely(__pyx_t_5 == ((int)0))) __PYX_ERR(1, 1332, __pyx_L1_error) - - /* "View.MemoryView":1328 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1334 - * transpose_memslice(&dst) - * - * refcount_copying(&dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1335 - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, True) - * - */ - copy_strided_to_strided((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1336 - * refcount_copying(&dst, dtype_is_object, ndim, False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * - * free(tmpdata) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1338 - * refcount_copying(&dst, dtype_is_object, ndim, 
True) - * - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ - free(__pyx_v_tmpdata); - - /* "View.MemoryView":1339 - * - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_broadcast_leading') - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1270 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_contents", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = -1; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1342 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) nogil: - */ - -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim, int __pyx_v_ndim_other) { - int __pyx_v_i; - int __pyx_v_offset; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - - /* "View.MemoryView":1346 - * int ndim_other) nogil: - * cdef int i - * cdef int offset = ndim_other - ndim # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_offset = (__pyx_v_ndim_other - __pyx_v_ndim); - - /* "View.MemoryView":1348 - * cdef int offset = ndim_other - ndim - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1349 - * - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] # <<<<<<<<<<<<<< - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - */ - (__pyx_v_mslice->shape[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->shape[__pyx_v_i]); - - /* "View.MemoryView":1350 - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] # <<<<<<<<<<<<<< - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - */ - (__pyx_v_mslice->strides[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1351 - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] # <<<<<<<<<<<<<< - * - * for i in range(offset): - */ - (__pyx_v_mslice->suboffsets[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->suboffsets[__pyx_v_i]); - } - - /* "View.MemoryView":1353 - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - * for i in range(offset): # <<<<<<<<<<<<<< - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - */ - __pyx_t_1 = __pyx_v_offset; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* "View.MemoryView":1354 - * - * for i in range(offset): - * mslice.shape[i] = 1 # <<<<<<<<<<<<<< - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 - */ - (__pyx_v_mslice->shape[__pyx_v_i]) = 1; - - /* "View.MemoryView":1355 - * for i in range(offset): - * mslice.shape[i] = 1 
- * mslice.strides[i] = mslice.strides[0] # <<<<<<<<<<<<<< - * mslice.suboffsets[i] = -1 - * - */ - (__pyx_v_mslice->strides[__pyx_v_i]) = (__pyx_v_mslice->strides[0]); - - /* "View.MemoryView":1356 - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_mslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1342 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1364 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, # <<<<<<<<<<<<<< - * int ndim, bint inc) nogil: - * - */ - -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_dtype_is_object, int __pyx_v_ndim, int __pyx_v_inc) { - int __pyx_t_1; - - /* "View.MemoryView":1368 - * - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, - * dst.strides, ndim, inc) - */ - __pyx_t_1 = (__pyx_v_dtype_is_object != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1369 - * - * if dtype_is_object: - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, # <<<<<<<<<<<<<< - * dst.strides, ndim, inc) - * - */ - __pyx_memoryview_refcount_objects_in_slice_with_gil(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1368 - * - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, - * dst.strides, ndim, inc) - */ - } - - /* "View.MemoryView":1364 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, # <<<<<<<<<<<<<< - * int ndim, bint inc) nogil: - * - */ - - /* function exit code */ -} - -/* "View.MemoryView":1373 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - */ - -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - __Pyx_RefNannyDeclarations - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("refcount_objects_in_slice_with_gil", 0); - - /* "View.MemoryView":1376 - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - * refcount_objects_in_slice(data, shape, strides, ndim, inc) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, __pyx_v_shape, __pyx_v_strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1373 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif -} - -/* "View.MemoryView":1379 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * 
Py_ssize_t *strides, int ndim, bint inc):
- *     cdef Py_ssize_t i
- */
-
-static void __pyx_memoryview_refcount_objects_in_slice(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) {
-  CYTHON_UNUSED Py_ssize_t __pyx_v_i;
-  __Pyx_RefNannyDeclarations
-  Py_ssize_t __pyx_t_1;
-  Py_ssize_t __pyx_t_2;
-  Py_ssize_t __pyx_t_3;
-  int __pyx_t_4;
-  __Pyx_RefNannySetupContext("refcount_objects_in_slice", 0);
-
-  /* "View.MemoryView":1383
- *     cdef Py_ssize_t i
- *
- *     for i in range(shape[0]): # <<<<<<<<<<<<<<
- *         if ndim == 1:
- *             if inc:
- */
-  __pyx_t_1 = (__pyx_v_shape[0]);
-  __pyx_t_2 = __pyx_t_1;
-  for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) {
-    __pyx_v_i = __pyx_t_3;
-
-    /* "View.MemoryView":1384
- *
- *     for i in range(shape[0]):
- *         if ndim == 1: # <<<<<<<<<<<<<<
- *             if inc:
- *                 Py_INCREF((<PyObject **> data)[0])
- */
-    __pyx_t_4 = ((__pyx_v_ndim == 1) != 0);
-    if (__pyx_t_4) {
-
-      /* "View.MemoryView":1385
- *     for i in range(shape[0]):
- *         if ndim == 1:
- *             if inc: # <<<<<<<<<<<<<<
- *                 Py_INCREF((<PyObject **> data)[0])
- *             else:
- */
-      __pyx_t_4 = (__pyx_v_inc != 0);
-      if (__pyx_t_4) {
-
-        /* "View.MemoryView":1386
- *         if ndim == 1:
- *             if inc:
- *                 Py_INCREF((<PyObject **> data)[0]) # <<<<<<<<<<<<<<
- *             else:
- *                 Py_DECREF((<PyObject **> data)[0])
- */
-        Py_INCREF((((PyObject **)__pyx_v_data)[0]));
-
-        /* "View.MemoryView":1385
- *     for i in range(shape[0]):
- *         if ndim == 1:
- *             if inc: # <<<<<<<<<<<<<<
- *                 Py_INCREF((<PyObject **> data)[0])
- *             else:
- */
-        goto __pyx_L6;
-      }
-
-      /* "View.MemoryView":1388
- *                 Py_INCREF((<PyObject **> data)[0])
- *             else:
- *                 Py_DECREF((<PyObject **> data)[0]) # <<<<<<<<<<<<<<
- *         else:
- *             refcount_objects_in_slice(data, shape + 1, strides + 1,
- */
-      /*else*/ {
-        Py_DECREF((((PyObject **)__pyx_v_data)[0]));
-      }
-      __pyx_L6:;
-
-      /* "View.MemoryView":1384
- *
- *     for i in range(shape[0]):
- *         if ndim == 1: # <<<<<<<<<<<<<<
- *             if inc:
- *                 Py_INCREF((<PyObject **> data)[0])
- */
-      goto __pyx_L5;
-    }
-
-    /* "View.MemoryView":1390
- *                 Py_DECREF((<PyObject **> data)[0])
- *         else:
- *             refcount_objects_in_slice(data, shape + 1, strides + 1, # <<<<<<<<<<<<<<
- *                                       ndim - 1, inc)
- *
- */
-    /*else*/ {
-
-      /* "View.MemoryView":1391
- *         else:
- *             refcount_objects_in_slice(data, shape + 1, strides + 1,
- *                                       ndim - 1, inc) # <<<<<<<<<<<<<<
- *
- *         data += strides[0]
- */
-      __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_inc);
-    }
-    __pyx_L5:;
-
-    /* "View.MemoryView":1393
- *                                       ndim - 1, inc)
- *
- *         data += strides[0] # <<<<<<<<<<<<<<
- *
- *
- */
-    __pyx_v_data = (__pyx_v_data + (__pyx_v_strides[0]));
-  }
-
-  /* "View.MemoryView":1379
- *
- * @cname('__pyx_memoryview_refcount_objects_in_slice')
- * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<<
- *                                     Py_ssize_t *strides, int ndim, bint inc):
- *     cdef Py_ssize_t i
- */
-
-  /* function exit code */
-  __Pyx_RefNannyFinishContext();
-}
-
-/* "View.MemoryView":1399
- *
- * @cname('__pyx_memoryview_slice_assign_scalar')
- * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<<
- *                               size_t itemsize, void *item,
- *                               bint dtype_is_object) nogil:
- */
-
-static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item, int __pyx_v_dtype_is_object) {
-
-  /* "View.MemoryView":1402
- *                               size_t itemsize, void *item,
- *                               bint dtype_is_object) nogil:
- *     refcount_copying(dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<<
- *     _slice_assign_scalar(dst.data, dst.shape,
dst.strides, ndim, - * itemsize, item) - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1403 - * bint dtype_is_object) nogil: - * refcount_copying(dst, dtype_is_object, ndim, False) - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, # <<<<<<<<<<<<<< - * itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1405 - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, - * itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * - * - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1399 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1409 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) nogil: - */ - -static void __pyx_memoryview__slice_assign_scalar(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_extent; - int __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - Py_ssize_t __pyx_t_4; - - /* "View.MemoryView":1413 - * size_t itemsize, void *item) nogil: - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t extent = shape[0] - * - */ - __pyx_v_stride = (__pyx_v_strides[0]); - - /* "View.MemoryView":1414 - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] - * cdef Py_ssize_t extent = shape[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_extent = (__pyx_v_shape[0]); - - /* "View.MemoryView":1416 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - __pyx_t_1 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1417 - * - * if ndim == 1: - * for i in range(extent): # <<<<<<<<<<<<<< - * memcpy(data, item, itemsize) - * data += stride - */ - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1418 - * if ndim == 1: - * for i in range(extent): - * memcpy(data, item, itemsize) # <<<<<<<<<<<<<< - * data += stride - * else: - */ - (void)(memcpy(__pyx_v_data, __pyx_v_item, __pyx_v_itemsize)); - - /* "View.MemoryView":1419 - * for i in range(extent): - * memcpy(data, item, itemsize) - * data += stride # <<<<<<<<<<<<<< - * else: - * for i in range(extent): - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - - /* "View.MemoryView":1416 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1421 - * data += stride - * else: - * for i in range(extent): # <<<<<<<<<<<<<< - * _slice_assign_scalar(data, shape + 1, strides + 1, - * ndim - 1, 
itemsize, item) - */ - /*else*/ { - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1422 - * else: - * for i in range(extent): - * _slice_assign_scalar(data, shape + 1, strides + 1, # <<<<<<<<<<<<<< - * ndim - 1, itemsize, item) - * data += stride - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1424 - * _slice_assign_scalar(data, shape + 1, strides + 1, - * ndim - 1, itemsize, item) - * data += stride # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - } - __pyx_L3:; - - /* "View.MemoryView":1409 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) nogil: - */ - - /* function exit code */ -} - -/* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static PyMethodDef __pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum = {"__pyx_unpickle_Enum", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum, METH_VARARGS|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v___pyx_type = 0; - long __pyx_v___pyx_checksum; - PyObject *__pyx_v___pyx_state = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_type,&__pyx_n_s_pyx_checksum,&__pyx_n_s_pyx_state,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_type)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_checksum)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 1); __PYX_ERR(1, 1, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 2); __PYX_ERR(1, 1, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__pyx_unpickle_Enum") < 0)) __PYX_ERR(1, 1, __pyx_L3_error) - } - } else if 
(PyTuple_GET_SIZE(__pyx_args) != 3) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - } - __pyx_v___pyx_type = values[0]; - __pyx_v___pyx_checksum = __Pyx_PyInt_As_long(values[1]); if (unlikely((__pyx_v___pyx_checksum == (long)-1) && PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_v___pyx_state = values[2]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(__pyx_self, __pyx_v___pyx_type, __pyx_v___pyx_checksum, __pyx_v___pyx_state); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_v___pyx_PickleError = 0; - PyObject *__pyx_v___pyx_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum", 0); - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum not in (0xb068931, 0x82a3537, 0x6ae9995): # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))" % __pyx_checksum) - */ - __pyx_t_1 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = (__Pyx_PySequence_ContainsTF(__pyx_t_1, __pyx_tuple__20, Py_NE)); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_3 = (__pyx_t_2 != 0); - if (__pyx_t_3) { - - /* "(tree fragment)":5 - * cdef object __pyx_result - * if __pyx_checksum not in (0xb068931, 0x82a3537, 0x6ae9995): - * from pickle import PickleError as __pyx_PickleError # <<<<<<<<<<<<<< - * raise __pyx_PickleError("Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) - */ - __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_n_s_PickleError); - __Pyx_GIVEREF(__pyx_n_s_PickleError); - PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_PickleError); - __pyx_t_4 = __Pyx_Import(__pyx_n_s_pickle, __pyx_t_1, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_4, __pyx_n_s_PickleError); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_1); - __pyx_v___pyx_PickleError = __pyx_t_1; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); 
__pyx_t_4 = 0; - - /* "(tree fragment)":6 - * if __pyx_checksum not in (0xb068931, 0x82a3537, 0x6ae9995): - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))" % __pyx_checksum) # <<<<<<<<<<<<<< - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: - */ - __pyx_t_1 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = __Pyx_PyString_Format(__pyx_kp_s_Incompatible_checksums_0x_x_vs_0, __pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_INCREF(__pyx_v___pyx_PickleError); - __pyx_t_1 = __pyx_v___pyx_PickleError; __pyx_t_6 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - } - } - __pyx_t_4 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_6, __pyx_t_5) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __PYX_ERR(1, 6, __pyx_L1_error) - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum not in (0xb068931, 0x82a3537, 0x6ae9995): # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))" % __pyx_checksum) - */ - } - - /* "(tree fragment)":7 - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) # <<<<<<<<<<<<<< - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_MemviewEnum_type), __pyx_n_s_new); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - } - } - __pyx_t_4 = (__pyx_t_5) ? 
__Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_5, __pyx_v___pyx_type) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_v___pyx_type);
-  __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
-  if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 7, __pyx_L1_error)
-  __Pyx_GOTREF(__pyx_t_4);
-  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-  __pyx_v___pyx_result = __pyx_t_4;
-  __pyx_t_4 = 0;
-
-  /* "(tree fragment)":8
- *         raise __pyx_PickleError("Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))" % __pyx_checksum)
- *     __pyx_result = Enum.__new__(__pyx_type)
- *     if __pyx_state is not None: # <<<<<<<<<<<<<<
- *         __pyx_unpickle_Enum__set_state(<Enum> __pyx_result, __pyx_state)
- *     return __pyx_result
- */
-  __pyx_t_3 = (__pyx_v___pyx_state != Py_None);
-  __pyx_t_2 = (__pyx_t_3 != 0);
-  if (__pyx_t_2) {
-
-    /* "(tree fragment)":9
- *     __pyx_result = Enum.__new__(__pyx_type)
- *     if __pyx_state is not None:
- *         __pyx_unpickle_Enum__set_state(<Enum> __pyx_result, __pyx_state) # <<<<<<<<<<<<<<
- *     return __pyx_result
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state):
- */
-    if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 9, __pyx_L1_error)
-    __pyx_t_4 = __pyx_unpickle_Enum__set_state(((struct __pyx_MemviewEnum_obj *)__pyx_v___pyx_result), ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 9, __pyx_L1_error)
-    __Pyx_GOTREF(__pyx_t_4);
-    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
-
-    /* "(tree fragment)":8
- *         raise __pyx_PickleError("Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))" % __pyx_checksum)
- *     __pyx_result = Enum.__new__(__pyx_type)
- *     if __pyx_state is not None: # <<<<<<<<<<<<<<
- *         __pyx_unpickle_Enum__set_state(<Enum> __pyx_result, __pyx_state)
- *     return __pyx_result
- */
-  }
-
-  /* "(tree fragment)":10
- *     if __pyx_state is not None:
- *         __pyx_unpickle_Enum__set_state(<Enum> __pyx_result, __pyx_state)
- *     return __pyx_result # <<<<<<<<<<<<<<
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state):
- *     __pyx_result.name = __pyx_state[0]
- */
-  __Pyx_XDECREF(__pyx_r);
-  __Pyx_INCREF(__pyx_v___pyx_result);
-  __pyx_r = __pyx_v___pyx_result;
-  goto __pyx_L0;
-
-  /* "(tree fragment)":1
- * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<<
- *     cdef object __pyx_PickleError
- *     cdef object __pyx_result
- */
-
-  /* function exit code */
-  __pyx_L1_error:;
-  __Pyx_XDECREF(__pyx_t_1);
-  __Pyx_XDECREF(__pyx_t_4);
-  __Pyx_XDECREF(__pyx_t_5);
-  __Pyx_XDECREF(__pyx_t_6);
-  __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename);
-  __pyx_r = NULL;
-  __pyx_L0:;
-  __Pyx_XDECREF(__pyx_v___pyx_PickleError);
-  __Pyx_XDECREF(__pyx_v___pyx_result);
-  __Pyx_XGIVEREF(__pyx_r);
-  __Pyx_RefNannyFinishContext();
-  return __pyx_r;
-}
-
-/* "(tree fragment)":11
- *         __pyx_unpickle_Enum__set_state(<Enum> __pyx_result, __pyx_state)
- *     return __pyx_result
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<<
- *     __pyx_result.name = __pyx_state[0]
- *     if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'):
- */
-
-static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) {
-  PyObject *__pyx_r = NULL;
-  __Pyx_RefNannyDeclarations
-  PyObject *__pyx_t_1 = NULL;
-  int __pyx_t_2;
-  Py_ssize_t
__pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum__set_state", 0); - - /* "(tree fragment)":12 - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] # <<<<<<<<<<<<<< - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 12, __pyx_L1_error) - } - __pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __Pyx_GOTREF(__pyx_v___pyx_result->name); - __Pyx_DECREF(__pyx_v___pyx_result->name); - __pyx_v___pyx_result->name = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 13, __pyx_L1_error) - } - __pyx_t_3 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_4 = ((__pyx_t_3 > 1) != 0); - if (__pyx_t_4) { - } else { - __pyx_t_2 = __pyx_t_4; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_4 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_5 = (__pyx_t_4 != 0); - __pyx_t_2 = __pyx_t_5; - __pyx_L4_bool_binop_done:; - if (__pyx_t_2) { - - /* "(tree fragment)":14 - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) # <<<<<<<<<<<<<< - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_update); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 14, __pyx_L1_error) - } - __pyx_t_6 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - } - } - __pyx_t_1 = (__pyx_t_8) ? 
__Pyx_PyObject_Call2Args(__pyx_t_7, __pyx_t_8, __pyx_t_6) : __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_6);
-    __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;
-    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
-    if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error)
-    __Pyx_GOTREF(__pyx_t_1);
-    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
-    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-
-    /* "(tree fragment)":13
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state):
- *     __pyx_result.name = __pyx_state[0]
- *     if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
- *         __pyx_result.__dict__.update(__pyx_state[1])
- */
-  }
-
-  /* "(tree fragment)":11
- *         __pyx_unpickle_Enum__set_state(<Enum> __pyx_result, __pyx_state)
- *     return __pyx_result
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<<
- *     __pyx_result.name = __pyx_state[0]
- *     if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'):
- */
-
-  /* function exit code */
-  __pyx_r = Py_None; __Pyx_INCREF(Py_None);
-  goto __pyx_L0;
-  __pyx_L1_error:;
-  __Pyx_XDECREF(__pyx_t_1);
-  __Pyx_XDECREF(__pyx_t_6);
-  __Pyx_XDECREF(__pyx_t_7);
-  __Pyx_XDECREF(__pyx_t_8);
-  __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename);
-  __pyx_r = 0;
-  __pyx_L0:;
-  __Pyx_XGIVEREF(__pyx_r);
-  __Pyx_RefNannyFinishContext();
-  return __pyx_r;
-}
-static struct __pyx_vtabstruct_array __pyx_vtable_array;
-
-static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k) {
-  struct __pyx_array_obj *p;
-  PyObject *o;
-  if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) {
-    o = (*t->tp_alloc)(t, 0);
-  } else {
-    o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0);
-  }
-  if (unlikely(!o)) return 0;
-  p = ((struct __pyx_array_obj *)o);
-  p->__pyx_vtab = __pyx_vtabptr_array;
-  p->mode = ((PyObject*)Py_None); Py_INCREF(Py_None);
-  p->_format = ((PyObject*)Py_None); Py_INCREF(Py_None);
-  if (unlikely(__pyx_array___cinit__(o, a, k) < 0)) goto bad;
-  return o;
-  bad:
-  Py_DECREF(o); o = 0;
-  return NULL;
-}
-
-static void __pyx_tp_dealloc_array(PyObject *o) {
-  struct __pyx_array_obj *p = (struct __pyx_array_obj *)o;
-  #if CYTHON_USE_TP_FINALIZE
-  if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) {
-    if (PyObject_CallFinalizerFromDealloc(o)) return;
-  }
-  #endif
-  {
-    PyObject *etype, *eval, *etb;
-    PyErr_Fetch(&etype, &eval, &etb);
-    __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1);
-    __pyx_array___dealloc__(o);
-    __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1);
-    PyErr_Restore(etype, eval, etb);
-  }
-  Py_CLEAR(p->mode);
-  Py_CLEAR(p->_format);
-  (*Py_TYPE(o)->tp_free)(o);
-}
-static PyObject *__pyx_sq_item_array(PyObject *o, Py_ssize_t i) {
-  PyObject *r;
-  PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0;
-  r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x);
-  Py_DECREF(x);
-  return r;
-}
-
-static int __pyx_mp_ass_subscript_array(PyObject *o, PyObject *i, PyObject *v) {
-  if (v) {
-    return __pyx_array___setitem__(o, i, v);
-  }
-  else {
-    PyErr_Format(PyExc_NotImplementedError,
-      "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name);
-    return -1;
-  }
-}
-
-static PyObject *__pyx_tp_getattro_array(PyObject *o, PyObject *n) {
-  PyObject *v = __Pyx_PyObject_GenericGetAttr(o, n);
-  if (!v && PyErr_ExceptionMatches(PyExc_AttributeError)) {
-    PyErr_Clear();
-    v = __pyx_array___getattr__(o, n);
-  }
-  return v;
-}
-
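-/* CPython type glue generated for the Cython `array` class: the `memview`
- * property getter, the method and getset tables, and the sequence, mapping,
- * and buffer slot structs that are wired into the PyTypeObject below. */
-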
-static PyObject *__pyx_getprop___pyx_array_memview(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(o); -} - -static PyMethodDef __pyx_methods_array[] = { - {"__getattr__", (PyCFunction)__pyx_array___getattr__, METH_O|METH_COEXIST, 0}, - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_array_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_array_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_array[] = { - {(char *)"memview", __pyx_getprop___pyx_array_memview, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PySequenceMethods __pyx_tp_as_sequence_array = { - __pyx_array___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_array, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_array = { - __pyx_array___len__, /*mp_length*/ - __pyx_array___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_array, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_array = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_array_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_array = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.array", /*tp_name*/ - sizeof(struct __pyx_array_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_array, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_array, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_array, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - __pyx_tp_getattro_array, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_array, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/ - 0, /*tp_doc*/ - 0, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_array, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_array, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_array, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; - -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject 
*k) { - struct __pyx_MemviewEnum_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_MemviewEnum_obj *)o); - p->name = Py_None; Py_INCREF(Py_None); - return o; -} - -static void __pyx_tp_dealloc_Enum(PyObject *o) { - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - Py_CLEAR(p->name); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_Enum(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - if (p->name) { - e = (*v)(p->name, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_Enum(PyObject *o) { - PyObject* tmp; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - tmp = ((PyObject*)p->name); - p->name = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} - -static PyMethodDef __pyx_methods_Enum[] = { - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_MemviewEnum_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_MemviewEnum_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static PyTypeObject __pyx_type___pyx_MemviewEnum = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.Enum", /*tp_name*/ - sizeof(struct __pyx_MemviewEnum_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_Enum, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_MemviewEnum___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_Enum, /*tp_traverse*/ - __pyx_tp_clear_Enum, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_Enum, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - __pyx_MemviewEnum___init__, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_Enum, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -static struct __pyx_vtabstruct_memoryview __pyx_vtable_memoryview; - -static PyObject 
*__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryview_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_memoryview_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_memoryview; - p->obj = Py_None; Py_INCREF(Py_None); - p->_size = Py_None; Py_INCREF(Py_None); - p->_array_interface = Py_None; Py_INCREF(Py_None); - p->view.obj = NULL; - if (unlikely(__pyx_memoryview___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_memoryview(PyObject *o) { - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryview___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->obj); - Py_CLEAR(p->_size); - Py_CLEAR(p->_array_interface); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_memoryview(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - if (p->obj) { - e = (*v)(p->obj, a); if (e) return e; - } - if (p->_size) { - e = (*v)(p->_size, a); if (e) return e; - } - if (p->_array_interface) { - e = (*v)(p->_array_interface, a); if (e) return e; - } - if (p->view.obj) { - e = (*v)(p->view.obj, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_memoryview(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - tmp = ((PyObject*)p->obj); - p->obj = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->_size); - p->_size = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->_array_interface); - p->_array_interface = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - Py_CLEAR(p->view.obj); - return 0; -} -static PyObject *__pyx_sq_item_memoryview(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_memoryview(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_memoryview___setitem__(o, i, v); - } - else { - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name); - return -1; - } -} - -static PyObject *__pyx_getprop___pyx_memoryview_T(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_base(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_shape(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_strides(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(o); -} - -static PyObject 
*__pyx_getprop___pyx_memoryview_suboffsets(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_ndim(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_itemsize(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_nbytes(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_size(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(o); -} - -static PyMethodDef __pyx_methods_memoryview[] = { - {"is_c_contig", (PyCFunction)__pyx_memoryview_is_c_contig, METH_NOARGS, 0}, - {"is_f_contig", (PyCFunction)__pyx_memoryview_is_f_contig, METH_NOARGS, 0}, - {"copy", (PyCFunction)__pyx_memoryview_copy, METH_NOARGS, 0}, - {"copy_fortran", (PyCFunction)__pyx_memoryview_copy_fortran, METH_NOARGS, 0}, - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_memoryview_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_memoryview_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_memoryview[] = { - {(char *)"T", __pyx_getprop___pyx_memoryview_T, 0, (char *)0, 0}, - {(char *)"base", __pyx_getprop___pyx_memoryview_base, 0, (char *)0, 0}, - {(char *)"shape", __pyx_getprop___pyx_memoryview_shape, 0, (char *)0, 0}, - {(char *)"strides", __pyx_getprop___pyx_memoryview_strides, 0, (char *)0, 0}, - {(char *)"suboffsets", __pyx_getprop___pyx_memoryview_suboffsets, 0, (char *)0, 0}, - {(char *)"ndim", __pyx_getprop___pyx_memoryview_ndim, 0, (char *)0, 0}, - {(char *)"itemsize", __pyx_getprop___pyx_memoryview_itemsize, 0, (char *)0, 0}, - {(char *)"nbytes", __pyx_getprop___pyx_memoryview_nbytes, 0, (char *)0, 0}, - {(char *)"size", __pyx_getprop___pyx_memoryview_size, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PySequenceMethods __pyx_tp_as_sequence_memoryview = { - __pyx_memoryview___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_memoryview, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_memoryview = { - __pyx_memoryview___len__, /*mp_length*/ - __pyx_memoryview___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_memoryview, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_memoryview = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_memoryview_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_memoryview = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.memoryview", /*tp_name*/ - sizeof(struct __pyx_memoryview_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_memoryview, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if 
PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_memoryview___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_memoryview, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_memoryview, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - __pyx_memoryview___str__, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_memoryview, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_memoryview, /*tp_traverse*/ - __pyx_tp_clear_memoryview, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_memoryview, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_memoryview, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_memoryview, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -static struct __pyx_vtabstruct__memoryviewslice __pyx_vtable__memoryviewslice; - -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryviewslice_obj *p; - PyObject *o = __pyx_tp_new_memoryview(t, a, k); - if (unlikely(!o)) return 0; - p = ((struct __pyx_memoryviewslice_obj *)o); - p->__pyx_base.__pyx_vtab = (struct __pyx_vtabstruct_memoryview*)__pyx_vtabptr__memoryviewslice; - p->from_object = Py_None; Py_INCREF(Py_None); - p->from_slice.memview = NULL; - return o; -} - -static void __pyx_tp_dealloc__memoryviewslice(PyObject *o) { - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryviewslice___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->from_object); - PyObject_GC_Track(o); - __pyx_tp_dealloc_memoryview(o); -} - -static int __pyx_tp_traverse__memoryviewslice(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - e = __pyx_tp_traverse_memoryview(o, v, a); if (e) return e; - if (p->from_object) { - e = (*v)(p->from_object, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear__memoryviewslice(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - __pyx_tp_clear_memoryview(o); - tmp = ((PyObject*)p->from_object); - p->from_object = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - __PYX_XDEC_MEMVIEW(&p->from_slice, 1); - return 0; -} - 
-static PyObject *__pyx_getprop___pyx_memoryviewslice_base(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(o); -} - -static PyMethodDef __pyx_methods__memoryviewslice[] = { - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_memoryviewslice_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_memoryviewslice_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets__memoryviewslice[] = { - {(char *)"base", __pyx_getprop___pyx_memoryviewslice_base, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PyTypeObject __pyx_type___pyx_memoryviewslice = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core._memoryviewslice", /*tp_name*/ - sizeof(struct __pyx_memoryviewslice_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc__memoryviewslice, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - #if CYTHON_COMPILING_IN_PYPY - __pyx_memoryview___repr__, /*tp_repr*/ - #else - 0, /*tp_repr*/ - #endif - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - #if CYTHON_COMPILING_IN_PYPY - __pyx_memoryview___str__, /*tp_str*/ - #else - 0, /*tp_str*/ - #endif - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - "Internal class for passing memoryview slices to Python", /*tp_doc*/ - __pyx_tp_traverse__memoryviewslice, /*tp_traverse*/ - __pyx_tp_clear__memoryviewslice, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods__memoryviewslice, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets__memoryviewslice, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new__memoryviewslice, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; - -static PyMethodDef __pyx_methods[] = { - {"maximum_path_c", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_15monotonic_align_4core_1maximum_path_c, METH_VARARGS|METH_KEYWORDS, 0}, - {0, 0, 0, 0} -}; - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_core(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_core}, - {0, NULL} -}; -#endif - -static struct PyModuleDef __pyx_moduledef = { - PyModuleDef_HEAD_INIT, - "core", - 0, /* m_doc */ - #if 
CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ -}; -#endif -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif - -static __Pyx_StringTabEntry __pyx_string_tab[] = { - {&__pyx_n_s_ASCII, __pyx_k_ASCII, sizeof(__pyx_k_ASCII), 0, 0, 1, 1}, - {&__pyx_kp_s_Buffer_view_does_not_expose_stri, __pyx_k_Buffer_view_does_not_expose_stri, sizeof(__pyx_k_Buffer_view_does_not_expose_stri), 0, 0, 1, 0}, - {&__pyx_kp_s_Can_only_create_a_buffer_that_is, __pyx_k_Can_only_create_a_buffer_that_is, sizeof(__pyx_k_Can_only_create_a_buffer_that_is), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_assign_to_read_only_memor, __pyx_k_Cannot_assign_to_read_only_memor, sizeof(__pyx_k_Cannot_assign_to_read_only_memor), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_create_writable_memory_vi, __pyx_k_Cannot_create_writable_memory_vi, sizeof(__pyx_k_Cannot_create_writable_memory_vi), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_index_with_type_s, __pyx_k_Cannot_index_with_type_s, sizeof(__pyx_k_Cannot_index_with_type_s), 0, 0, 1, 0}, - {&__pyx_n_s_Ellipsis, __pyx_k_Ellipsis, sizeof(__pyx_k_Ellipsis), 0, 0, 1, 1}, - {&__pyx_kp_s_Empty_shape_tuple_for_cython_arr, __pyx_k_Empty_shape_tuple_for_cython_arr, sizeof(__pyx_k_Empty_shape_tuple_for_cython_arr), 0, 0, 1, 0}, - {&__pyx_kp_s_Incompatible_checksums_0x_x_vs_0, __pyx_k_Incompatible_checksums_0x_x_vs_0, sizeof(__pyx_k_Incompatible_checksums_0x_x_vs_0), 0, 0, 1, 0}, - {&__pyx_n_s_IndexError, __pyx_k_IndexError, sizeof(__pyx_k_IndexError), 0, 0, 1, 1}, - {&__pyx_kp_s_Indirect_dimensions_not_supporte, __pyx_k_Indirect_dimensions_not_supporte, sizeof(__pyx_k_Indirect_dimensions_not_supporte), 0, 0, 1, 0}, - {&__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_k_Invalid_mode_expected_c_or_fortr, sizeof(__pyx_k_Invalid_mode_expected_c_or_fortr), 0, 0, 1, 0}, - {&__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_k_Invalid_shape_in_axis_d_d, sizeof(__pyx_k_Invalid_shape_in_axis_d_d), 0, 0, 1, 0}, - {&__pyx_n_s_MemoryError, __pyx_k_MemoryError, sizeof(__pyx_k_MemoryError), 0, 0, 1, 1}, - {&__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_k_MemoryView_of_r_at_0x_x, sizeof(__pyx_k_MemoryView_of_r_at_0x_x), 0, 0, 1, 0}, - {&__pyx_kp_s_MemoryView_of_r_object, __pyx_k_MemoryView_of_r_object, sizeof(__pyx_k_MemoryView_of_r_object), 0, 0, 1, 0}, - {&__pyx_n_b_O, __pyx_k_O, sizeof(__pyx_k_O), 0, 0, 0, 1}, - {&__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_k_Out_of_bounds_on_buffer_access_a, sizeof(__pyx_k_Out_of_bounds_on_buffer_access_a), 0, 0, 1, 0}, - {&__pyx_n_s_PickleError, __pyx_k_PickleError, sizeof(__pyx_k_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_TypeError, __pyx_k_TypeError, sizeof(__pyx_k_TypeError), 0, 0, 1, 1}, - {&__pyx_kp_s_Unable_to_convert_item_to_object, __pyx_k_Unable_to_convert_item_to_object, sizeof(__pyx_k_Unable_to_convert_item_to_object), 0, 0, 1, 0}, - {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1}, - {&__pyx_n_s_View_MemoryView, __pyx_k_View_MemoryView, sizeof(__pyx_k_View_MemoryView), 0, 0, 1, 1}, - {&__pyx_n_s_allocate_buffer, __pyx_k_allocate_buffer, sizeof(__pyx_k_allocate_buffer), 0, 0, 1, 1}, - 
{&__pyx_n_s_base, __pyx_k_base, sizeof(__pyx_k_base), 0, 0, 1, 1}, - {&__pyx_n_s_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 0, 1, 1}, - {&__pyx_n_u_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 1, 0, 1}, - {&__pyx_n_s_class, __pyx_k_class, sizeof(__pyx_k_class), 0, 0, 1, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_kp_s_contiguous_and_direct, __pyx_k_contiguous_and_direct, sizeof(__pyx_k_contiguous_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_contiguous_and_indirect, __pyx_k_contiguous_and_indirect, sizeof(__pyx_k_contiguous_and_indirect), 0, 0, 1, 0}, - {&__pyx_n_s_dict, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 1, 1}, - {&__pyx_n_s_dtype_is_object, __pyx_k_dtype_is_object, sizeof(__pyx_k_dtype_is_object), 0, 0, 1, 1}, - {&__pyx_n_s_encode, __pyx_k_encode, sizeof(__pyx_k_encode), 0, 0, 1, 1}, - {&__pyx_n_s_enumerate, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1}, - {&__pyx_n_s_error, __pyx_k_error, sizeof(__pyx_k_error), 0, 0, 1, 1}, - {&__pyx_n_s_flags, __pyx_k_flags, sizeof(__pyx_k_flags), 0, 0, 1, 1}, - {&__pyx_n_s_format, __pyx_k_format, sizeof(__pyx_k_format), 0, 0, 1, 1}, - {&__pyx_n_s_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 0, 1, 1}, - {&__pyx_n_u_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 1, 0, 1}, - {&__pyx_n_s_getstate, __pyx_k_getstate, sizeof(__pyx_k_getstate), 0, 0, 1, 1}, - {&__pyx_kp_s_got_differing_extents_in_dimensi, __pyx_k_got_differing_extents_in_dimensi, sizeof(__pyx_k_got_differing_extents_in_dimensi), 0, 0, 1, 0}, - {&__pyx_n_s_id, __pyx_k_id, sizeof(__pyx_k_id), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_s_itemsize, __pyx_k_itemsize, sizeof(__pyx_k_itemsize), 0, 0, 1, 1}, - {&__pyx_kp_s_itemsize_0_for_cython_array, __pyx_k_itemsize_0_for_cython_array, sizeof(__pyx_k_itemsize_0_for_cython_array), 0, 0, 1, 0}, - {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {&__pyx_n_s_memview, __pyx_k_memview, sizeof(__pyx_k_memview), 0, 0, 1, 1}, - {&__pyx_n_s_mode, __pyx_k_mode, sizeof(__pyx_k_mode), 0, 0, 1, 1}, - {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {&__pyx_n_s_name_2, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1}, - {&__pyx_n_s_ndim, __pyx_k_ndim, sizeof(__pyx_k_ndim), 0, 0, 1, 1}, - {&__pyx_n_s_new, __pyx_k_new, sizeof(__pyx_k_new), 0, 0, 1, 1}, - {&__pyx_kp_s_no_default___reduce___due_to_non, __pyx_k_no_default___reduce___due_to_non, sizeof(__pyx_k_no_default___reduce___due_to_non), 0, 0, 1, 0}, - {&__pyx_n_s_obj, __pyx_k_obj, sizeof(__pyx_k_obj), 0, 0, 1, 1}, - {&__pyx_n_s_pack, __pyx_k_pack, sizeof(__pyx_k_pack), 0, 0, 1, 1}, - {&__pyx_n_s_paths, __pyx_k_paths, sizeof(__pyx_k_paths), 0, 0, 1, 1}, - {&__pyx_n_s_pickle, __pyx_k_pickle, sizeof(__pyx_k_pickle), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_PickleError, __pyx_k_pyx_PickleError, sizeof(__pyx_k_pyx_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_checksum, __pyx_k_pyx_checksum, sizeof(__pyx_k_pyx_checksum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_getbuffer, __pyx_k_pyx_getbuffer, sizeof(__pyx_k_pyx_getbuffer), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_result, __pyx_k_pyx_result, sizeof(__pyx_k_pyx_result), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_state, __pyx_k_pyx_state, sizeof(__pyx_k_pyx_state), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_type, __pyx_k_pyx_type, sizeof(__pyx_k_pyx_type), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_unpickle_Enum, __pyx_k_pyx_unpickle_Enum, sizeof(__pyx_k_pyx_unpickle_Enum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_vtable, __pyx_k_pyx_vtable, 
sizeof(__pyx_k_pyx_vtable), 0, 0, 1, 1}, - {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, - {&__pyx_n_s_reduce, __pyx_k_reduce, sizeof(__pyx_k_reduce), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_cython, __pyx_k_reduce_cython, sizeof(__pyx_k_reduce_cython), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_ex, __pyx_k_reduce_ex, sizeof(__pyx_k_reduce_ex), 0, 0, 1, 1}, - {&__pyx_n_s_setstate, __pyx_k_setstate, sizeof(__pyx_k_setstate), 0, 0, 1, 1}, - {&__pyx_n_s_setstate_cython, __pyx_k_setstate_cython, sizeof(__pyx_k_setstate_cython), 0, 0, 1, 1}, - {&__pyx_n_s_shape, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1}, - {&__pyx_n_s_size, __pyx_k_size, sizeof(__pyx_k_size), 0, 0, 1, 1}, - {&__pyx_n_s_start, __pyx_k_start, sizeof(__pyx_k_start), 0, 0, 1, 1}, - {&__pyx_n_s_step, __pyx_k_step, sizeof(__pyx_k_step), 0, 0, 1, 1}, - {&__pyx_n_s_stop, __pyx_k_stop, sizeof(__pyx_k_stop), 0, 0, 1, 1}, - {&__pyx_kp_s_strided_and_direct, __pyx_k_strided_and_direct, sizeof(__pyx_k_strided_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_direct_or_indirect, __pyx_k_strided_and_direct_or_indirect, sizeof(__pyx_k_strided_and_direct_or_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_indirect, __pyx_k_strided_and_indirect, sizeof(__pyx_k_strided_and_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_stringsource, __pyx_k_stringsource, sizeof(__pyx_k_stringsource), 0, 0, 1, 0}, - {&__pyx_n_s_struct, __pyx_k_struct, sizeof(__pyx_k_struct), 0, 0, 1, 1}, - {&__pyx_n_s_t_xs, __pyx_k_t_xs, sizeof(__pyx_k_t_xs), 0, 0, 1, 1}, - {&__pyx_n_s_t_ys, __pyx_k_t_ys, sizeof(__pyx_k_t_ys), 0, 0, 1, 1}, - {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {&__pyx_kp_s_unable_to_allocate_array_data, __pyx_k_unable_to_allocate_array_data, sizeof(__pyx_k_unable_to_allocate_array_data), 0, 0, 1, 0}, - {&__pyx_kp_s_unable_to_allocate_shape_and_str, __pyx_k_unable_to_allocate_shape_and_str, sizeof(__pyx_k_unable_to_allocate_shape_and_str), 0, 0, 1, 0}, - {&__pyx_n_s_unpack, __pyx_k_unpack, sizeof(__pyx_k_unpack), 0, 0, 1, 1}, - {&__pyx_n_s_update, __pyx_k_update, sizeof(__pyx_k_update), 0, 0, 1, 1}, - {&__pyx_n_s_values, __pyx_k_values, sizeof(__pyx_k_values), 0, 0, 1, 1}, - {0, 0, 0, 0, 0, 0, 0} -}; -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 15, __pyx_L1_error) - __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 134, __pyx_L1_error) - __pyx_builtin_MemoryError = __Pyx_GetBuiltinName(__pyx_n_s_MemoryError); if (!__pyx_builtin_MemoryError) __PYX_ERR(1, 149, __pyx_L1_error) - __pyx_builtin_enumerate = __Pyx_GetBuiltinName(__pyx_n_s_enumerate); if (!__pyx_builtin_enumerate) __PYX_ERR(1, 152, __pyx_L1_error) - __pyx_builtin_TypeError = __Pyx_GetBuiltinName(__pyx_n_s_TypeError); if (!__pyx_builtin_TypeError) __PYX_ERR(1, 2, __pyx_L1_error) - __pyx_builtin_Ellipsis = __Pyx_GetBuiltinName(__pyx_n_s_Ellipsis); if (!__pyx_builtin_Ellipsis) __PYX_ERR(1, 406, __pyx_L1_error) - __pyx_builtin_id = __Pyx_GetBuiltinName(__pyx_n_s_id); if (!__pyx_builtin_id) __PYX_ERR(1, 615, __pyx_L1_error) - __pyx_builtin_IndexError = __Pyx_GetBuiltinName(__pyx_n_s_IndexError); if (!__pyx_builtin_IndexError) __PYX_ERR(1, 834, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* 
"View.MemoryView":134 - * - * if not self.ndim: - * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<< - * - * if itemsize <= 0: - */ - __pyx_tuple__2 = PyTuple_Pack(1, __pyx_kp_s_Empty_shape_tuple_for_cython_arr); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 134, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__2); - __Pyx_GIVEREF(__pyx_tuple__2); - - /* "View.MemoryView":137 - * - * if itemsize <= 0: - * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<< - * - * if not isinstance(format, bytes): - */ - __pyx_tuple__3 = PyTuple_Pack(1, __pyx_kp_s_itemsize_0_for_cython_array); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(1, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__3); - __Pyx_GIVEREF(__pyx_tuple__3); - - /* "View.MemoryView":149 - * - * if not self._shape: - * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_s_unable_to_allocate_shape_and_str); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__4); - __Pyx_GIVEREF(__pyx_tuple__4); - - /* "View.MemoryView":177 - * self.data = malloc(self.len) - * if not self.data: - * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<< - * - * if self.dtype_is_object: - */ - __pyx_tuple__5 = PyTuple_Pack(1, __pyx_kp_s_unable_to_allocate_array_data); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(1, 177, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__5); - __Pyx_GIVEREF(__pyx_tuple__5); - - /* "View.MemoryView":193 - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<< - * info.buf = self.data - * info.len = self.len - */ - __pyx_tuple__6 = PyTuple_Pack(1, __pyx_kp_s_Can_only_create_a_buffer_that_is); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(1, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__6); - __Pyx_GIVEREF(__pyx_tuple__6); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__7); - __Pyx_GIVEREF(__pyx_tuple__7); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__8); - __Pyx_GIVEREF(__pyx_tuple__8); - - /* "View.MemoryView":420 - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<< - * - * have_slices, index = _unellipsify(index, self.view.ndim) - */ - __pyx_tuple__9 = PyTuple_Pack(1, __pyx_kp_s_Cannot_assign_to_read_only_memor); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__9); - __Pyx_GIVEREF(__pyx_tuple__9); - - /* "View.MemoryView":497 - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - * raise ValueError("Unable to 
convert item to object") # <<<<<<<<<<<<<< - * else: - * if len(self.view.format) == 1: - */ - __pyx_tuple__10 = PyTuple_Pack(1, __pyx_kp_s_Unable_to_convert_item_to_object); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(1, 497, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__10); - __Pyx_GIVEREF(__pyx_tuple__10); - - /* "View.MemoryView":522 - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<< - * - * if flags & PyBUF_ND: - */ - __pyx_tuple__11 = PyTuple_Pack(1, __pyx_kp_s_Cannot_create_writable_memory_vi); if (unlikely(!__pyx_tuple__11)) __PYX_ERR(1, 522, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__11); - __Pyx_GIVEREF(__pyx_tuple__11); - - /* "View.MemoryView":572 - * if self.view.strides == NULL: - * - * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<< - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - */ - __pyx_tuple__12 = PyTuple_Pack(1, __pyx_kp_s_Buffer_view_does_not_expose_stri); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__12); - __Pyx_GIVEREF(__pyx_tuple__12); - - /* "View.MemoryView":579 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - */ - __pyx_tuple__13 = PyTuple_New(1); if (unlikely(!__pyx_tuple__13)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__13); - __Pyx_INCREF(__pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_int_neg_1); - PyTuple_SET_ITEM(__pyx_tuple__13, 0, __pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_tuple__13); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__14 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__14); - __Pyx_GIVEREF(__pyx_tuple__14); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__15 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__15); - __Pyx_GIVEREF(__pyx_tuple__15); - - /* "View.MemoryView":684 - * if item is Ellipsis: - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<< - * seen_ellipsis = True - * else: - */ - __pyx_slice__16 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__16)) __PYX_ERR(1, 684, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - - /* "View.MemoryView":705 - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__17 = PyTuple_Pack(1, __pyx_kp_s_Indirect_dimensions_not_supporte); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(1, 705, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__17); - __Pyx_GIVEREF(__pyx_tuple__17); - - /* "(tree fragment)":2 - * def 
__reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__18 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__18)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__18); - __Pyx_GIVEREF(__pyx_tuple__18); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__19 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__19); - __Pyx_GIVEREF(__pyx_tuple__19); - __pyx_tuple__20 = PyTuple_Pack(3, __pyx_int_184977713, __pyx_int_136983863, __pyx_int_112105877); if (unlikely(!__pyx_tuple__20)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__20); - __Pyx_GIVEREF(__pyx_tuple__20); - - /* "View.MemoryView":287 - * return self.name - * - * cdef generic = Enum("") # <<<<<<<<<<<<<< - * cdef strided = Enum("") # default - * cdef indirect = Enum("") - */ - __pyx_tuple__21 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct_or_indirect); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(1, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__21); - __Pyx_GIVEREF(__pyx_tuple__21); - - /* "View.MemoryView":288 - * - * cdef generic = Enum("") - * cdef strided = Enum("") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("") - * - */ - __pyx_tuple__22 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct); if (unlikely(!__pyx_tuple__22)) __PYX_ERR(1, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__22); - __Pyx_GIVEREF(__pyx_tuple__22); - - /* "View.MemoryView":289 - * cdef generic = Enum("") - * cdef strided = Enum("") # default - * cdef indirect = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__23 = PyTuple_Pack(1, __pyx_kp_s_strided_and_indirect); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(1, 289, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__23); - __Pyx_GIVEREF(__pyx_tuple__23); - - /* "View.MemoryView":292 - * - * - * cdef contiguous = Enum("") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("") - * - */ - __pyx_tuple__24 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_direct); if (unlikely(!__pyx_tuple__24)) __PYX_ERR(1, 292, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__24); - __Pyx_GIVEREF(__pyx_tuple__24); - - /* "View.MemoryView":293 - * - * cdef contiguous = Enum("") - * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__25 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_indirect); if (unlikely(!__pyx_tuple__25)) __PYX_ERR(1, 293, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__25); - __Pyx_GIVEREF(__pyx_tuple__25); - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_tuple__26 = PyTuple_Pack(5, __pyx_n_s_pyx_type, __pyx_n_s_pyx_checksum, __pyx_n_s_pyx_state, __pyx_n_s_pyx_PickleError, __pyx_n_s_pyx_result); if (unlikely(!__pyx_tuple__26)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__26); - __Pyx_GIVEREF(__pyx_tuple__26); - __pyx_codeobj__27 = (PyObject*)__Pyx_PyCode_New(3, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__26, 
__pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_pyx_unpickle_Enum, 1, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__27)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - /* InitThreads.init */ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 -PyEval_InitThreads(); -#endif - -if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error) - - if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_112105877 = PyInt_FromLong(112105877L); if (unlikely(!__pyx_int_112105877)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_136983863 = PyInt_FromLong(136983863L); if (unlikely(!__pyx_int_136983863)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_184977713 = PyInt_FromLong(184977713L); if (unlikely(!__pyx_int_184977713)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_neg_1 = PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int __Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - generic = Py_None; Py_INCREF(Py_None); - strided = Py_None; Py_INCREF(Py_None); - indirect = Py_None; Py_INCREF(Py_None); - contiguous = Py_None; Py_INCREF(Py_None); - indirect_contiguous = Py_None; Py_INCREF(Py_None); - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - __pyx_vtabptr_array = &__pyx_vtable_array; - __pyx_vtable_array.get_memview = (PyObject *(*)(struct __pyx_array_obj *))__pyx_array_get_memview; - if (PyType_Ready(&__pyx_type___pyx_array) < 0) __PYX_ERR(1, 106, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_array.tp_print = 0; - #endif - if (__Pyx_SetVtable(__pyx_type___pyx_array.tp_dict, __pyx_vtabptr_array) < 0) __PYX_ERR(1, 106, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_array) < 
0) __PYX_ERR(1, 106, __pyx_L1_error) - __pyx_array_type = &__pyx_type___pyx_array; - if (PyType_Ready(&__pyx_type___pyx_MemviewEnum) < 0) __PYX_ERR(1, 280, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_MemviewEnum.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_MemviewEnum.tp_dictoffset && __pyx_type___pyx_MemviewEnum.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_MemviewEnum.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_MemviewEnum) < 0) __PYX_ERR(1, 280, __pyx_L1_error) - __pyx_MemviewEnum_type = &__pyx_type___pyx_MemviewEnum; - __pyx_vtabptr_memoryview = &__pyx_vtable_memoryview; - __pyx_vtable_memoryview.get_item_pointer = (char *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_get_item_pointer; - __pyx_vtable_memoryview.is_slice = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_is_slice; - __pyx_vtable_memoryview.setitem_slice_assignment = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_slice_assignment; - __pyx_vtable_memoryview.setitem_slice_assign_scalar = (PyObject *(*)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_setitem_slice_assign_scalar; - __pyx_vtable_memoryview.setitem_indexed = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_indexed; - __pyx_vtable_memoryview.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryview_convert_item_to_object; - __pyx_vtable_memoryview.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryview_assign_item_from_object; - if (PyType_Ready(&__pyx_type___pyx_memoryview) < 0) __PYX_ERR(1, 331, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_memoryview.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_memoryview.tp_dictoffset && __pyx_type___pyx_memoryview.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_memoryview.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_SetVtable(__pyx_type___pyx_memoryview.tp_dict, __pyx_vtabptr_memoryview) < 0) __PYX_ERR(1, 331, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_memoryview) < 0) __PYX_ERR(1, 331, __pyx_L1_error) - __pyx_memoryview_type = &__pyx_type___pyx_memoryview; - __pyx_vtabptr__memoryviewslice = &__pyx_vtable__memoryviewslice; - __pyx_vtable__memoryviewslice.__pyx_base = *__pyx_vtabptr_memoryview; - __pyx_vtable__memoryviewslice.__pyx_base.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryviewslice_convert_item_to_object; - __pyx_vtable__memoryviewslice.__pyx_base.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryviewslice_assign_item_from_object; - __pyx_type___pyx_memoryviewslice.tp_base = __pyx_memoryview_type; - if (PyType_Ready(&__pyx_type___pyx_memoryviewslice) < 0) __PYX_ERR(1, 967, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_memoryviewslice.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_memoryviewslice.tp_dictoffset && __pyx_type___pyx_memoryviewslice.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_memoryviewslice.tp_getattro = __Pyx_PyObject_GenericGetAttr; 
- } - if (__Pyx_SetVtable(__pyx_type___pyx_memoryviewslice.tp_dict, __pyx_vtabptr__memoryviewslice) < 0) __PYX_ERR(1, 967, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_memoryviewslice) < 0) __PYX_ERR(1, 967, __pyx_L1_error) - __pyx_memoryviewslice_type = &__pyx_type___pyx_memoryviewslice; - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#ifndef CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC initcore(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC initcore(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_core(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_core(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? 
-1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) { - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { - result = PyDict_SetItemString(moddict, to_name, value); - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_core(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - PyObject *__pyx_t_1 = NULL; - static PyThread_type_lock __pyx_t_2[8]; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'core' has already been imported. 
Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_core(void)", 0); - if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 && defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - PyEval_InitThreads(); - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("core", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - #endif - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - /*--- Initialize various global constants etc. 
---*/ - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_monotonic_align__core) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name_2, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "monotonic_align.core")) { - if (unlikely(PyDict_SetItemString(modules, "monotonic_align.core", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - if (unlikely(__Pyx_modinit_type_init_code() < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - __pyx_k_ = (-1e9); - - /* "monotonic_align/core.pyx":1 - * cimport cython # <<<<<<<<<<<<<< - * from cython.parallel import prange - * - */ - __pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":210 - * info.obj = self - * - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * def __dealloc__(array self): - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_array_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 210, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_array_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 210, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_array_type); - - /* "View.MemoryView":287 - * return self.name - * - * cdef generic = Enum("") # <<<<<<<<<<<<<< - * cdef strided = Enum("") # default - * cdef indirect = Enum("") - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__21, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(generic); - __Pyx_DECREF_SET(generic, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":288 - * - * cdef generic = Enum("") - * cdef strided = Enum("") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("") - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__22, NULL); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(1, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(strided); - __Pyx_DECREF_SET(strided, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":289 - * cdef generic = Enum("") - * cdef strided = Enum("") # default - * cdef indirect = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__23, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 289, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(indirect); - __Pyx_DECREF_SET(indirect, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":292 - * - * - * cdef contiguous = Enum("") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("") - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__24, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 292, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(contiguous); - __Pyx_DECREF_SET(contiguous, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":293 - * - * cdef contiguous = Enum("") - * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__25, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 293, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(indirect_contiguous); - __Pyx_DECREF_SET(indirect_contiguous, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":317 - * - * DEF THREAD_LOCKS_PREALLOCATED = 8 - * cdef int __pyx_memoryview_thread_locks_used = 0 # <<<<<<<<<<<<<< - * cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [ - * PyThread_allocate_lock(), - */ - __pyx_memoryview_thread_locks_used = 0; - - /* "View.MemoryView":318 - * DEF THREAD_LOCKS_PREALLOCATED = 8 - * cdef int __pyx_memoryview_thread_locks_used = 0 - * cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [ # <<<<<<<<<<<<<< - * PyThread_allocate_lock(), - * PyThread_allocate_lock(), - */ - __pyx_t_2[0] = PyThread_allocate_lock(); - __pyx_t_2[1] = PyThread_allocate_lock(); - __pyx_t_2[2] = PyThread_allocate_lock(); - __pyx_t_2[3] = PyThread_allocate_lock(); - __pyx_t_2[4] = PyThread_allocate_lock(); - __pyx_t_2[5] = PyThread_allocate_lock(); - __pyx_t_2[6] = PyThread_allocate_lock(); - __pyx_t_2[7] = PyThread_allocate_lock(); - memcpy(&(__pyx_memoryview_thread_locks[0]), __pyx_t_2, sizeof(__pyx_memoryview_thread_locks[0]) * (8)); - - /* "View.MemoryView":551 - * info.obj = self - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_memoryview_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 551, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_memoryview_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 551, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_memoryview_type); - - /* "View.MemoryView":997 - * return self.from_object - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_memoryview_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 997, __pyx_L1_error) - 
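- /* The capsule created just above wraps the C function pointer
-  * &__pyx_memoryview_getbuffer in a PyCapsule so that it can be stored in the
-  * type's tp_dict under "__pyx_getbuffer". Because tp_dict is mutated
-  * directly instead of going through PyObject_SetAttr, PyType_Modified() is
-  * called afterwards to invalidate CPython's per-type method-lookup cache. */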
__Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_memoryviewslice_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 997, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_memoryviewslice_type); - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum, NULL, __pyx_n_s_View_MemoryView); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_Enum, __pyx_t_1) < 0) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - if (__pyx_m) { - if (__pyx_d) { - __Pyx_AddTraceback("init monotonic_align.core", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - Py_CLEAR(__pyx_m); - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init monotonic_align.core"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 0 : -1; - #elif PY_MAJOR_VERSION >= 3 - return __pyx_m; - #else - return; - #endif -} - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); - if (unlikely(!result)) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", PyString_AS_STRING(name)); -#endif - } - return result; -} - -/* MemviewSliceInit */ -static int -__Pyx_init_memviewslice(struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference) -{ - __Pyx_RefNannyDeclarations - int i, retval=-1; - Py_buffer *buf = &memview->view; - __Pyx_RefNannySetupContext("init_memviewslice", 0); - if (unlikely(memviewslice->memview || memviewslice->data)) { - PyErr_SetString(PyExc_ValueError, - "memviewslice is already initialized!"); - goto fail; - } - if (buf->strides) { - for (i = 0; i < ndim; i++) { - memviewslice->strides[i] = buf->strides[i]; - } - } else { - Py_ssize_t stride = 
buf->itemsize; - for (i = ndim - 1; i >= 0; i--) { - memviewslice->strides[i] = stride; - stride *= buf->shape[i]; - } - } - for (i = 0; i < ndim; i++) { - memviewslice->shape[i] = buf->shape[i]; - if (buf->suboffsets) { - memviewslice->suboffsets[i] = buf->suboffsets[i]; - } else { - memviewslice->suboffsets[i] = -1; - } - } - memviewslice->memview = memview; - memviewslice->data = (char *)buf->buf; - if (__pyx_add_acquisition_count(memview) == 0 && !memview_is_new_reference) { - Py_INCREF(memview); - } - retval = 0; - goto no_fail; -fail: - memviewslice->memview = 0; - memviewslice->data = 0; - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} -#ifndef Py_NO_RETURN -#define Py_NO_RETURN -#endif -static void __pyx_fatalerror(const char *fmt, ...) Py_NO_RETURN { - va_list vargs; - char msg[200]; -#if PY_VERSION_HEX >= 0x030A0000 || defined(HAVE_STDARG_PROTOTYPES) - va_start(vargs, fmt); -#else - va_start(vargs); -#endif - vsnprintf(msg, 200, fmt, vargs); - va_end(vargs); - Py_FatalError(msg); -} -static CYTHON_INLINE int -__pyx_add_acquisition_count_locked(__pyx_atomic_int *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)++; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE int -__pyx_sub_acquisition_count_locked(__pyx_atomic_int *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)--; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE void -__Pyx_INC_MEMVIEW(__Pyx_memviewslice *memslice, int have_gil, int lineno) -{ - int first_time; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) - return; - if (unlikely(__pyx_get_slice_count(memview) < 0)) - __pyx_fatalerror("Acquisition count is %d (line %d)", - __pyx_get_slice_count(memview), lineno); - first_time = __pyx_add_acquisition_count(memview) == 0; - if (unlikely(first_time)) { - if (have_gil) { - Py_INCREF((PyObject *) memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_INCREF((PyObject *) memview); - PyGILState_Release(_gilstate); - } - } -} -static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *memslice, - int have_gil, int lineno) { - int last_time; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) { - memslice->memview = NULL; - return; - } - if (unlikely(__pyx_get_slice_count(memview) <= 0)) - __pyx_fatalerror("Acquisition count is %d (line %d)", - __pyx_get_slice_count(memview), lineno); - last_time = __pyx_sub_acquisition_count(memview) == 1; - memslice->data = NULL; - if (unlikely(last_time)) { - if (have_gil) { - Py_CLEAR(memslice->memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_CLEAR(memslice->memview); - PyGILState_Release(_gilstate); - } - } else { - memslice->memview = NULL; - } -} - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d 
positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? "" : "s", num_found); -} - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - while (PyDict_Next(kwds, &pos, &key, &value)) { - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; - continue; - } - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = (**name == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**name, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 
1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION < 3 - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - return -1; -} - -/* None */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname) { - PyErr_Format(PyExc_UnboundLocalError, "local variable '%s' referenced before assignment", varname); -} - -/* ArgTypeTest */ -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact) -{ - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - else if (exact) { - #if PY_MAJOR_VERSION == 2 - if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1; - #endif - } - else { - if (likely(__Pyx_TypeCheck(obj, type))) return 1; - } - PyErr_Format(PyExc_TypeError, - "Argument '%.200s' has incorrect type (expected %.200s, got %.200s)", - name, type->tp_name, Py_TYPE(obj)->tp_name); - return 0; -} - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = Py_TYPE(func)->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyErrFetchRestore */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -} -#endif - -/* RaiseException */ -#if PY_MAJOR_VERSION < 3 -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, - CYTHON_UNUSED PyObject *cause) { - __Pyx_PyThreadState_declare - Py_XINCREF(type); - if (!value || value == Py_None) - value = NULL; - else - Py_INCREF(value); - if (!tb || tb == Py_None) - tb = NULL; - else { - Py_INCREF(tb); - if (!PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto raise_error; 
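- /* In this Python 2 code path, the three-argument form of raise is validated
-  * first: the third argument must be a traceback object or None, mirroring
-  * the interpreter's own TypeError. On any failure the code jumps to
-  * raise_error below, which drops the references acquired so far. */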
- } - } - if (PyType_Check(type)) { -#if CYTHON_COMPILING_IN_PYPY - if (!value) { - Py_INCREF(Py_None); - value = Py_None; - } -#endif - PyErr_NormalizeException(&type, &value, &tb); - } else { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto raise_error; - } - value = type; - type = (PyObject*) Py_TYPE(type); - Py_INCREF(type); - if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto raise_error; - } - } - __Pyx_PyThreadState_assign - __Pyx_ErrRestore(type, value, tb); - return; -raise_error: - Py_XDECREF(value); - Py_XDECREF(type); - Py_XDECREF(tb); - return; -} -#else -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - PyObject* owned_instance = NULL; - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - if (PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } else if (PyExceptionClass_Check(type)) { - PyObject *instance_class = NULL; - if (value && PyExceptionInstance_Check(value)) { - instance_class = (PyObject*) Py_TYPE(value); - if (instance_class != type) { - int is_subclass = PyObject_IsSubclass(instance_class, type); - if (!is_subclass) { - instance_class = NULL; - } else if (unlikely(is_subclass == -1)) { - goto bad; - } else { - type = instance_class; - } - } - } - if (!instance_class) { - PyObject *args; - if (!value) - args = PyTuple_New(0); - else if (PyTuple_Check(value)) { - Py_INCREF(value); - args = value; - } else - args = PyTuple_Pack(1, value); - if (!args) - goto bad; - owned_instance = PyObject_Call(type, args, NULL); - Py_DECREF(args); - if (!owned_instance) - goto bad; - value = owned_instance; - if (!PyExceptionInstance_Check(value)) { - PyErr_Format(PyExc_TypeError, - "calling %R should have returned an instance of " - "BaseException, not %R", - type, Py_TYPE(value)); - goto bad; - } - } - } else { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - if (cause) { - PyObject *fixed_cause; - if (cause == Py_None) { - fixed_cause = NULL; - } else if (PyExceptionClass_Check(cause)) { - fixed_cause = PyObject_CallObject(cause, NULL); - if (fixed_cause == NULL) - goto bad; - } else if (PyExceptionInstance_Check(cause)) { - fixed_cause = cause; - Py_INCREF(fixed_cause); - } else { - PyErr_SetString(PyExc_TypeError, - "exception causes must derive from " - "BaseException"); - goto bad; - } - PyException_SetCause(value, fixed_cause); - } - PyErr_SetObject(type, value); - if (tb) { -#if CYTHON_COMPILING_IN_PYPY - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#else - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* PyCFunctionFastCall */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject * 
__Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) { - PyCFunctionObject *func = (PyCFunctionObject*)func_obj; - PyCFunction meth = PyCFunction_GET_FUNCTION(func); - PyObject *self = PyCFunction_GET_SELF(func); - int flags = PyCFunction_GET_FLAGS(func); - assert(PyCFunction_Check(func)); - assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))); - assert(nargs >= 0); - assert(nargs == 0 || args != NULL); - /* _PyCFunction_FastCallDict() must not be called with an exception set, - because it may clear it (directly or indirectly) and so the - caller loses its exception */ - assert(!PyErr_Occurred()); - if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) { - return (*((__Pyx_PyCFunctionFastWithKeywords)(void*)meth)) (self, args, nargs, NULL); - } else { - return (*((__Pyx_PyCFunctionFast)(void*)meth)) (self, args, nargs); - } -} -#endif - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. - */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? 
PyDict_Size(kwargs) : 0; - if (Py_EnterRecursiveCall((char*)" while calling a Python object")) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif -#endif - -/* PyObjectCall2Args */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) { - PyObject *args, *result = NULL; - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyFunction_FastCall(function, args, 2); - } - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyCFunction_FastCall(function, args, 2); - } - #endif - args = PyTuple_New(2); - if (unlikely(!args)) goto done; - Py_INCREF(arg1); - PyTuple_SET_ITEM(args, 0, arg1); - Py_INCREF(arg2); - PyTuple_SET_ITEM(args, 1, arg2); - Py_INCREF(function); - result = __Pyx_PyObject_Call(function, args, NULL); - Py_DECREF(args); - Py_DECREF(function); -done: - return result; -} - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = PyCFunction_GET_FUNCTION(func); - self = PyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallOneArg */ -#if CYTHON_COMPILING_IN_CPYTHON -static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_New(1); - if (unlikely(!args)) return NULL; - Py_INCREF(arg); - PyTuple_SET_ITEM(args, 0, arg); - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); 
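- /* PyTuple_SET_ITEM steals a reference, so the Py_INCREF(arg) above is what
-  * keeps the caller's argument alive after the temporary 1-tuple is released
-  * by Py_DECREF(args). The fast paths in the inline wrapper that follows
-  * avoid building the tuple at all for plain Python functions and for
-  * METH_O / fastcall C functions. */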
- return result; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { -#if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCall(func, &arg, 1); - } -#endif - if (likely(PyCFunction_Check(func))) { - if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { - return __Pyx_PyObject_CallMethO(func, arg); -#if CYTHON_FAST_PYCCALL - } else if (__Pyx_PyFastCFunction_Check(func)) { - return __Pyx_PyCFunction_FastCall(func, &arg, 1); -#endif - } - } - return __Pyx__PyObject_CallOneArg(func, arg); -} -#else -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_Pack(1, arg); - if (unlikely(!args)) return NULL; - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); - return result; -} -#endif - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS && (PY_VERSION_HEX < 0x030B0000) - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* DivInt[Py_ssize_t] */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t a, Py_ssize_t b) { - Py_ssize_t q = a / b; - Py_ssize_t r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* GetAttr */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) { -#if CYTHON_USE_TYPE_SLOTS -#if PY_MAJOR_VERSION >= 3 - if (likely(PyUnicode_Check(n))) -#else - if (likely(PyString_Check(n))) -#endif - return __Pyx_PyObject_GetAttrStr(o, n); -#endif - return PyObject_GetAttr(o, n); -} - -/* GetItemInt */ -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (!j) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyList_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyTuple_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); - if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { - PyObject *r = PyList_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } - else if (PyTuple_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? 
i : i + PyTuple_GET_SIZE(o); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } else { - PySequenceMethods *m = Py_TYPE(o)->tp_as_sequence; - if (likely(m && m->sq_item)) { - if (wraparound && unlikely(i < 0) && likely(m->sq_length)) { - Py_ssize_t l = m->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return NULL; - PyErr_Clear(); - } - } - return m->sq_item(o, i); - } - } -#else - if (is_list || PySequence_Check(o)) { - return PySequence_GetItem(o, i); - } -#endif - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -} - -/* ObjectGetItem */ -#if CYTHON_USE_TYPE_SLOTS -static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject* index) { - PyObject *runerr; - Py_ssize_t key_value; - PySequenceMethods *m = Py_TYPE(obj)->tp_as_sequence; - if (unlikely(!(m && m->sq_item))) { - PyErr_Format(PyExc_TypeError, "'%.200s' object is not subscriptable", Py_TYPE(obj)->tp_name); - return NULL; - } - key_value = __Pyx_PyIndex_AsSsize_t(index); - if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) { - return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1); - } - if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) { - PyErr_Clear(); - PyErr_Format(PyExc_IndexError, "cannot fit '%.200s' into an index-sized integer", Py_TYPE(index)->tp_name); - } - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key) { - PyMappingMethods *m = Py_TYPE(obj)->tp_as_mapping; - if (likely(m && m->mp_subscript)) { - return m->mp_subscript(obj, key); - } - return __Pyx_PyObject_GetIndex(obj, key); -} -#endif - -/* decode_c_string */ -static CYTHON_INLINE PyObject* __Pyx_decode_c_string( - const char* cstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) { - Py_ssize_t length; - if (unlikely((start < 0) | (stop < 0))) { - size_t slen = strlen(cstring); - if (unlikely(slen > (size_t) PY_SSIZE_T_MAX)) { - PyErr_SetString(PyExc_OverflowError, - "c-string too long to convert to Python"); - return NULL; - } - length = (Py_ssize_t) slen; - if (start < 0) { - start += length; - if (start < 0) - start = 0; - } - if (stop < 0) - stop += length; - } - if (unlikely(stop <= start)) - return __Pyx_NewRef(__pyx_empty_unicode); - length = stop - start; - cstring += start; - if (decode_func) { - return decode_func(cstring, length, errors); - } else { - return PyUnicode_Decode(cstring, length, encoding, errors); - } -} - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; icurexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; - if (unlikely(PyTuple_Check(err))) - return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - return __Pyx_PyErr_GivenExceptionMatches(exc_type, err); -} -#endif - -/* GetAttr3 */ -static PyObject *__Pyx_GetAttr3Default(PyObject *d) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (unlikely(!__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - return NULL; - __Pyx_PyErr_Clear(); - Py_INCREF(d); - return d; -} -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, PyObject *d) { - PyObject *r = __Pyx_GetAttr(o, n); - return (likely(r)) ? 
r : __Pyx_GetAttr3Default(d); -} - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? 
"" : "s"); -} - -/* RaiseNoneIterError */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); -} - -/* ExtTypeTest */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - if (likely(__Pyx_TypeCheck(obj, type))) - return 1; - PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s", - Py_TYPE(obj)->tp_name, type->tp_name); - return 0; -} - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - #endif - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -#endif - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - PyObject *local_type, *local_value, *local_tb; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = 
exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* SwapException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = *type; - exc_info->exc_value = *value; - exc_info->exc_traceback = *tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = *type; - tstate->exc_value = *value; - tstate->exc_traceback = *tb; - #endif - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); - PyErr_SetExcInfo(*type, *value, *tb); - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#endif - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *empty_list = 0; - PyObject *module = 0; - PyObject *global_dict = 0; - PyObject *empty_dict = 0; - PyObject *list; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); - if (!py_import) - goto bad; - #endif - if (from_list) - list = from_list; - else { - empty_list = PyList_New(0); - if (!empty_list) - goto bad; - list = empty_list; - } - global_dict = PyModule_GetDict(__pyx_m); - if (!global_dict) - goto bad; - empty_dict = PyDict_New(); - if (!empty_dict) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) { - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, 1); - if (!module) { - if (!PyErr_ExceptionMatches(PyExc_ImportError)) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (!py_level) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, global_dict, empty_dict, list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, level); - #endif - } - } -bad: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - Py_XDECREF(empty_list); - Py_XDECREF(empty_dict); - return module; -} - -/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = a->tp_base; - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = 
a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -#if PY_MAJOR_VERSION == 2 -static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { - PyObject *exception, *value, *tb; - int res; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&exception, &value, &tb); - res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0; - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - if (!res) { - res = PyObject_IsSubclass(err, exc_type2); - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - } - __Pyx_ErrRestore(exception, value, tb); - return res; -} -#else -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0; - if (!res) { - res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } - return res; -} -#endif -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i= 0 || (x^b) >= 0)) - return PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - const digit* digits = ((PyLongObject*)op1)->ob_digit; - const Py_ssize_t size = Py_SIZE(op1); - if (likely(__Pyx_sst_abs(size) <= 1)) { - a = likely(size) ? 
digits[0] : 0; - if (size == -1) a = -a; - } else { - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - } - x = a + b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla + llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; - double a = PyFloat_AS_DOUBLE(op1); - double result; - PyFPE_START_PROTECT("add", return NULL) - 
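- /* Float fast path of the optimized "add int constant" helper: the integer
-  * branches above fold small PyLong values (up to four digits/limbs) into
-  * native long or long long arithmetic, and the limb-count guards leave
-  * enough headroom that a + b cannot overflow before falling back to
-  * nb_add. Exact floats are handled here by widening the C constant to
-  * double; PyFPE_START/END_PROTECT are the legacy SIGFPE-guard macros,
-  * which are no-ops on modern CPython. */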
result = ((double)a) + (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2); -} -#endif - -/* DivInt[long] */ -static CYTHON_INLINE long __Pyx_div_long(long a, long b) { - long q = a / b; - long r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* ImportFrom */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* HasAttr */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *o, PyObject *n) { - PyObject *r; - if (unlikely(!__Pyx_PyBaseString_Check(n))) { - PyErr_SetString(PyExc_TypeError, - "hasattr(): attribute name must be string"); - return -1; - } - r = __Pyx_GetAttr(o, n); - if (unlikely(!r)) { - PyErr_Clear(); - return 0; - } else { - Py_DECREF(r); - return 1; - } -} - -/* PyObject_GenericGetAttrNoDict */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) { - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'%.50s' object has no attribute '%U'", - tp->tp_name, attr_name); -#else - "'%.50s' object has no attribute '%.400s'", - tp->tp_name, PyString_AS_STRING(attr_name)); -#endif - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) { - PyObject *descr; - PyTypeObject *tp = Py_TYPE(obj); - if (unlikely(!PyString_Check(attr_name))) { - return PyObject_GenericGetAttr(obj, attr_name); - } - assert(!tp->tp_dictoffset); - descr = _PyType_Lookup(tp, attr_name); - if (unlikely(!descr)) { - return __Pyx_RaiseGenericGetAttributeError(tp, attr_name); - } - Py_INCREF(descr); - #if PY_MAJOR_VERSION < 3 - if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS))) - #endif - { - descrgetfunc f = Py_TYPE(descr)->tp_descr_get; - if (unlikely(f)) { - PyObject *res = f(descr, obj, (PyObject *)tp); - Py_DECREF(descr); - return res; - } - } - return descr; -} -#endif - -/* PyObject_GenericGetAttr */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name) { - if (unlikely(Py_TYPE(obj)->tp_dictoffset)) { - return PyObject_GenericGetAttr(obj, attr_name); - } - return __Pyx_PyObject_GenericGetAttrNoDict(obj, attr_name); -} -#endif - -/* SetVTable */ -static int __Pyx_SetVtable(PyObject *dict, void *vtable) { -#if PY_VERSION_HEX >= 0x02070000 - PyObject *ob = PyCapsule_New(vtable, 0, 0); -#else - PyObject *ob = PyCObject_FromVoidPtr(vtable, 0); -#endif - if (!ob) - goto bad; - if (PyDict_SetItem(dict, __pyx_n_s_pyx_vtable, ob) < 0) - goto bad; - Py_DECREF(ob); - return 0; -bad: - Py_XDECREF(ob); - return -1; -} - -/* PyObjectGetAttrStrNoError */ -static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (likely(__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - __Pyx_PyErr_Clear(); -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name) { - PyObject *result; -#if CYTHON_COMPILING_IN_CPYTHON && 
CYTHON_USE_TYPE_SLOTS && PY_VERSION_HEX >= 0x030700B1 - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro == PyObject_GenericGetAttr)) { - return _PyObject_GenericGetAttrWithDict(obj, attr_name, NULL, 1); - } -#endif - result = __Pyx_PyObject_GetAttrStr(obj, attr_name); - if (unlikely(!result)) { - __Pyx_PyObject_GetAttrStr_ClearAttributeError(); - } - return result; -} - -/* SetupReduce */ -static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) { - int ret; - PyObject *name_attr; - name_attr = __Pyx_PyObject_GetAttrStr(meth, __pyx_n_s_name_2); - if (likely(name_attr)) { - ret = PyObject_RichCompareBool(name_attr, name, Py_EQ); - } else { - ret = -1; - } - if (unlikely(ret < 0)) { - PyErr_Clear(); - ret = 0; - } - Py_XDECREF(name_attr); - return ret; -} -static int __Pyx_setup_reduce(PyObject* type_obj) { - int ret = 0; - PyObject *object_reduce = NULL; - PyObject *object_getstate = NULL; - PyObject *object_reduce_ex = NULL; - PyObject *reduce = NULL; - PyObject *reduce_ex = NULL; - PyObject *reduce_cython = NULL; - PyObject *setstate = NULL; - PyObject *setstate_cython = NULL; - PyObject *getstate = NULL; -#if CYTHON_USE_PYTYPE_LOOKUP - getstate = _PyType_Lookup((PyTypeObject*)type_obj, __pyx_n_s_getstate); -#else - getstate = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_getstate); - if (!getstate && PyErr_Occurred()) { - goto __PYX_BAD; - } -#endif - if (getstate) { -#if CYTHON_USE_PYTYPE_LOOKUP - object_getstate = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_getstate); -#else - object_getstate = __Pyx_PyObject_GetAttrStrNoError((PyObject*)&PyBaseObject_Type, __pyx_n_s_getstate); - if (!object_getstate && PyErr_Occurred()) { - goto __PYX_BAD; - } -#endif - if (object_getstate != getstate) { - goto __PYX_GOOD; - } - } -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#else - object_reduce_ex = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#endif - reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_ex); if (unlikely(!reduce_ex)) goto __PYX_BAD; - if (reduce_ex == object_reduce_ex) { -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#else - object_reduce = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#endif - reduce = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce); if (unlikely(!reduce)) goto __PYX_BAD; - if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, __pyx_n_s_reduce_cython)) { - reduce_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_reduce_cython); - if (likely(reduce_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce, reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (reduce == object_reduce || PyErr_Occurred()) { - goto __PYX_BAD; - } - setstate = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate); - if (!setstate) PyErr_Clear(); - if (!setstate || __Pyx_setup_reduce_is_named(setstate, __pyx_n_s_setstate_cython)) { - setstate_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_setstate_cython); - if (likely(setstate_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, 
__pyx_n_s_setstate, setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (!setstate || PyErr_Occurred()) { - goto __PYX_BAD; - } - } - PyType_Modified((PyTypeObject*)type_obj); - } - } - goto __PYX_GOOD; -__PYX_BAD: - if (!PyErr_Occurred()) - PyErr_Format(PyExc_RuntimeError, "Unable to initialize pickling for %s", ((PyTypeObject*)type_obj)->tp_name); - ret = -1; -__PYX_GOOD: -#if !CYTHON_USE_PYTYPE_LOOKUP - Py_XDECREF(object_reduce); - Py_XDECREF(object_reduce_ex); - Py_XDECREF(object_getstate); - Py_XDECREF(getstate); -#endif - Py_XDECREF(reduce); - Py_XDECREF(reduce_ex); - Py_XDECREF(reduce_cython); - Py_XDECREF(setstate); - Py_XDECREF(setstate_cython); - return ret; -} - -/* CLineInTraceback */ -#ifndef CYTHON_CLINE_IN_TRACEBACK -static int __Pyx_CLineForTraceback(CYTHON_NCP_UNUSED PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - if (unlikely(!__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); - if (use_cline_obj) { - use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True; - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - (void) PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - } - if (unlikely(!entries)) { - entries = 
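- /* Cache shape note (describing the code around this allocation): the
-    code-object cache is a single sorted array of {code_line, code_object}
-    entries searched with __pyx_bisect_code_objects; the first insertion
-    allocates 64 slots here, and __pyx_insert_code_object later grows the
-    array by 64 entries at a time with PyMem_Realloc, shifting entries to
-    keep the array sorted. */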
(__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - __pyx_code_cache.count = 1; - entries[0].code_line = code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = NULL; - PyObject *py_funcname = NULL; - #if PY_MAJOR_VERSION < 3 - PyObject *py_srcfile = NULL; - py_srcfile = PyString_FromString(filename); - if (!py_srcfile) goto bad; - #endif - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - funcname = PyUnicode_AsUTF8(py_funcname); - if (!funcname) goto bad; - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - if (!py_funcname) goto bad; - #endif - } - #if PY_MAJOR_VERSION < 3 - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); - #else - py_code = PyCode_NewEmpty(filename, funcname, py_line); - #endif - Py_XDECREF(py_funcname); // XDECREF since it's only set on Py3 if cline - return py_code; -bad: - Py_XDECREF(py_funcname); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_srcfile); - #endif - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject *ptype, *pvalue, *ptraceback; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? 
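- /* Key encoding at this exact spot: C line numbers are looked up negated
-    (-c_line) so they cannot collide with Python line numbers in the same
-    cache; the matching insertion below uses the identical
-    'c_line ? -c_line : py_line' key. */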
-c_line : py_line); - if (!py_code) { - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) { - /* If the code object creation fails, then we should clear the - fetched exception references and propagate the new exception */ - Py_XDECREF(ptype); - Py_XDECREF(pvalue); - Py_XDECREF(ptraceback); - goto bad; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - __pyx_insert_code_object(c_line ? -c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} - -#if PY_MAJOR_VERSION < 3 -static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) { - if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_array_type)) return __pyx_array_getbuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_memoryview_type)) return __pyx_memoryview_getbuffer(obj, view, flags); - PyErr_Format(PyExc_TypeError, "'%.200s' does not have the buffer interface", Py_TYPE(obj)->tp_name); - return -1; -} -static void __Pyx_ReleaseBuffer(Py_buffer *view) { - PyObject *obj = view->obj; - if (!obj) return; - if (PyObject_CheckBuffer(obj)) { - PyBuffer_Release(view); - return; - } - if ((0)) {} - view->obj = NULL; - Py_DECREF(obj); -} -#endif - - -/* MemviewSliceIsContig */ -static int -__pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim) -{ - int i, index, step, start; - Py_ssize_t itemsize = mvs.memview->view.itemsize; - if (order == 'F') { - step = 1; - start = 0; - } else { - step = -1; - start = ndim - 1; - } - for (i = 0; i < ndim; i++) { - index = start + step * i; - if (mvs.suboffsets[index] >= 0 || mvs.strides[index] != itemsize) - return 0; - itemsize *= mvs.shape[index]; - } - return 1; -} - -/* OverlappingSlices */ -static void -__pyx_get_array_memory_extents(__Pyx_memviewslice *slice, - void **out_start, void **out_end, - int ndim, size_t itemsize) -{ - char *start, *end; - int i; - start = end = slice->data; - for (i = 0; i < ndim; i++) { - Py_ssize_t stride = slice->strides[i]; - Py_ssize_t extent = slice->shape[i]; - if (extent == 0) { - *out_start = *out_end = start; - return; - } else { - if (stride > 0) - end += stride * (extent - 1); - else - start += stride * (extent - 1); - } - } - *out_start = start; - *out_end = end + itemsize; -} -static int -__pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize) -{ - void *start1, *end1, *start2, *end2; - __pyx_get_array_memory_extents(slice1, &start1, &end1, ndim, itemsize); - __pyx_get_array_memory_extents(slice2, &start2, &end2, ndim, itemsize); - return (start1 < end2) && (start2 < end1); -} - -/* Capsule */ -static CYTHON_INLINE PyObject * -__pyx_capsule_create(void *p, CYTHON_UNUSED const char *sig) -{ - PyObject *cobj; -#if PY_VERSION_HEX >= 0x02070000 - cobj = PyCapsule_New(p, sig, NULL); -#else - cobj = PyCObject_FromVoidPtr(p, NULL); -#endif - return cobj; -} - -/* IsLittleEndian */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void) -{ - union { - uint32_t u32; - uint8_t u8[4]; - } S; - S.u32 = 0x01020304; - return S.u8[0] == 4; -} - -/* BufferFormatCheck */ -static void 
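- /* Orientation for the BufferFormatCheck block that follows (a sketch,
-    not normative): these helpers validate a PEP 3118 / struct-module
-    format string against the compile-time __Pyx_TypeInfo of the expected
-    dtype. For a hypothetical C struct { float x, y; } a buffer exporter
-    would typically report a format such as "T{f:x:f:y:}", which
-    __Pyx_BufFmt_CheckString walks character by character. */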
__Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type) { - stack[0].field = &ctx->root; - stack[0].parent_offset = 0; - ctx->root.type = type; - ctx->root.name = "buffer dtype"; - ctx->root.offset = 0; - ctx->head = stack; - ctx->head->field = &ctx->root; - ctx->fmt_offset = 0; - ctx->head->parent_offset = 0; - ctx->new_packmode = '@'; - ctx->enc_packmode = '@'; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->is_complex = 0; - ctx->is_valid_array = 0; - ctx->struct_alignment = 0; - while (type->typegroup == 'S') { - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = 0; - type = type->fields->type; - } -} -static int __Pyx_BufFmt_ParseNumber(const char** ts) { - int count; - const char* t = *ts; - if (*t < '0' || *t > '9') { - return -1; - } else { - count = *t++ - '0'; - while (*t >= '0' && *t <= '9') { - count *= 10; - count += *t++ - '0'; - } - } - *ts = t; - return count; -} -static int __Pyx_BufFmt_ExpectNumber(const char **ts) { - int number = __Pyx_BufFmt_ParseNumber(ts); - if (number == -1) - PyErr_Format(PyExc_ValueError,\ - "Does not understand character buffer dtype format string ('%c')", **ts); - return number; -} -static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) { - PyErr_Format(PyExc_ValueError, - "Unexpected format string character: '%c'", ch); -} -static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) { - switch (ch) { - case '?': return "'bool'"; - case 'c': return "'char'"; - case 'b': return "'signed char'"; - case 'B': return "'unsigned char'"; - case 'h': return "'short'"; - case 'H': return "'unsigned short'"; - case 'i': return "'int'"; - case 'I': return "'unsigned int'"; - case 'l': return "'long'"; - case 'L': return "'unsigned long'"; - case 'q': return "'long long'"; - case 'Q': return "'unsigned long long'"; - case 'f': return (is_complex ? "'complex float'" : "'float'"); - case 'd': return (is_complex ? "'complex double'" : "'double'"); - case 'g': return (is_complex ? "'complex long double'" : "'long double'"); - case 'T': return "a struct"; - case 'O': return "Python object"; - case 'P': return "a pointer"; - case 's': case 'p': return "a string"; - case 0: return "end"; - default: return "unparseable format string"; - } -} -static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return 2; - case 'i': case 'I': case 'l': case 'L': return 4; - case 'q': case 'Q': return 8; - case 'f': return (is_complex ? 8 : 4); - case 'd': return (is_complex ? 16 : 8); - case 'g': { - PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g').."); - return 0; - } - case 'O': case 'P': return sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(short); - case 'i': case 'I': return sizeof(int); - case 'l': case 'L': return sizeof(long); - #ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(PY_LONG_LONG); - #endif - case 'f': return sizeof(float) * (is_complex ? 2 : 1); - case 'd': return sizeof(double) * (is_complex ? 2 : 1); - case 'g': return sizeof(long double) * (is_complex ? 
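- /* Sizing note for the two helpers around this point: the standard sizes
-    above are the struct module's fixed widths (e.g. 'i' -> 4, 'q' -> 8,
-    'd' -> 8) used for '=', '<', '>' and '!' packing, while the native
-    sizes here use the compiler's sizeof, doubled for the complex 'Zf',
-    'Zd' and 'Zg' spellings handled later in __Pyx_BufFmt_CheckString. */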
2 : 1);
- case 'O': case 'P': return sizeof(void*);
- default: {
- __Pyx_BufFmt_RaiseUnexpectedChar(ch);
- return 0;
- }
- }
-}
-typedef struct { char c; short x; } __Pyx_st_short;
-typedef struct { char c; int x; } __Pyx_st_int;
-typedef struct { char c; long x; } __Pyx_st_long;
-typedef struct { char c; float x; } __Pyx_st_float;
-typedef struct { char c; double x; } __Pyx_st_double;
-typedef struct { char c; long double x; } __Pyx_st_longdouble;
-typedef struct { char c; void *x; } __Pyx_st_void_p;
-#ifdef HAVE_LONG_LONG
-typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong;
-#endif
-static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) {
- switch (ch) {
- case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;
- case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short);
- case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int);
- case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long);
-#ifdef HAVE_LONG_LONG
- case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG);
-#endif
- case 'f': return sizeof(__Pyx_st_float) - sizeof(float);
- case 'd': return sizeof(__Pyx_st_double) - sizeof(double);
- case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double);
- case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*);
- default:
- __Pyx_BufFmt_RaiseUnexpectedChar(ch);
- return 0;
- }
-}
-/* These are for computing the padding at the end of the struct to align
- on the first member of the struct. This will probably be the same as above,
- but we don't have any guarantees.
- */
-typedef struct { short x; char c; } __Pyx_pad_short;
-typedef struct { int x; char c; } __Pyx_pad_int;
-typedef struct { long x; char c; } __Pyx_pad_long;
-typedef struct { float x; char c; } __Pyx_pad_float;
-typedef struct { double x; char c; } __Pyx_pad_double;
-typedef struct { long double x; char c; } __Pyx_pad_longdouble;
-typedef struct { void *x; char c; } __Pyx_pad_void_p;
-#ifdef HAVE_LONG_LONG
-typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong;
-#endif
-static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) {
- switch (ch) {
- case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;
- case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short);
- case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int);
- case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long);
-#ifdef HAVE_LONG_LONG
- case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG);
-#endif
- case 'f': return sizeof(__Pyx_pad_float) - sizeof(float);
- case 'd': return sizeof(__Pyx_pad_double) - sizeof(double);
- case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double);
- case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*);
- default:
- __Pyx_BufFmt_RaiseUnexpectedChar(ch);
- return 0;
- }
-}
-static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) {
- switch (ch) {
- case 'c':
- return 'H';
- case 'b': case 'h': case 'i':
- case 'l': case 'q': case 's': case 'p':
- return 'I';
- case '?': case 'B': case 'H': case 'I': case 'L': case 'Q':
- return 'U';
- case 'f': case 'd': case 'g':
- return (is_complex ? 
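- /* Typegroup letters assigned just below, as used by the comparisons in
-    __Pyx_BufFmt_ProcessTypeChunk: 'H' plain bytes/char, 'I' signed
-    integers, 'U' unsigned integers, 'R' real floating point, 'C' complex,
-    'O' Python objects, 'P' pointers; 'S' marks struct types (see the
-    typegroup == 'S' walk in __Pyx_BufFmt_Init above). */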
'C' : 'R'); - case 'O': - return 'O'; - case 'P': - return 'P'; - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) { - if (ctx->head == NULL || ctx->head->field == &ctx->root) { - const char* expected; - const char* quote; - if (ctx->head == NULL) { - expected = "end"; - quote = ""; - } else { - expected = ctx->head->field->type->name; - quote = "'"; - } - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected %s%s%s but got %s", - quote, expected, quote, - __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex)); - } else { - __Pyx_StructField* field = ctx->head->field; - __Pyx_StructField* parent = (ctx->head - 1)->field; - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'", - field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex), - parent->type->name, field->name); - } -} -static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) { - char group; - size_t size, offset, arraysize = 1; - if (ctx->enc_type == 0) return 0; - if (ctx->head->field->type->arraysize[0]) { - int i, ndim = 0; - if (ctx->enc_type == 's' || ctx->enc_type == 'p') { - ctx->is_valid_array = ctx->head->field->type->ndim == 1; - ndim = 1; - if (ctx->enc_count != ctx->head->field->type->arraysize[0]) { - PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %zu", - ctx->head->field->type->arraysize[0], ctx->enc_count); - return -1; - } - } - if (!ctx->is_valid_array) { - PyErr_Format(PyExc_ValueError, "Expected %d dimensions, got %d", - ctx->head->field->type->ndim, ndim); - return -1; - } - for (i = 0; i < ctx->head->field->type->ndim; i++) { - arraysize *= ctx->head->field->type->arraysize[i]; - } - ctx->is_valid_array = 0; - ctx->enc_count = 1; - } - group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, ctx->is_complex); - do { - __Pyx_StructField* field = ctx->head->field; - __Pyx_TypeInfo* type = field->type; - if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') { - size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex); - } else { - size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex); - } - if (ctx->enc_packmode == '@') { - size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex); - size_t align_mod_offset; - if (align_at == 0) return -1; - align_mod_offset = ctx->fmt_offset % align_at; - if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset; - if (ctx->struct_alignment == 0) - ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type, - ctx->is_complex); - } - if (type->size != size || type->typegroup != group) { - if (type->typegroup == 'C' && type->fields != NULL) { - size_t parent_offset = ctx->head->parent_offset + field->offset; - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = parent_offset; - continue; - } - if ((type->typegroup == 'H' || group == 'H') && type->size == size) { - } else { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - } - offset = ctx->head->parent_offset + field->offset; - if (ctx->fmt_offset != offset) { - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch; next field is at offset %" CYTHON_FORMAT_SSIZE_T "d but %" CYTHON_FORMAT_SSIZE_T "d expected", - (Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset); - return -1; - } - ctx->fmt_offset += size; - if (arraysize) - ctx->fmt_offset += (arraysize - 1) * size; - --ctx->enc_count; - while (1) { - if 
(field == &ctx->root) {
- ctx->head = NULL;
- if (ctx->enc_count != 0) {
- __Pyx_BufFmt_RaiseExpected(ctx);
- return -1;
- }
- break;
- }
- ctx->head->field = ++field;
- if (field->type == NULL) {
- --ctx->head;
- field = ctx->head->field;
- continue;
- } else if (field->type->typegroup == 'S') {
- size_t parent_offset = ctx->head->parent_offset + field->offset;
- if (field->type->fields->type == NULL) continue;
- field = field->type->fields;
- ++ctx->head;
- ctx->head->field = field;
- ctx->head->parent_offset = parent_offset;
- break;
- } else {
- break;
- }
- }
- } while (ctx->enc_count);
- ctx->enc_type = 0;
- ctx->is_complex = 0;
- return 0;
-}
-static PyObject *
-__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp)
-{
- const char *ts = *tsp;
- int i = 0, number, ndim;
- ++ts;
- if (ctx->new_count != 1) {
- PyErr_SetString(PyExc_ValueError,
- "Cannot handle repeated arrays in format string");
- return NULL;
- }
- if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;
- ndim = ctx->head->field->type->ndim;
- while (*ts && *ts != ')') {
- switch (*ts) {
- /* advance past whitespace here; a bare 'continue' would retest the
-    same character forever (cf. the whitespace cases in
-    __Pyx_BufFmt_CheckString below, which do '++ts;') */
- case ' ': case '\f': case '\r': case '\n': case '\t': case '\v': ++ts; continue;
- default: break;
- }
- number = __Pyx_BufFmt_ExpectNumber(&ts);
- if (number == -1) return NULL;
- if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i])
- return PyErr_Format(PyExc_ValueError,
- "Expected a dimension of size %zu, got %d",
- ctx->head->field->type->arraysize[i], number);
- if (*ts != ',' && *ts != ')')
- return PyErr_Format(PyExc_ValueError,
- "Expected a comma in format string, got '%c'", *ts);
- if (*ts == ',') ts++;
- i++;
- }
- if (i != ndim)
- return PyErr_Format(PyExc_ValueError, "Expected %d dimension(s), got %d",
- ctx->head->field->type->ndim, i);
- if (!*ts) {
- PyErr_SetString(PyExc_ValueError,
- "Unexpected end of format string, expected ')'");
- return NULL;
- }
- ctx->is_valid_array = 1;
- ctx->new_count = 1;
- *tsp = ++ts;
- return Py_None;
-}
-static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) {
- int got_Z = 0;
- while (1) {
- switch(*ts) {
- case 0:
- if (ctx->enc_type != 0 && ctx->head == NULL) {
- __Pyx_BufFmt_RaiseExpected(ctx);
- return NULL;
- }
- if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;
- if (ctx->head != NULL) {
- __Pyx_BufFmt_RaiseExpected(ctx);
- return NULL;
- }
- return ts;
- case ' ':
- case '\r':
- case '\n':
- ++ts;
- break;
- case '<':
- if (!__Pyx_Is_Little_Endian()) {
- PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler");
- return NULL;
- }
- ctx->new_packmode = '=';
- ++ts;
- break;
- case '>':
- case '!':
- if (__Pyx_Is_Little_Endian()) {
- PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler");
- return NULL;
- }
- ctx->new_packmode = '=';
- ++ts;
- break;
- case '=':
- case '@':
- case '^':
- ctx->new_packmode = *ts++;
- break;
- case 'T':
- {
- const char* ts_after_sub;
- size_t i, struct_count = ctx->new_count;
- size_t struct_alignment = ctx->struct_alignment;
- ctx->new_count = 1;
- ++ts;
- if (*ts != '{') {
- PyErr_SetString(PyExc_ValueError, "Buffer acquisition: Expected '{' after 'T'");
- return NULL;
- }
- if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;
- ctx->enc_type = 0;
- ctx->enc_count = 0;
- ctx->struct_alignment = 0;
- ++ts;
- ts_after_sub = ts;
- for (i = 0; i != struct_count; ++i) {
- ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts);
- if (!ts_after_sub) return NULL;
- }
- ts = ts_after_sub;
- if 
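- /* The 'T' case above is the nested-struct path of PEP 3118: after
-    consuming "T{", __Pyx_BufFmt_CheckString recurses over the embedded
-    format, and for a repeat count N (as in "NT{...}") the same substring
-    is re-parsed N times so the offset bookkeeping advances once per
-    repetition before the '}' handling restores the outer alignment. */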
(struct_alignment) ctx->struct_alignment = struct_alignment; - } - break; - case '}': - { - size_t alignment = ctx->struct_alignment; - ++ts; - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - if (alignment && ctx->fmt_offset % alignment) { - ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment); - } - } - return ts; - case 'x': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->fmt_offset += ctx->new_count; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->enc_packmode = ctx->new_packmode; - ++ts; - break; - case 'Z': - got_Z = 1; - ++ts; - if (*ts != 'f' && *ts != 'd' && *ts != 'g') { - __Pyx_BufFmt_RaiseUnexpectedChar('Z'); - return NULL; - } - CYTHON_FALLTHROUGH; - case '?': case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I': - case 'l': case 'L': case 'q': case 'Q': - case 'f': case 'd': case 'g': - case 'O': case 'p': - if ((ctx->enc_type == *ts) && (got_Z == ctx->is_complex) && - (ctx->enc_packmode == ctx->new_packmode) && (!ctx->is_valid_array)) { - ctx->enc_count += ctx->new_count; - ctx->new_count = 1; - got_Z = 0; - ++ts; - break; - } - CYTHON_FALLTHROUGH; - case 's': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_count = ctx->new_count; - ctx->enc_packmode = ctx->new_packmode; - ctx->enc_type = *ts; - ctx->is_complex = got_Z; - ++ts; - ctx->new_count = 1; - got_Z = 0; - break; - case ':': - ++ts; - while(*ts != ':') ++ts; - ++ts; - break; - case '(': - if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL; - break; - default: - { - int number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - ctx->new_count = (size_t)number; - } - } - } -} - -/* TypeInfoCompare */ - static int -__pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b) -{ - int i; - if (!a || !b) - return 0; - if (a == b) - return 1; - if (a->size != b->size || a->typegroup != b->typegroup || - a->is_unsigned != b->is_unsigned || a->ndim != b->ndim) { - if (a->typegroup == 'H' || b->typegroup == 'H') { - return a->size == b->size; - } else { - return 0; - } - } - if (a->ndim) { - for (i = 0; i < a->ndim; i++) - if (a->arraysize[i] != b->arraysize[i]) - return 0; - } - if (a->typegroup == 'S') { - if (a->flags != b->flags) - return 0; - if (a->fields || b->fields) { - if (!(a->fields && b->fields)) - return 0; - for (i = 0; a->fields[i].type && b->fields[i].type; i++) { - __Pyx_StructField *field_a = a->fields + i; - __Pyx_StructField *field_b = b->fields + i; - if (field_a->offset != field_b->offset || - !__pyx_typeinfo_cmp(field_a->type, field_b->type)) - return 0; - } - return !a->fields[i].type && !b->fields[i].type; - } - } - return 1; -} - -/* MemviewSliceValidateAndInit */ - static int -__pyx_check_strides(Py_buffer *buf, int dim, int ndim, int spec) -{ - if (buf->shape[dim] <= 1) - return 1; - if (buf->strides) { - if (spec & __Pyx_MEMVIEW_CONTIG) { - if (spec & (__Pyx_MEMVIEW_PTR|__Pyx_MEMVIEW_FULL)) { - if (unlikely(buf->strides[dim] != sizeof(void *))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly contiguous " - "in dimension %d.", dim); - goto fail; - } - } else if (unlikely(buf->strides[dim] != buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_FOLLOW) { - Py_ssize_t stride = buf->strides[dim]; - if (stride < 0) - stride = -stride; - if (unlikely(stride < buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and 
memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - } else { - if (unlikely(spec & __Pyx_MEMVIEW_CONTIG && dim != ndim - 1)) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not contiguous in " - "dimension %d", dim); - goto fail; - } else if (unlikely(spec & (__Pyx_MEMVIEW_PTR))) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not indirect in " - "dimension %d", dim); - goto fail; - } else if (unlikely(buf->suboffsets)) { - PyErr_SetString(PyExc_ValueError, - "Buffer exposes suboffsets but no strides"); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_check_suboffsets(Py_buffer *buf, int dim, CYTHON_UNUSED int ndim, int spec) -{ - if (spec & __Pyx_MEMVIEW_DIRECT) { - if (unlikely(buf->suboffsets && buf->suboffsets[dim] >= 0)) { - PyErr_Format(PyExc_ValueError, - "Buffer not compatible with direct access " - "in dimension %d.", dim); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_PTR) { - if (unlikely(!buf->suboffsets || (buf->suboffsets[dim] < 0))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly accessible " - "in dimension %d.", dim); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_verify_contig(Py_buffer *buf, int ndim, int c_or_f_flag) -{ - int i; - if (c_or_f_flag & __Pyx_IS_F_CONTIG) { - Py_ssize_t stride = 1; - for (i = 0; i < ndim; i++) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not fortran contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } else if (c_or_f_flag & __Pyx_IS_C_CONTIG) { - Py_ssize_t stride = 1; - for (i = ndim - 1; i >- 1; i--) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not C contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } - return 1; -fail: - return 0; -} -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj) -{ - struct __pyx_memoryview_obj *memview, *new_memview; - __Pyx_RefNannyDeclarations - Py_buffer *buf; - int i, spec = 0, retval = -1; - __Pyx_BufFmt_Context ctx; - int from_memoryview = __pyx_memoryview_check(original_obj); - __Pyx_RefNannySetupContext("ValidateAndInit_memviewslice", 0); - if (from_memoryview && __pyx_typeinfo_cmp(dtype, ((struct __pyx_memoryview_obj *) - original_obj)->typeinfo)) { - memview = (struct __pyx_memoryview_obj *) original_obj; - new_memview = NULL; - } else { - memview = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - original_obj, buf_flags, 0, dtype); - new_memview = memview; - if (unlikely(!memview)) - goto fail; - } - buf = &memview->view; - if (unlikely(buf->ndim != ndim)) { - PyErr_Format(PyExc_ValueError, - "Buffer has wrong number of dimensions (expected %d, got %d)", - ndim, buf->ndim); - goto fail; - } - if (new_memview) { - __Pyx_BufFmt_Init(&ctx, stack, dtype); - if (unlikely(!__Pyx_BufFmt_CheckString(&ctx, buf->format))) goto fail; - } - if (unlikely((unsigned) buf->itemsize != dtype->size)) { - PyErr_Format(PyExc_ValueError, - "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "u byte%s) " - "does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "u byte%s)", - buf->itemsize, - (buf->itemsize > 1) ? "s" : "", - dtype->name, - dtype->size, - (dtype->size > 1) ? 
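- /* Validation order in __Pyx_ValidateAndInit_memviewslice, for
-    orientation: ndim must match the buffer, a newly created memoryview
-    has its exporter format string checked with __Pyx_BufFmt_CheckString,
-    Py_buffer.itemsize must equal the compile-time dtype size (the error
-    being formatted right here), and non-empty buffers then get
-    per-dimension stride/suboffset checks plus an overall C/Fortran
-    contiguity pass. */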
"s" : ""); - goto fail; - } - if (buf->len > 0) { - for (i = 0; i < ndim; i++) { - spec = axes_specs[i]; - if (unlikely(!__pyx_check_strides(buf, i, ndim, spec))) - goto fail; - if (unlikely(!__pyx_check_suboffsets(buf, i, ndim, spec))) - goto fail; - } - if (unlikely(buf->strides && !__pyx_verify_contig(buf, ndim, c_or_f_flag))) - goto fail; - } - if (unlikely(__Pyx_init_memviewslice(memview, ndim, memviewslice, - new_memview != NULL) == -1)) { - goto fail; - } - retval = 0; - goto no_fail; -fail: - Py_XDECREF(new_memview); - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_float, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 1, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* CIntFromPyVerify */ - #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < 
sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* MemviewSliceCopyTemplate */ - static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object) -{ - __Pyx_RefNannyDeclarations - int i; - __Pyx_memviewslice new_mvs = { 0, 0, { 0 }, { 0 }, { 0 } }; - struct __pyx_memoryview_obj *from_memview = from_mvs->memview; - Py_buffer *buf = &from_memview->view; - PyObject *shape_tuple = NULL; - PyObject *temp_int = NULL; - struct __pyx_array_obj *array_obj = NULL; - struct __pyx_memoryview_obj *memview_obj = NULL; - __Pyx_RefNannySetupContext("__pyx_memoryview_copy_new_contig", 0); - for (i = 0; i < ndim; i++) { - if (unlikely(from_mvs->suboffsets[i] >= 0)) { - PyErr_Format(PyExc_ValueError, "Cannot copy memoryview slice with " - "indirect dimensions (axis %d)", i); - goto fail; - } - } - shape_tuple = PyTuple_New(ndim); - if (unlikely(!shape_tuple)) { - goto fail; - } - __Pyx_GOTREF(shape_tuple); - for(i = 0; i < ndim; i++) { - temp_int = PyInt_FromSsize_t(from_mvs->shape[i]); - if(unlikely(!temp_int)) { - goto fail; - } else { - PyTuple_SET_ITEM(shape_tuple, i, temp_int); - temp_int = NULL; - } - } - array_obj = __pyx_array_new(shape_tuple, sizeof_dtype, buf->format, (char *) mode, NULL); - if (unlikely(!array_obj)) { - goto fail; - } - __Pyx_GOTREF(array_obj); - memview_obj = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - (PyObject *) array_obj, contig_flag, - dtype_is_object, - from_mvs->memview->typeinfo); - if (unlikely(!memview_obj)) - goto fail; - if (unlikely(__Pyx_init_memviewslice(memview_obj, ndim, &new_mvs, 1) < 0)) - goto fail; - if (unlikely(__pyx_memoryview_copy_contents(*from_mvs, new_mvs, ndim, ndim, - dtype_is_object) < 0)) - goto fail; - goto no_fail; -fail: - __Pyx_XDECREF(new_mvs.memview); - new_mvs.memview = NULL; - new_mvs.data = NULL; -no_fail: - __Pyx_XDECREF(shape_tuple); - __Pyx_XDECREF(temp_int); - __Pyx_XDECREF(array_obj); - __Pyx_RefNannyFinishContext(); - return new_mvs; -} - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(int) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(int) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(int) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(int), - little, !is_unsigned); - } -} - -/* CIntFromPy 
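-    strategy sketch for the converters below (as read from the generated
-    code): small exact PyLongs are decoded digit-by-digit through the
-    CYTHON_USE_PYLONG_INTERNALS fast paths, values that still fit a C
-    integer go through PyLong_AsLong/PyLong_AsUnsignedLong and friends
-    under __PYX_VERIFY_RETURN_INT_EXC, and anything else is coerced with
-    __Pyx_PyNumber_IntOrLong and copied out via _PyLong_AsByteArray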
*/ - static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(int) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(int) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) - case -2: - if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - 
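- /* Digit fast path used in these switch cases: CPython stores a long's
-    magnitude as base 2**PyLong_SHIFT digits (15 or 30 bits each), with
-    the sign carried by Py_SIZE. Two digits therefore decode as
-        value = digits[0] | ((unsigned long)digits[1] << PyLong_SHIFT);
-    and the negative Py_SIZE cases (e.g. 'case -2' above) negate the
-    result. */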
case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } -#endif - if (sizeof(int) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - 
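- /* Fallback path just above: the object is first coerced with
-    __Pyx_PyNumber_IntOrLong (__int__/__index__), then _PyLong_AsByteArray
-    writes the value straight into the bytes of a native int, with host
-    endianness probed at run time from the first byte of the integer 1. */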
} -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); - } -} - -/* CIntFromPy */ - static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(long) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { - return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 4 
* PyLong_SHIFT) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(long) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) - case -2: - if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, 
(((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } -#endif - if (sizeof(long) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* CIntFromPy */ - static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const char neg_one = (char) -1, const_zero = (char) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(char) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(char, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (char) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (char) 0; - case 1: __PYX_VERIFY_RETURN_INT(char, digit, digits[0]) - case 2: - if (8 * sizeof(char) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 2 * PyLong_SHIFT) { - return (char) (((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(char) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 3 * PyLong_SHIFT) { - return (char) (((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } 
- break; - case 4: - if (8 * sizeof(char) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 4 * PyLong_SHIFT) { - return (char) (((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (char) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(char) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(char) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (char) 0; - case -1: __PYX_VERIFY_RETURN_INT(char, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(char, digit, +digits[0]) - case -2: - if (8 * sizeof(char) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(char) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - return (char) ((((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(char) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - return (char) ((((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 4 
* PyLong_SHIFT) { - return (char) (((char)-1)*(((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(char) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 4 * PyLong_SHIFT) { - return (char) ((((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - } -#endif - if (sizeof(char) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(char, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(char) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(char, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - char val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (char) -1; - } - } else { - char val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (char) -1; - val = __Pyx_PyInt_As_char(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to char"); - return (char) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to char"); - return (char) -1; -} - -/* CheckBinaryVersion */ - static int __Pyx_check_binary_version(void) { - char ctversion[5]; - int same=1, i, found_dot; - const char* rt_from_call = Py_GetVersion(); - PyOS_snprintf(ctversion, 5, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); - found_dot = 0; - for (i = 0; i < 4; i++) { - if (!ctversion[i]) { - same = (rt_from_call[i] < '0' || rt_from_call[i] > '9'); - break; - } - if (rt_from_call[i] != ctversion[i]) { - same = 0; - break; - } - } - if (!same) { - char rtversion[5] = {'\0'}; - char message[200]; - for (i=0; i<4; ++i) { - if (rt_from_call[i] == '.') { - if (found_dot) break; - found_dot = 1; - } else if (rt_from_call[i] < '0' || rt_from_call[i] > '9') { - break; - } - rtversion[i] = rt_from_call[i]; - } - PyOS_snprintf(message, sizeof(message), - "compiletime version %s of module '%.100s' " - "does not match runtime version %s", - ctversion, __Pyx_MODULE_NAME, rtversion); - return PyErr_WarnEx(NULL, message, 1); - } - return 0; -} - -/* InitStrings */ - static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION < 3 - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - #else - if (t->is_unicode | t->is_str) { - if (t->intern) { - *t->p = 
PyUnicode_InternFromString(t->s); - } else if (t->encoding) { - *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); - } else { - *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); - } - } else { - *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); - } - #endif - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - ++t; - } - return 0; -} - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED -static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type %.200s). 
" - "The ability to return an instance of a strict subclass of int " - "is deprecated, and may be removed in a future version of Python.", - Py_TYPE(result)->tp_name)) { - Py_DECREF(result); - return NULL; - } - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type %.200s)", - type_name, type_name, Py_TYPE(result)->tp_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)b)->ob_digit; - const Py_ssize_t size = Py_SIZE(b); - if (likely(__Pyx_sst_abs(size) <= 1)) { - ival = likely(size) ? 
digits[0] : 0; - if (size == -1) ival = -ival; - return ival; - } else { - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) { - if (sizeof(Py_hash_t) == sizeof(Py_ssize_t)) { - return (Py_hash_t) __Pyx_PyIndex_AsSsize_t(o); -#if PY_MAJOR_VERSION < 3 - } else if (likely(PyInt_CheckExact(o))) { - return PyInt_AS_LONG(o); -#endif - } else { - Py_ssize_t ival; - PyObject *x; - x = PyNumber_Index(o); - if (!x) return -1; - ival = PyInt_AsLong(x); - Py_DECREF(x); - return ival; - } -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -#endif /* Py_PYTHON_H */ diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/removeOverlaps.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/removeOverlaps.py deleted file mode 100644 index 624cd47b4076a95cbc7c2124550371f6ffa5ea37..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/removeOverlaps.py +++ /dev/null @@ -1,248 +0,0 @@ -""" Simplify TrueType glyphs by merging overlapping contours/components. 
- -Requires https://github.com/fonttools/skia-pathops -""" - -import itertools -import logging -from typing import Callable, Iterable, Optional, Mapping - -from fontTools.misc.roundTools import otRound -from fontTools.ttLib import ttFont -from fontTools.ttLib.tables import _g_l_y_f -from fontTools.ttLib.tables import _h_m_t_x -from fontTools.pens.ttGlyphPen import TTGlyphPen - -import pathops - - -__all__ = ["removeOverlaps"] - - -class RemoveOverlapsError(Exception): - pass - - -log = logging.getLogger("fontTools.ttLib.removeOverlaps") - -_TTGlyphMapping = Mapping[str, ttFont._TTGlyph] - - -def skPathFromGlyph(glyphName: str, glyphSet: _TTGlyphMapping) -> pathops.Path: - path = pathops.Path() - pathPen = path.getPen(glyphSet=glyphSet) - glyphSet[glyphName].draw(pathPen) - return path - - -def skPathFromGlyphComponent( - component: _g_l_y_f.GlyphComponent, glyphSet: _TTGlyphMapping -): - baseGlyphName, transformation = component.getComponentInfo() - path = skPathFromGlyph(baseGlyphName, glyphSet) - return path.transform(*transformation) - - -def componentsOverlap(glyph: _g_l_y_f.Glyph, glyphSet: _TTGlyphMapping) -> bool: - if not glyph.isComposite(): - raise ValueError("This method only works with TrueType composite glyphs") - if len(glyph.components) < 2: - return False # single component, no overlaps - - component_paths = {} - - def _get_nth_component_path(index: int) -> pathops.Path: - if index not in component_paths: - component_paths[index] = skPathFromGlyphComponent( - glyph.components[index], glyphSet - ) - return component_paths[index] - - return any( - pathops.op( - _get_nth_component_path(i), - _get_nth_component_path(j), - pathops.PathOp.INTERSECTION, - fix_winding=False, - keep_starting_points=False, - ) - for i, j in itertools.combinations(range(len(glyph.components)), 2) - ) - - -def ttfGlyphFromSkPath(path: pathops.Path) -> _g_l_y_f.Glyph: - # Skia paths have no 'components', no need for glyphSet - ttPen = TTGlyphPen(glyphSet=None) - path.draw(ttPen) - glyph = ttPen.glyph() - assert not glyph.isComposite() - # compute glyph.xMin (glyfTable parameter unused for non composites) - glyph.recalcBounds(glyfTable=None) - return glyph - - -def _round_path( - path: pathops.Path, round: Callable[[float], float] = otRound -) -> pathops.Path: - rounded_path = pathops.Path() - for verb, points in path: - rounded_path.add(verb, *((round(p[0]), round(p[1])) for p in points)) - return rounded_path - - -def _simplify(path: pathops.Path, debugGlyphName: str) -> pathops.Path: - # skia-pathops has a bug where it sometimes fails to simplify paths when there - # are float coordinates and control points are very close to one another. - # Rounding coordinates to integers works around the bug. - # Since we are going to round glyf coordinates later on anyway, here it is - # ok(-ish) to also round before simplify. Better than failing the whole process - # for the entire font. 
- # https://bugs.chromium.org/p/skia/issues/detail?id=11958 - # https://github.com/google/fonts/issues/3365 - # TODO(anthrotype): remove once this Skia bug is fixed - try: - return pathops.simplify(path, clockwise=path.clockwise) - except pathops.PathOpsError: - pass - - path = _round_path(path) - try: - path = pathops.simplify(path, clockwise=path.clockwise) - log.debug( - "skia-pathops failed to simplify '%s' with float coordinates, " - "but succeded using rounded integer coordinates", - debugGlyphName, - ) - return path - except pathops.PathOpsError as e: - if log.isEnabledFor(logging.DEBUG): - path.dump() - raise RemoveOverlapsError( - f"Failed to remove overlaps from glyph {debugGlyphName!r}" - ) from e - - raise AssertionError("Unreachable") - - -def removeTTGlyphOverlaps( - glyphName: str, - glyphSet: _TTGlyphMapping, - glyfTable: _g_l_y_f.table__g_l_y_f, - hmtxTable: _h_m_t_x.table__h_m_t_x, - removeHinting: bool = True, -) -> bool: - glyph = glyfTable[glyphName] - # decompose composite glyphs only if components overlap each other - if ( - glyph.numberOfContours > 0 - or glyph.isComposite() - and componentsOverlap(glyph, glyphSet) - ): - path = skPathFromGlyph(glyphName, glyphSet) - - # remove overlaps - path2 = _simplify(path, glyphName) - - # replace TTGlyph if simplified path is different (ignoring contour order) - if {tuple(c) for c in path.contours} != {tuple(c) for c in path2.contours}: - glyfTable[glyphName] = glyph = ttfGlyphFromSkPath(path2) - # simplified glyph is always unhinted - assert not glyph.program - # also ensure hmtx LSB == glyph.xMin so glyph origin is at x=0 - width, lsb = hmtxTable[glyphName] - if lsb != glyph.xMin: - hmtxTable[glyphName] = (width, glyph.xMin) - return True - - if removeHinting: - glyph.removeHinting() - return False - - -def removeOverlaps( - font: ttFont.TTFont, - glyphNames: Optional[Iterable[str]] = None, - removeHinting: bool = True, - ignoreErrors=False, -) -> None: - """Simplify glyphs in TTFont by merging overlapping contours. - - Overlapping components are first decomposed to simple contours, then merged. - - Currently this only works with TrueType fonts with 'glyf' table. - Raises NotImplementedError if 'glyf' table is absent. - - Note that removing overlaps invalidates the hinting. By default we drop hinting - from all glyphs whether or not overlaps are removed from a given one, as it would - look weird if only some glyphs are left (un)hinted. - - Args: - font: input TTFont object, modified in place. - glyphNames: optional iterable of glyph names (str) to remove overlaps from. - By default, all glyphs in the font are processed. - removeHinting (bool): set to False to keep hinting for unmodified glyphs. - ignoreErrors (bool): set to True to ignore errors while removing overlaps, - thus keeping the tricky glyphs unchanged (fonttools/fonttools#2363). 
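    Example:
        A minimal sketch of in-place use, mirroring what ``main()`` below does.
        ``"Input.ttf"`` and ``"Output.ttf"`` are placeholder paths, not files
        shipped with fontTools::

            from fontTools.ttLib import TTFont
            from fontTools.ttLib.removeOverlaps import removeOverlaps

            font = TTFont("Input.ttf")
            removeOverlaps(font)   # decompose overlapping components, merge contours
            font.save("Output.ttf")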
- """ - try: - glyfTable = font["glyf"] - except KeyError: - raise NotImplementedError("removeOverlaps currently only works with TTFs") - - hmtxTable = font["hmtx"] - # wraps the underlying glyf Glyphs, takes care of interfacing with drawing pens - glyphSet = font.getGlyphSet() - - if glyphNames is None: - glyphNames = font.getGlyphOrder() - - # process all simple glyphs first, then composites with increasing component depth, - # so that by the time we test for component intersections the respective base glyphs - # have already been simplified - glyphNames = sorted( - glyphNames, - key=lambda name: ( - glyfTable[name].getCompositeMaxpValues(glyfTable).maxComponentDepth - if glyfTable[name].isComposite() - else 0, - name, - ), - ) - modified = set() - for glyphName in glyphNames: - try: - if removeTTGlyphOverlaps( - glyphName, glyphSet, glyfTable, hmtxTable, removeHinting - ): - modified.add(glyphName) - except RemoveOverlapsError: - if not ignoreErrors: - raise - log.error("Failed to remove overlaps for '%s'", glyphName) - - log.debug("Removed overlaps for %s glyphs:\n%s", len(modified), " ".join(modified)) - - -def main(args=None): - import sys - - if args is None: - args = sys.argv[1:] - - if len(args) < 2: - print( - f"usage: fonttools ttLib.removeOverlaps INPUT.ttf OUTPUT.ttf [GLYPHS ...]" - ) - sys.exit(1) - - src = args[0] - dst = args[1] - glyphNames = args[2:] or None - - with ttFont.TTFont(src) as f: - removeOverlaps(f, glyphNames) - f.save(dst) - - -if __name__ == "__main__": - main() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_h_m_t_x.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_h_m_t_x.py deleted file mode 100644 index 2dafe617a061880d93ab91b981ef26a09365728e..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_h_m_t_x.py +++ /dev/null @@ -1,152 +0,0 @@ -from fontTools.misc.roundTools import otRound -from fontTools import ttLib -from fontTools.misc.textTools import safeEval -from . import DefaultTable -import sys -import struct -import array -import logging - - -log = logging.getLogger(__name__) - - -class table__h_m_t_x(DefaultTable.DefaultTable): - - headerTag = "hhea" - advanceName = "width" - sideBearingName = "lsb" - numberOfMetricsName = "numberOfHMetrics" - longMetricFormat = "Hh" - - def decompile(self, data, ttFont): - numGlyphs = ttFont["maxp"].numGlyphs - headerTable = ttFont.get(self.headerTag) - if headerTable is not None: - numberOfMetrics = int(getattr(headerTable, self.numberOfMetricsName)) - else: - numberOfMetrics = numGlyphs - if numberOfMetrics > numGlyphs: - log.warning( - "The %s.%s exceeds the maxp.numGlyphs" - % (self.headerTag, self.numberOfMetricsName) - ) - numberOfMetrics = numGlyphs - if len(data) < 4 * numberOfMetrics: - raise ttLib.TTLibError("not enough '%s' table data" % self.tableTag) - # Note: advanceWidth is unsigned, but some font editors might - # read/write as signed. We can't be sure whether it was a mistake - # or not, so we read as unsigned but also issue a warning... 
- metricsFmt = ">" + self.longMetricFormat * numberOfMetrics - metrics = struct.unpack(metricsFmt, data[: 4 * numberOfMetrics]) - data = data[4 * numberOfMetrics :] - numberOfSideBearings = numGlyphs - numberOfMetrics - sideBearings = array.array("h", data[: 2 * numberOfSideBearings]) - data = data[2 * numberOfSideBearings :] - - if sys.byteorder != "big": - sideBearings.byteswap() - if data: - log.warning("too much '%s' table data" % self.tableTag) - self.metrics = {} - glyphOrder = ttFont.getGlyphOrder() - for i in range(numberOfMetrics): - glyphName = glyphOrder[i] - advanceWidth, lsb = metrics[i * 2 : i * 2 + 2] - if advanceWidth > 32767: - log.warning( - "Glyph %r has a huge advance %s (%d); is it intentional or " - "an (invalid) negative value?", - glyphName, - self.advanceName, - advanceWidth, - ) - self.metrics[glyphName] = (advanceWidth, lsb) - lastAdvance = metrics[-2] - for i in range(numberOfSideBearings): - glyphName = glyphOrder[i + numberOfMetrics] - self.metrics[glyphName] = (lastAdvance, sideBearings[i]) - - def compile(self, ttFont): - metrics = [] - hasNegativeAdvances = False - for glyphName in ttFont.getGlyphOrder(): - advanceWidth, sideBearing = self.metrics[glyphName] - if advanceWidth < 0: - log.error( - "Glyph %r has negative advance %s" % (glyphName, self.advanceName) - ) - hasNegativeAdvances = True - metrics.append([advanceWidth, sideBearing]) - - headerTable = ttFont.get(self.headerTag) - if headerTable is not None: - lastAdvance = metrics[-1][0] - lastIndex = len(metrics) - while metrics[lastIndex - 2][0] == lastAdvance: - lastIndex -= 1 - if lastIndex <= 1: - # all advances are equal - lastIndex = 1 - break - additionalMetrics = metrics[lastIndex:] - additionalMetrics = [otRound(sb) for _, sb in additionalMetrics] - metrics = metrics[:lastIndex] - numberOfMetrics = len(metrics) - setattr(headerTable, self.numberOfMetricsName, numberOfMetrics) - else: - # no hhea/vhea, can't store numberOfMetrics; assume == numGlyphs - numberOfMetrics = ttFont["maxp"].numGlyphs - additionalMetrics = [] - - allMetrics = [] - for advance, sb in metrics: - allMetrics.extend([otRound(advance), otRound(sb)]) - metricsFmt = ">" + self.longMetricFormat * numberOfMetrics - try: - data = struct.pack(metricsFmt, *allMetrics) - except struct.error as e: - if "out of range" in str(e) and hasNegativeAdvances: - raise ttLib.TTLibError( - "'%s' table can't contain negative advance %ss" - % (self.tableTag, self.advanceName) - ) - else: - raise - additionalMetrics = array.array("h", additionalMetrics) - if sys.byteorder != "big": - additionalMetrics.byteswap() - data = data + additionalMetrics.tobytes() - return data - - def toXML(self, writer, ttFont): - names = sorted(self.metrics.keys()) - for glyphName in names: - advance, sb = self.metrics[glyphName] - writer.simpletag( - "mtx", - [ - ("name", glyphName), - (self.advanceName, advance), - (self.sideBearingName, sb), - ], - ) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if not hasattr(self, "metrics"): - self.metrics = {} - if name == "mtx": - self.metrics[attrs["name"]] = ( - safeEval(attrs[self.advanceName]), - safeEval(attrs[self.sideBearingName]), - ) - - def __delitem__(self, glyphName): - del self.metrics[glyphName] - - def __getitem__(self, glyphName): - return self.metrics[glyphName] - - def __setitem__(self, glyphName, advance_sb_pair): - self.metrics[glyphName] = tuple(advance_sb_pair) diff --git 
a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-6c26d1f1.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-6c26d1f1.js deleted file mode 100644 index 502e1329c4e2fbac916924187acb5f50fd322832..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-6c26d1f1.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as z,e as E,s as K,a9 as L,N as p,P,O as B,K as q,L as k,U as j,p as w,M as v,Q,R,ab as M,ac as N,ad as O,z as g,v as b,A,k as C,o as h,x as S,E as T,ae as U,q as D,r as F}from"./index-3370be2a.js";import{B as G}from"./Button-89624748.js";import{C as H}from"./Column-61895400.js";/* empty css */function I(a){let e,l,t,s,o,u,n,f,d,_;const r=a[3].default,c=L(r,a,a[2],null);return{c(){e=p("div"),l=p("span"),t=P(a[1]),s=B(),o=p("span"),o.textContent="▼",u=B(),n=p("div"),c&&c.c(),q(l,"class","svelte-s1r2yt"),q(o,"class","icon svelte-s1r2yt"),k(o,"transform",a[0]?"rotate(0)":"rotate(90deg)"),q(e,"class","label-wrap svelte-s1r2yt"),j(e,"open",a[0]),k(n,"display",a[0]?"block":"none")},m(i,m){w(i,e,m),v(e,l),v(l,t),v(e,s),v(e,o),w(i,u,m),w(i,n,m),c&&c.m(n,null),f=!0,d||(_=Q(e,"click",a[4]),d=!0)},p(i,[m]){(!f||m&2)&&R(t,i[1]),m&1&&k(o,"transform",i[0]?"rotate(0)":"rotate(90deg)"),(!f||m&1)&&j(e,"open",i[0]),c&&c.p&&(!f||m&4)&&M(c,r,i,i[2],f?O(r,i[2],m,null):N(i[2]),null),m&1&&k(n,"display",i[0]?"block":"none")},i(i){f||(g(c,i),f=!0)},o(i){b(c,i),f=!1},d(i){i&&(A(e),A(u),A(n)),c&&c.d(i),d=!1,_()}}}function J(a,e,l){let{$$slots:t={},$$scope:s}=e,{label:o=""}=e,{open:u=!0}=e;const n=()=>l(0,u=!u);return a.$$set=f=>{"label"in f&&l(1,o=f.label),"open"in f&&l(0,u=f.open),"$$scope"in f&&l(2,s=f.$$scope)},[u,o,s,t,n]}class V extends z{constructor(e){super(),E(this,e,J,I,K,{label:1,open:0})}}function W(a){let e;const l=a[6].default,t=L(l,a,a[7],null);return{c(){t&&t.c()},m(s,o){t&&t.m(s,o),e=!0},p(s,o){t&&t.p&&(!e||o&128)&&M(t,l,s,s[7],e?O(l,s[7],o,null):N(s[7]),null)},i(s){e||(g(t,s),e=!0)},o(s){b(t,s),e=!1},d(s){t&&t.d(s)}}}function X(a){let e,l;return e=new H({props:{$$slots:{default:[W]},$$scope:{ctx:a}}}),{c(){C(e.$$.fragment)},m(t,s){h(e,t,s),l=!0},p(t,s){const o={};s&128&&(o.$$scope={dirty:s,ctx:t}),e.$set(o)},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){b(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function Y(a){let e,l,t,s;const o=[a[5]];let u={};for(let n=0;n{"label"in r&&l(0,o=r.label),"elem_id"in r&&l(1,u=r.elem_id),"elem_classes"in r&&l(2,n=r.elem_classes),"visible"in r&&l(3,f=r.visible),"open"in r&&l(4,d=r.open),"loading_status"in r&&l(5,_=r.loading_status),"$$scope"in r&&l(7,s=r.$$scope)},[o,u,n,f,d,_,t,s]}class y extends z{constructor(e){super(),E(this,e,$,Z,K,{label:0,elem_id:1,elem_classes:2,visible:3,open:4,loading_status:5})}}const le=y,ne=["static"];export{le as Component,ne as modes}; -//# sourceMappingURL=index-6c26d1f1.js.map diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/soft.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/soft.py deleted file mode 100644 index a3365267689769caf3d76926cd2cd81eb7986f75..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/soft.py +++ /dev/null @@ -1,111 +0,0 @@ -from __future__ import annotations - -from typing import Iterable - -from gradio.themes.base import Base -from gradio.themes.utils import colors, fonts, sizes - - -class Soft(Base): - def __init__( 
- self, - *, - primary_hue: colors.Color | str = colors.indigo, - secondary_hue: colors.Color | str = colors.indigo, - neutral_hue: colors.Color | str = colors.gray, - spacing_size: sizes.Size | str = sizes.spacing_md, - radius_size: sizes.Size | str = sizes.radius_md, - text_size: sizes.Size | str = sizes.text_md, - font: fonts.Font - | str - | Iterable[fonts.Font | str] = ( - fonts.GoogleFont("Montserrat"), - "ui-sans-serif", - "system-ui", - "sans-serif", - ), - font_mono: fonts.Font - | str - | Iterable[fonts.Font | str] = ( - fonts.GoogleFont("IBM Plex Mono"), - "ui-monospace", - "Consolas", - "monospace", - ), - ): - super().__init__( - primary_hue=primary_hue, - secondary_hue=secondary_hue, - neutral_hue=neutral_hue, - spacing_size=spacing_size, - radius_size=radius_size, - text_size=text_size, - font=font, - font_mono=font_mono, - ) - self.name = "soft" - super().set( - # Colors - background_fill_primary="*neutral_50", - slider_color="*primary_500", - slider_color_dark="*primary_600", - # Shadows - shadow_drop="0 1px 4px 0 rgb(0 0 0 / 0.1)", - shadow_drop_lg="0 2px 5px 0 rgb(0 0 0 / 0.1)", - # Block Labels - block_background_fill="white", - block_label_padding="*spacing_sm *spacing_md", - block_label_background_fill="*primary_100", - block_label_background_fill_dark="*primary_600", - block_label_radius="*radius_md", - block_label_text_size="*text_md", - block_label_text_weight="600", - block_label_text_color="*primary_500", - block_label_text_color_dark="white", - block_title_radius="*block_label_radius", - block_title_padding="*block_label_padding", - block_title_background_fill="*block_label_background_fill", - block_title_text_weight="600", - block_title_text_color="*primary_500", - block_title_text_color_dark="white", - block_label_margin="*spacing_md", - # Inputs - input_background_fill="white", - input_border_color="*neutral_50", - input_shadow="*shadow_drop", - input_shadow_focus="*shadow_drop_lg", - checkbox_shadow="none", - # Buttons - shadow_spread="6px", - button_shadow="*shadow_drop_lg", - button_shadow_hover="*shadow_drop_lg", - checkbox_label_shadow="*shadow_drop_lg", - button_shadow_active="*shadow_inset", - button_primary_background_fill="*primary_500", - button_primary_background_fill_hover="*primary_400", - button_primary_background_fill_hover_dark="*primary_500", - button_primary_text_color="white", - button_secondary_background_fill="white", - button_secondary_background_fill_hover="*neutral_100", - button_secondary_background_fill_hover_dark="*primary_500", - button_secondary_text_color="*neutral_800", - button_cancel_background_fill="*button_secondary_background_fill", - button_cancel_background_fill_hover="*button_secondary_background_fill_hover", - button_cancel_background_fill_hover_dark="*button_secondary_background_fill_hover", - button_cancel_text_color="*button_secondary_text_color", - checkbox_label_background_fill_selected="*primary_500", - checkbox_label_background_fill_selected_dark="*primary_600", - checkbox_border_width="1px", - checkbox_border_color="*neutral_100", - checkbox_border_color_dark="*neutral_600", - checkbox_background_color_selected="*primary_600", - checkbox_background_color_selected_dark="*primary_700", - checkbox_border_color_focus="*primary_500", - checkbox_border_color_focus_dark="*primary_600", - checkbox_border_color_selected="*primary_600", - checkbox_border_color_selected_dark="*primary_700", - checkbox_label_text_color_selected="white", - # Borders - block_border_width="0px", - panel_border_width="1px", - ) diff --git 
a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_http.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_http.py deleted file mode 100644 index aa7c181a4cb457249aaa7bd8980b9dbd4c7baf9d..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_http.py +++ /dev/null @@ -1,281 +0,0 @@ -# coding=utf-8 -# Copyright 2022-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Contains utilities to handle HTTP requests in Huggingface Hub.""" -import io -import os -import threading -import time -import uuid -from functools import lru_cache -from http import HTTPStatus -from typing import Callable, Tuple, Type, Union - -import requests -from requests import Response -from requests.adapters import HTTPAdapter -from requests.exceptions import ProxyError, Timeout -from requests.models import PreparedRequest - -from . import logging -from ._typing import HTTP_METHOD_T - - -logger = logging.get_logger(__name__) - -# Both headers are used by the Hub to debug failed requests. -# `X_AMZN_TRACE_ID` is better as it also works to debug on Cloudfront and ALB. -# If `X_AMZN_TRACE_ID` is set, the Hub will use it as well. -X_AMZN_TRACE_ID = "X-Amzn-Trace-Id" -X_REQUEST_ID = "x-request-id" - - -class UniqueRequestIdAdapter(HTTPAdapter): - X_AMZN_TRACE_ID = "X-Amzn-Trace-Id" - - def add_headers(self, request, **kwargs): - super().add_headers(request, **kwargs) - - # Add random request ID => easier for server-side debug - if X_AMZN_TRACE_ID not in request.headers: - request.headers[X_AMZN_TRACE_ID] = request.headers.get(X_REQUEST_ID) or str(uuid.uuid4()) - - # Add debug log - has_token = str(request.headers.get("authorization", "")).startswith("Bearer hf_") - logger.debug( - f"Request {request.headers[X_AMZN_TRACE_ID]}: {request.method} {request.url} (authenticated: {has_token})" - ) - - def send(self, request: PreparedRequest, *args, **kwargs) -> Response: - """Catch any RequestException to append request id to the error message for debugging.""" - try: - return super().send(request, *args, **kwargs) - except requests.RequestException as e: - request_id = request.headers.get(X_AMZN_TRACE_ID) - if request_id is not None: - # Taken from https://stackoverflow.com/a/58270258 - e.args = (*e.args, f"(Request ID: {request_id})") - raise - - -def _default_backend_factory() -> requests.Session: - session = requests.Session() - session.mount("http://", UniqueRequestIdAdapter()) - session.mount("https://", UniqueRequestIdAdapter()) - return session - - -BACKEND_FACTORY_T = Callable[[], requests.Session] -_GLOBAL_BACKEND_FACTORY: BACKEND_FACTORY_T = _default_backend_factory - - -def configure_http_backend(backend_factory: BACKEND_FACTORY_T = _default_backend_factory) -> None: - """ - Configure the HTTP backend by providing a `backend_factory`. Any HTTP calls made by `huggingface_hub` will use a - Session object instantiated by this factory. 
This can be useful if you are running your scripts in a specific - environment requiring custom configuration (e.g. custom proxy or certifications). - - Use [`get_session`] to get a configured Session. Since `requests.Session` is not guaranteed to be thread-safe, - `huggingface_hub` creates 1 Session instance per thread. They are all instantiated using the same `backend_factory` - set in [`configure_http_backend`]. A LRU cache is used to cache the created sessions (and connections) between - calls. Max size is 128 to avoid memory leaks if thousands of threads are spawned. - - See [this issue](https://github.com/psf/requests/issues/2766) to know more about thread-safety in `requests`. - - Example: - ```py - import requests - from huggingface_hub import configure_http_backend, get_session - - # Create a factory function that returns a Session with configured proxies - def backend_factory() -> requests.Session: - session = requests.Session() - session.proxies = {"http": "http://10.10.1.10:3128", "https": "https://10.10.1.11:1080"} - return session - - # Set it as the default session factory - configure_http_backend(backend_factory=backend_factory) - - # In practice, this is mostly done internally in `huggingface_hub` - session = get_session() - ``` - """ - global _GLOBAL_BACKEND_FACTORY - _GLOBAL_BACKEND_FACTORY = backend_factory - _get_session_from_cache.cache_clear() - - -def get_session() -> requests.Session: - """ - Get a `requests.Session` object, using the session factory from the user. - - Use [`get_session`] to get a configured Session. Since `requests.Session` is not guaranteed to be thread-safe, - `huggingface_hub` creates 1 Session instance per thread. They are all instantiated using the same `backend_factory` - set in [`configure_http_backend`]. A LRU cache is used to cache the created sessions (and connections) between - calls. Max size is 128 to avoid memory leaks if thousands of threads are spawned. - - See [this issue](https://github.com/psf/requests/issues/2766) to know more about thread-safety in `requests`. - - Example: - ```py - import requests - from huggingface_hub import configure_http_backend, get_session - - # Create a factory function that returns a Session with configured proxies - def backend_factory() -> requests.Session: - session = requests.Session() - session.proxies = {"http": "http://10.10.1.10:3128", "https": "https://10.10.1.11:1080"} - return session - - # Set it as the default session factory - configure_http_backend(backend_factory=backend_factory) - - # In practice, this is mostly done internally in `huggingface_hub` - session = get_session() - ``` - """ - return _get_session_from_cache(process_id=os.getpid(), thread_id=threading.get_ident()) - - -@lru_cache(maxsize=128) # default value for Python>=3.8. Let's keep the same for Python3.7 -def _get_session_from_cache(process_id: int, thread_id: int) -> requests.Session: - """ - Create a new session per thread using global factory. Using LRU cache (maxsize 128) to avoid memory leaks when - using thousands of threads. Cache is cleared when `configure_http_backend` is called. 
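    Example (illustrative only; demonstrates that each thread resolves to its own
    cached session, all built by the same factory):
    ```py
    import threading

    from huggingface_hub import get_session

    def worker():
        session = get_session()  # one Session per (process id, thread id)
        print(threading.get_ident(), id(session))

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    ```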
- """ - return _GLOBAL_BACKEND_FACTORY() - - -def http_backoff( - method: HTTP_METHOD_T, - url: str, - *, - max_retries: int = 5, - base_wait_time: float = 1, - max_wait_time: float = 8, - retry_on_exceptions: Union[Type[Exception], Tuple[Type[Exception], ...]] = ( - Timeout, - ProxyError, - ), - retry_on_status_codes: Union[int, Tuple[int, ...]] = HTTPStatus.SERVICE_UNAVAILABLE, - **kwargs, -) -> Response: - """Wrapper around requests to retry calls on an endpoint, with exponential backoff. - - Endpoint call is retried on exceptions (ex: connection timeout, proxy error,...) - and/or on specific status codes (ex: service unavailable). If the call failed more - than `max_retries`, the exception is thrown or `raise_for_status` is called on the - response object. - - Re-implement mechanisms from the `backoff` library to avoid adding an external - dependencies to `hugging_face_hub`. See https://github.com/litl/backoff. - - Args: - method (`Literal["GET", "OPTIONS", "HEAD", "POST", "PUT", "PATCH", "DELETE"]`): - HTTP method to perform. - url (`str`): - The URL of the resource to fetch. - max_retries (`int`, *optional*, defaults to `5`): - Maximum number of retries, defaults to 5 (no retries). - base_wait_time (`float`, *optional*, defaults to `1`): - Duration (in seconds) to wait before retrying the first time. - Wait time between retries then grows exponentially, capped by - `max_wait_time`. - max_wait_time (`float`, *optional*, defaults to `8`): - Maximum duration (in seconds) to wait before retrying. - retry_on_exceptions (`Type[Exception]` or `Tuple[Type[Exception]]`, *optional*, defaults to `(Timeout, ProxyError,)`): - Define which exceptions must be caught to retry the request. Can be a single - type or a tuple of types. - By default, retry on `Timeout` and `ProxyError`. - retry_on_status_codes (`int` or `Tuple[int]`, *optional*, defaults to `503`): - Define on which status codes the request must be retried. By default, only - HTTP 503 Service Unavailable is retried. - **kwargs (`dict`, *optional*): - kwargs to pass to `requests.request`. - - Example: - ``` - >>> from huggingface_hub.utils import http_backoff - - # Same usage as "requests.request". - >>> response = http_backoff("GET", "https://www.google.com") - >>> response.raise_for_status() - - # If you expect a Gateway Timeout from time to time - >>> http_backoff("PUT", upload_url, data=data, retry_on_status_codes=504) - >>> response.raise_for_status() - ``` - - - - When using `requests` it is possible to stream data by passing an iterator to the - `data` argument. On http backoff this is a problem as the iterator is not reset - after a failed call. This issue is mitigated for file objects or any IO streams - by saving the initial position of the cursor (with `data.tell()`) and resetting the - cursor between each call (with `data.seek()`). For arbitrary iterators, http backoff - will fail. If this is a hard constraint for you, please let us know by opening an - issue on [Github](https://github.com/huggingface/huggingface_hub). - - - """ - if isinstance(retry_on_exceptions, type): # Tuple from single exception type - retry_on_exceptions = (retry_on_exceptions,) - - if isinstance(retry_on_status_codes, int): # Tuple from single status code - retry_on_status_codes = (retry_on_status_codes,) - - nb_tries = 0 - sleep_time = base_wait_time - - # If `data` is used and is a file object (or any IO), it will be consumed on the - # first HTTP request. 
We need to save the initial position so that the full content - # of the file is re-sent on http backoff. See warning tip in docstring. - io_obj_initial_pos = None - if "data" in kwargs and isinstance(kwargs["data"], io.IOBase): - io_obj_initial_pos = kwargs["data"].tell() - - session = get_session() - while True: - nb_tries += 1 - try: - # If `data` is used and is a file object (or any IO), set back cursor to - # initial position. - if io_obj_initial_pos is not None: - kwargs["data"].seek(io_obj_initial_pos) - - # Perform request and return if status_code is not in the retry list. - response = session.request(method=method, url=url, **kwargs) - if response.status_code not in retry_on_status_codes: - return response - - # Wrong status code returned (HTTP 503 for instance) - logger.warning(f"HTTP Error {response.status_code} thrown while requesting {method} {url}") - if nb_tries > max_retries: - response.raise_for_status() # Will raise uncaught exception - # We return response to avoid infinite loop in the corner case where the - # user ask for retry on a status code that doesn't raise_for_status. - return response - - except retry_on_exceptions as err: - logger.warning(f"'{err}' thrown while requesting {method} {url}") - - if nb_tries > max_retries: - raise err - - # Sleep for X seconds - logger.warning(f"Retrying in {sleep_time}s [Retry {nb_tries}/{max_retries}].") - time.sleep(sleep_time) - - # Update sleep time for next retry - sleep_time = min(max_wait_time, sleep_time * 2) # Exponential backoff diff --git a/spaces/DShrimp/PoseMaker/app.py b/spaces/DShrimp/PoseMaker/app.py deleted file mode 100644 index 72680d091b192446cdc1feebaea75c6dc6c6ebc9..0000000000000000000000000000000000000000 --- a/spaces/DShrimp/PoseMaker/app.py +++ /dev/null @@ -1,127 +0,0 @@ -import gradio as gr -import numpy as np -import cv2 -from fastapi import FastAPI, Request, Response -from src.body import Body - -body_estimation = Body('model/body_pose_model.pth') - -def pil2cv(image): - ''' PIL型 -> OpenCV型 ''' - new_image = np.array(image, dtype=np.uint8) - if new_image.ndim == 2: # モノクロ - pass - elif new_image.shape[2] == 3: # カラー - new_image = cv2.cvtColor(new_image, cv2.COLOR_RGB2BGR) - elif new_image.shape[2] == 4: # 透過 - new_image = cv2.cvtColor(new_image, cv2.COLOR_RGBA2BGRA) - return new_image - -with open("static/poseEditor.js", "r") as f: - file_contents = f.read() - -app = FastAPI() - -@app.middleware("http") -async def some_fastapi_middleware(request: Request, call_next): - path = request.scope['path'] # get the request route - response = await call_next(request) - - if path == "/": - response_body = "" - async for chunk in response.body_iterator: - response_body += chunk.decode() - - some_javascript = f""" - - """ - - response_body = response_body.replace("", some_javascript + "") - - del response.headers["content-length"] - - return Response( - content=response_body, - status_code=response.status_code, - headers=dict(response.headers), - media_type=response.media_type - ) - - return response - -# make cndidate to json -def candidate_to_json_string(arr): - a = [f'[{x:.2f}, {y:.2f}]' for x, y, *_ in arr] - return '[' + ', '.join(a) + ']' - -# make subset to json -def subset_to_json_string(arr): - arr_str = ','.join(['[' + ','.join([f'{num:.2f}' for num in row]) + ']' for row in arr]) - return '[' + arr_str + ']' - -def estimate_body(source): - if source == None: - return None - - candidate, subset = body_estimation(pil2cv(source)) - return "{ \"candidate\": " + candidate_to_json_string(candidate) + ", 
\"subset\": " + subset_to_json_string(subset) + " }" - -def image_changed(image): - if (image == None): - return {}, 512, 512 - json = estimate_body(image) - return json, image.width, image.height - -html_text = f""" - - - """ - -with gr.Blocks() as demo: - gr.Markdown("""### Usage - -Choose one of the following methods to edit the pose: - -| Style | Description | -| -----------------| ----------------------------------------------------------------------------------------- | -| Pose recognition | Upload an image and click "Start edit". | -| Input json | Input json to "Json source" and click "Input Json", edit the width/height, then click "Start edit". | -| Free style | Edit the width/height, then click "Start edit". | - -To save the pose image, click "Save". -To export the pose data, click "Save" and "Copy to clipboard" of "Json" section. -""") - with gr.Row(): - with gr.Column(scale=1): - source = gr.Image(type="pil") - width = gr.Slider(label="Width", mininmum=512, maximum=1024, step=64, value=512, key="Width", interactive=True) - height = gr.Slider(label="Height", mininmum=512, maximum=1024, step=64, value=512, key="Height", interactive=True) - startBtn = gr.Button(value="Start edit") - json = gr.JSON(label="Json", lines=10) - jsonInput = gr.Textbox(label="Json source", lines=10) - jsonInputBtn = gr.Button(value="Input Json") - with gr.Column(scale=2): - html = gr.HTML(html_text) - saveBtn = gr.Button(value="Save") - gr.HTML("
  • ctrl + drag to scale
  • alt + drag to translate
  • shift + drag to rotate(move right first, then up or down)
") - - source.change( - fn = image_changed, - inputs = [source], - outputs = [json, width, height]) - startBtn.click( - fn = None, - inputs = [json, width, height], - outputs = [], - _js="(json, w, h) => { initializePose(json,w,h); return []; }") - saveBtn.click( - fn = None, - inputs = [], outputs = [json], - _js="() => { return [savePose()]; }") - jsonInputBtn.click( - fn = lambda x: x, - inputs = [jsonInput], outputs = [json]) - -gr.mount_gradio_app(app, demo, path="/") diff --git a/spaces/Daffa/image-classification/app.py b/spaces/Daffa/image-classification/app.py deleted file mode 100644 index 754d61ad22d6113b57e224489e5713f20ad3cfc0..0000000000000000000000000000000000000000 --- a/spaces/Daffa/image-classification/app.py +++ /dev/null @@ -1,4 +0,0 @@ -import gradio as gr -from transformers import pipeline -pipe = pipeline("image-classification") -gr.Interface.from_pipeline(pipe).launch() \ No newline at end of file diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/mapper/__init__.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/mapper/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/utils/models_utils.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/utils/models_utils.py deleted file mode 100644 index 53b2c3fa9d7035364dd34384fcdab78c1ae5c6af..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/utils/models_utils.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - - -import pickle -import functools -import torch -from pti.pti_configs import paths_config, global_config - - -def toogle_grad(model, flag=True): - for p in model.parameters(): - p.requires_grad = flag - - -def load_tuned_G(run_id, type): - new_G_path = f'{paths_config.checkpoints_dir}/model_{run_id}_{type}.pt' - with open(new_G_path, 'rb') as f: - new_G = torch.load(f).to(global_config.device).eval() - new_G = new_G.float() - toogle_grad(new_G, False) - return new_G - - -def load_old_G(): - with open(paths_config.stylegan2_ada_shhq, 'rb') as f: - old_G = pickle.load(f)['G_ema'].to(global_config.device).eval() - old_G = old_G.float() - return old_G diff --git a/spaces/Dusan/clickbaitonator/README.md b/spaces/Dusan/clickbaitonator/README.md deleted file mode 100644 index c3494dadac3f2458669f2fc25f1eb0e0d1367c49..0000000000000000000000000000000000000000 --- a/spaces/Dusan/clickbaitonator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Clickbaitonator -emoji: 💩 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.0.24 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EDGAhab/VITS-Aatrox-AI/text/symbols.py b/spaces/EDGAhab/VITS-Aatrox-AI/text/symbols.py deleted file mode 100644 index eddca6a4fc8c8bef72a4467fa3dc916a6ec667c3..0000000000000000000000000000000000000000 --- a/spaces/EDGAhab/VITS-Aatrox-AI/text/symbols.py +++ /dev/null @@ -1,25 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Defines the set of symbols used in text input to the model. 
-''' -_pad = '_' -_punctuation = ';:,.!?¡¿—…"«»“” ' - -_punctuation_zh = ';:,。!?-“”《》、()BP…—~.\·『』・ ' -_letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' - -_numbers = '1234567890' -_others = '' - -_letters_ipa = "ɑɐɒæɓʙβɔɕçɗɖðʤəɘɚɛɜɝɞɟʄɡɠɢʛɦɧħɥʜɨɪʝɭɬɫɮʟɱɯɰŋɳɲɴøɵɸθœɶʘɹɺɾɻʀʁɽʂʃʈʧʉʊʋⱱʌɣɤʍχʎʏʑʐʒʔʡʕʢǀǁǂǃˈˌːˑʼʴʰʱʲʷˠˤ˞↓↑→↗↘'̩'ᵻ" - - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) + list(_letters_ipa) - -symbols_zh = [_pad] + list(_punctuation_zh) + list(_letters) + list(_numbers) - -# Special symbol ids -SPACE_ID = symbols.index(" ") - diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/data/datasets/__init__.py b/spaces/EPFL-VILAB/MultiMAE/mask2former/data/datasets/__init__.py deleted file mode 100644 index 403a678e3ba6655135f36e788ad53587f05d6d1e..0000000000000000000000000000000000000000 --- a/spaces/EPFL-VILAB/MultiMAE/mask2former/data/datasets/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from . import ( - register_ade20k_full, - register_ade20k_panoptic, - register_coco_stuff_10k, - register_mapillary_vistas, - register_coco_panoptic_annos_semseg, - register_ade20k_instance, - register_mapillary_vistas_panoptic, -) diff --git a/spaces/Ekimetrics/Biomap/biomap/utils copy.py b/spaces/Ekimetrics/Biomap/biomap/utils copy.py deleted file mode 100644 index 217ad7e4131fc1e6c88dc395cdadb28f3d832f92..0000000000000000000000000000000000000000 --- a/spaces/Ekimetrics/Biomap/biomap/utils copy.py +++ /dev/null @@ -1,675 +0,0 @@ -import collections -import os -from os.path import join -import io - -import matplotlib.pyplot as plt -import numpy as np -import torch.multiprocessing -import torch.nn as nn -import torch.nn.functional as F -import wget - -import datetime - -from dateutil.relativedelta import relativedelta -from PIL import Image -from scipy.optimize import linear_sum_assignment -from torch._six import string_classes -from torch.utils.data._utils.collate import np_str_obj_array_pattern, default_collate_err_msg_format -from torchmetrics import Metric -from torchvision import models -from torchvision import transforms as T -from torch.utils.tensorboard.summary import hparams -import matplotlib as mpl -from PIL import Image - -import matplotlib as mpl - -import torch.multiprocessing -import torchvision.transforms as T - -import plotly.graph_objects as go -import plotly.express as px -import numpy as np -from plotly.subplots import make_subplots - -import os -os.environ['KMP_DUPLICATE_LIB_OK'] = 'True' - -colors = ("red", "palegreen", "green", "steelblue", "blue", "yellow", "lightgrey") -class_names = ('Buildings', 'Cultivation', 'Natural green', 'Wetland', 'Water', 'Infrastructure', 'Background') -mapping_class = { - "Buildings": 1, - "Cultivation": 2, - "Natural green": 3, - "Wetland": 4, - "Water": 5, - "Infrastructure": 6, - "Background": 0, -} - -score_attribution = { - "Buildings" : 0., - "Cultivation": 0.3, - "Natural green": 1., - "Wetland": 0.9, - "Water": 0.9, - "Infrastructure": 0., - "Background": 0. 
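    # Illustrative reading of the weights above: all values lie in [0, 1];
    # natural green scores highest (1.0), wetland and water 0.9, cultivation 0.3,
    # while built-up classes and background contribute 0.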
-} -bounds = list(np.arange(len(mapping_class.keys()) + 1) + 1) -cmap = mpl.colors.ListedColormap(colors) -norm = mpl.colors.BoundaryNorm(bounds, cmap.N) - -def compute_biodiv_score(class_image): - """Compute the biodiversity score of an image - - Args: - image (_type_): _description_ - - Returns: - biodiversity_score: the biodiversity score associated to the landscape of the image - """ - score_matrice = class_image.copy().astype(int) - for key in mapping_class.keys(): - score_matrice = np.where(score_matrice==mapping_class[key], score_attribution[key], score_matrice) - number_of_pixel = np.prod(list(score_matrice.shape)) - score = np.sum(score_matrice)/number_of_pixel - score_details = { - key: np.sum(np.where(class_image == mapping_class[key], 1, 0)) - for key in mapping_class.keys() - if key not in ["background"] - } - return score, score_details - -def plot_imgs_labels(months, imgs, imgs_label, nb_values, scores) : - scores = [0.89, 0.70, 0.3, 0.2] - - # fig2 = px.imshow(np.array(imgs), animation_frame=0, binary_string=True) - # fig3 = px.imshow(np.array(imgs_label), animation_frame=0, binary_string=True) - - # # Scores - # scatters = [go.Scatter( - # x=months[:i+1], - # y=scores[:i+1], - # mode="lines+markers+text", - # marker_color="black", - # text = [f"{score:.4f}" for score in scores[:i+1]], - # textposition="top center", - - # ) for i in range(len(scores))] - - - - # fig.add_trace(fig2["frames"][0]["data"][0], row=1, col=1) - # fig.add_trace(fig3["frames"][0]["data"][0], row=1, col=2) - - # fig.add_trace(go.Pie(labels = class_names, - # values = [nb_values[0][key] for key in mapping_class.keys()], - # marker_colors = colors, - # name="Segment repartition", - # textposition='inside', - # texttemplate = "%{percent:.0%}", - # textfont_size=14 - # ), - # row=1, col=3) - - - # fig.add_trace(scatters[0], row=1, col=4) - # # fig.update_traces(selector=dict(type='scatter')) - - # number_frames = len(imgs) - # frames = [dict( - # name = k, - # data = [ fig2["frames"][k]["data"][0], - # fig3["frames"][k]["data"][0], - # go.Pie(labels = class_names, - # values = [nb_values[k][key] for key in mapping_class.keys()], - # marker_colors = colors, - # name="Segment repartition", - # textposition='inside', - # texttemplate = "%{percent:.0%}", - # textfont_size=14 - # ), - # scatters[k] - # ], - # traces=[0, 1, 2, 3] - # ) for k in range(number_frames)] - - # updatemenus = [dict(type='buttons', - # buttons=[dict( - # label='Play', - # method='animate', - # args=[ - # [f'{k}' for k in range(number_frames)], - # dict( - # frame=dict(duration=500, redraw=False), - # transition=dict(duration=0), - # # easing='linear', - # # fromcurrent=True, - # # mode='immediate' - # ) - # ]) - # ], - # direction= 'left', - # pad=dict(r= 10, t=85), - # showactive=True, x= 0.1, y= 0.1, xanchor= 'right', yanchor= 'bottom') - # ] - - # sliders = [{'yanchor': 'top', - # 'xanchor': 'left', - # 'currentvalue': {'font': {'size': 16}, 'prefix': 'Frame: ', 'visible': False, 'xanchor': 'right'}, - # 'transition': {'duration': 500.0, 'easing': 'linear'}, - # 'pad': {'b': 10, 't': 50}, - # 'len': 0.9, 'x': 0.1, 'y': 0, - # 'steps': [{'args': [[k], {'frame': {'duration': 500.0, 'easing': 'linear', 'redraw': False}, - # 'transition': {'duration': 0, 'easing': 'linear'}}], - # 'label': months[k], 'method': 'animate'} for k in range(number_frames) - # ]}] - - - # fig.update(frames=frames, - # layout={ - # "xaxis1": { - # "autorange":True, - # 'showgrid': False, - # 'zeroline': False, # thick line at x=0 - # 'visible': False, # 
numbers below - # }, - - # "yaxis1": { - # "autorange":True, - # 'showgrid': False, - # 'zeroline': False, - # 'visible': False,}, - - # "xaxis2": { - # "autorange":True, - # 'showgrid': False, - # 'zeroline': False, - # 'visible': False, - # }, - - # "yaxis2": { - # "autorange":True, - # 'showgrid': False, - # 'zeroline': False, - # 'visible': False,}, - - - # "xaxis4": { - # "ticktext": months, - # "tickvals": months, - # "tickangle": 90, - # }, - # "yaxis4": { - # 'range': [min(scores)*0.9, max(scores)* 1.1], - # 'showgrid': False, - # 'zeroline': False, - # 'visible': True - # }, - # }) - # fig.update_layout( - # updatemenus=updatemenus, - # sliders=sliders, - # # legend=dict( - # # yanchor= 'bottom', - # # xanchor= 'center', - # # orientation="h"), - - # ) - # Scores - fig = make_subplots( - rows=1, cols=4, - specs=[[{"type": "image"},{"type": "image"}, {"type": "pie"}, {"type": "scatter"}]], - subplot_titles=("Localisation visualization", "Labeled visualisation", "Segments repartition", "Biodiversity scores") - ) - - fig2 = px.imshow(np.array(imgs), animation_frame=0, binary_string=True) - fig3 = px.imshow(np.array(imgs_label), animation_frame=0, binary_string=True) - pie_charts = [go.Pie(labels = class_names, - values = [nb_values[k][key] for key in mapping_class.keys()], - marker_colors = colors, - name="Segment repartition", - textposition='inside', - texttemplate = "%{percent:.0%}", - textfont_size=14, - ) - for k in range(len(scores))] - scatters = [go.Scatter( - x=months[:i+1], - y=scores[:i+1], - mode="lines+markers+text", - marker_color="black", - text = [f"{score:.4f}" for score in scores[:i+1]], - textposition="top center", - ) for i in range(len(scores))] - - fig.add_trace(fig2["frames"][0]["data"][0], row=1, col=1) - fig.add_trace(fig3["frames"][0]["data"][0], row=1, col=2) - fig.add_trace(pie_charts[0], row=1, col=3) - fig.add_trace(scatters[0], row=1, col=4) - - start_date = datetime.datetime.strptime(months[0], "%Y-%m-%d") - relativedelta(months=1) - end_date = datetime.datetime.strptime(months[-1], "%Y-%m-%d") + relativedelta(months=1) - interval = [start_date.strftime("%Y-%m-%d"),end_date.strftime("%Y-%m-%d")] - fig.update_layout({ - "xaxis": { - "autorange":True, - 'showgrid': False, - 'zeroline': False, # thick line at x=0 - 'visible': False, # numbers below - }, - - "yaxis": { - "autorange":True, - 'showgrid': False, - 'zeroline': False, - 'visible': False,}, - - "xaxis1": { - "range":[0,imgs[0].shape[1]], - 'showgrid': False, - 'zeroline': False, - 'visible': False, - }, - - "yaxis1": { - "range":[imgs[0].shape[0],0], - 'showgrid': False, - 'zeroline': False, - 'visible': False,}, - - - "xaxis3": { - "dtick":"M3", - "range":interval - }, - "yaxis3": { - 'range': [min(scores)*0.9, max(scores)* 1.1], - 'showgrid': False, - 'zeroline': False, - 'visible': True - }} - ) - - frames = [dict( - name = k, - data = [ fig2["frames"][k]["data"][0], - fig3["frames"][k]["data"][0], - pie_charts[k], - scatters[k] - ], - - traces=[0,1,2,3] - ) for k in range(len(scores))] - - - updatemenus = [dict(type='buttons', - buttons=[dict(label='Play', - method='animate', - args=[ - [f'{k}' for k in range(len(scores))], - dict( - frame=dict(duration=500, redraw=False), - transition=dict(duration=0), - # easing='linear', - # fromcurrent=True, - # mode='immediate' - ) - ] - - )], - direction= 'left', - pad=dict(r= 10, t=85), - showactive =True, x= 0.1, y= 0, xanchor= 'right', yanchor= 'top') - ] - - sliders = [{'yanchor': 'top', - 'xanchor': 'left', - 'currentvalue': { - 'font': {'size': 
-                    'font': {'size': 16},
-                    'visible': True,
-                    'xanchor': 'right'},
-                'transition': {
-                    'duration': 500.0,
-                    'easing': 'linear'},
-                'pad': {'b': 10, 't': 50},
-                'len': 0.9, 'x': 0.1, 'y': 0,
-                'steps': [{'args': [[k], {'frame': {'duration': 500.0, 'redraw': False},
-                                          'transition': {'duration': 0}}],
-                           'label': months[k], 'method': 'animate'} for k in range(len(scores))
-                          ]
-                }]
-
-    fig.update_layout(updatemenus=updatemenus,
-                      sliders=sliders,
-                      )
-    fig.update(frames=frames)
-    return fig
-
-
-def transform_to_pil(output, alpha=0.3):
-    # Transform img with torch
-    img = torch.moveaxis(prep_for_plot(output['img']), -1, 0)
-    img = T.ToPILImage()(img)
-
-    cmaplist = np.array([np.array(cmap(i)) for i in range(cmap.N)])
-    labels = np.array(output['linear_preds']) - 1
-    label = T.ToPILImage()((cmaplist[labels] * 255).astype(np.uint8))
-
-    # Overlay labels with img with alpha
-    background = img.convert("RGBA")
-    overlay = label.convert("RGBA")
-
-    labeled_img = Image.blend(background, overlay, alpha)
-
-    return img, label, labeled_img
-
-
-def prep_for_plot(img, rescale=True, resize=None):
-    if resize is not None:
-        img = F.interpolate(img.unsqueeze(0), resize, mode="bilinear")
-    else:
-        img = img.unsqueeze(0)
-
-    plot_img = unnorm(img).squeeze(0).cpu().permute(1, 2, 0)
-    if rescale:
-        plot_img = (plot_img - plot_img.min()) / (plot_img.max() - plot_img.min())
-    return plot_img
-
-
-def add_plot(writer, name, step):
-    buf = io.BytesIO()
-    plt.savefig(buf, format='jpeg', dpi=100)
-    buf.seek(0)
-    image = Image.open(buf)
-    image = T.ToTensor()(image)
-    writer.add_image(name, image, step)
-    plt.clf()
-    plt.close()
-
-
-@torch.jit.script
-def shuffle(x):
-    return x[torch.randperm(x.shape[0])]
-
-
-def add_hparams_fixed(writer, hparam_dict, metric_dict, global_step):
-    exp, ssi, sei = hparams(hparam_dict, metric_dict)
-    writer.file_writer.add_summary(exp)
-    writer.file_writer.add_summary(ssi)
-    writer.file_writer.add_summary(sei)
-    for k, v in metric_dict.items():
-        writer.add_scalar(k, v, global_step)
-
-
-@torch.jit.script
-def resize(classes: torch.Tensor, size: int):
-    return F.interpolate(classes, (size, size), mode="bilinear", align_corners=False)
-
-
-def one_hot_feats(labels, n_classes):
-    return F.one_hot(labels, n_classes).permute(0, 3, 1, 2).to(torch.float32)
-
-
-def load_model(model_type, data_dir):
-    if model_type == "robust_resnet50":
-        model = models.resnet50(pretrained=False)
-        model_file = join(data_dir, 'imagenet_l2_3_0.pt')
-        if not os.path.exists(model_file):
-            wget.download("http://6.869.csail.mit.edu/fa19/psets19/pset6/imagenet_l2_3_0.pt",
-                          model_file)
-        model_weights = torch.load(model_file)
-        model_weights_modified = {name.split('model.')[1]: value for name, value in model_weights['model'].items() if
-                                  'model' in name}
-        model.load_state_dict(model_weights_modified)
-        model = nn.Sequential(*list(model.children())[:-1])
-    elif model_type == "densecl":
-        model = models.resnet50(pretrained=False)
-        model_file = join(data_dir, 'densecl_r50_coco_1600ep.pth')
-        if not os.path.exists(model_file):
-            wget.download("https://cloudstor.aarnet.edu.au/plus/s/3GapXiWuVAzdKwJ/download",
-                          model_file)
-        model_weights = torch.load(model_file)
-        model.load_state_dict(model_weights['state_dict'], strict=False)
-        model = nn.Sequential(*list(model.children())[:-1])
-    elif model_type == "resnet50":
-        model = models.resnet50(pretrained=True)
-        model = nn.Sequential(*list(model.children())[:-1])
-    elif
model_type == "mocov2": - model = models.resnet50(pretrained=False) - model_file = join(data_dir, 'moco_v2_800ep_pretrain.pth.tar') - if not os.path.exists(model_file): - wget.download("https://dl.fbaipublicfiles.com/moco/moco_checkpoints/" - "moco_v2_800ep/moco_v2_800ep_pretrain.pth.tar", model_file) - checkpoint = torch.load(model_file) - # rename moco pre-trained keys - state_dict = checkpoint['state_dict'] - for k in list(state_dict.keys()): - # retain only encoder_q up to before the embedding layer - if k.startswith('module.encoder_q') and not k.startswith('module.encoder_q.fc'): - # remove prefix - state_dict[k[len("module.encoder_q."):]] = state_dict[k] - # delete renamed or unused k - del state_dict[k] - msg = model.load_state_dict(state_dict, strict=False) - assert set(msg.missing_keys) == {"fc.weight", "fc.bias"} - model = nn.Sequential(*list(model.children())[:-1]) - elif model_type == "densenet121": - model = models.densenet121(pretrained=True) - model = nn.Sequential(*list(model.children())[:-1] + [nn.AdaptiveAvgPool2d((1, 1))]) - elif model_type == "vgg11": - model = models.vgg11(pretrained=True) - model = nn.Sequential(*list(model.children())[:-1] + [nn.AdaptiveAvgPool2d((1, 1))]) - else: - raise ValueError("No model: {} found".format(model_type)) - - model.eval() - model.cuda() - return model - - -class UnNormalize(object): - def __init__(self, mean, std): - self.mean = mean - self.std = std - - def __call__(self, image): - image2 = torch.clone(image) - for t, m, s in zip(image2, self.mean, self.std): - t.mul_(s).add_(m) - return image2 - - -normalize = T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) -unnorm = UnNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) - - -class ToTargetTensor(object): - def __call__(self, target): - return torch.as_tensor(np.array(target), dtype=torch.int64).unsqueeze(0) - - -def prep_args(): - import sys - - old_args = sys.argv - new_args = [old_args.pop(0)] - while len(old_args) > 0: - arg = old_args.pop(0) - if len(arg.split("=")) == 2: - new_args.append(arg) - elif arg.startswith("--"): - new_args.append(arg[2:] + "=" + old_args.pop(0)) - else: - raise ValueError("Unexpected arg style {}".format(arg)) - sys.argv = new_args - - -def get_transform(res, is_label, crop_type): - if crop_type == "center": - cropper = T.CenterCrop(res) - elif crop_type == "random": - cropper = T.RandomCrop(res) - elif crop_type is None: - cropper = T.Lambda(lambda x: x) - res = (res, res) - else: - raise ValueError("Unknown Cropper {}".format(crop_type)) - if is_label: - return T.Compose([T.Resize(res, Image.NEAREST), - cropper, - ToTargetTensor()]) - else: - return T.Compose([T.Resize(res, Image.NEAREST), - cropper, - T.ToTensor(), - normalize]) - - -def _remove_axes(ax): - ax.xaxis.set_major_formatter(plt.NullFormatter()) - ax.yaxis.set_major_formatter(plt.NullFormatter()) - ax.set_xticks([]) - ax.set_yticks([]) - - -def remove_axes(axes): - if len(axes.shape) == 2: - for ax1 in axes: - for ax in ax1: - _remove_axes(ax) - else: - for ax in axes: - _remove_axes(ax) - - -class UnsupervisedMetrics(Metric): - def __init__(self, prefix: str, n_classes: int, extra_clusters: int, compute_hungarian: bool, - dist_sync_on_step=True): - # call `self.add_state`for every internal state that is needed for the metrics computations - # dist_reduce_fx indicates the function that should be used to reduce - # state from multiple processes - super().__init__(dist_sync_on_step=dist_sync_on_step) - - self.n_classes = n_classes - self.extra_clusters = extra_clusters - 
self.compute_hungarian = compute_hungarian - self.prefix = prefix - self.add_state("stats", - default=torch.zeros(n_classes + self.extra_clusters, n_classes, dtype=torch.int64), - dist_reduce_fx="sum") - - def update(self, preds: torch.Tensor, target: torch.Tensor): - with torch.no_grad(): - actual = target.reshape(-1) - preds = preds.reshape(-1) - mask = (actual >= 0) & (actual < self.n_classes) & (preds >= 0) & (preds < self.n_classes) - actual = actual[mask] - preds = preds[mask] - self.stats += torch.bincount( - (self.n_classes + self.extra_clusters) * actual + preds, - minlength=self.n_classes * (self.n_classes + self.extra_clusters)) \ - .reshape(self.n_classes, self.n_classes + self.extra_clusters).t().to(self.stats.device) - - def map_clusters(self, clusters): - if self.extra_clusters == 0: - return torch.tensor(self.assignments[1])[clusters] - else: - missing = sorted(list(set(range(self.n_classes + self.extra_clusters)) - set(self.assignments[0]))) - cluster_to_class = self.assignments[1] - for missing_entry in missing: - if missing_entry == cluster_to_class.shape[0]: - cluster_to_class = np.append(cluster_to_class, -1) - else: - cluster_to_class = np.insert(cluster_to_class, missing_entry + 1, -1) - cluster_to_class = torch.tensor(cluster_to_class) - return cluster_to_class[clusters] - - def compute(self): - if self.compute_hungarian: - self.assignments = linear_sum_assignment(self.stats.detach().cpu(), maximize=True) - # print(self.assignments) - if self.extra_clusters == 0: - self.histogram = self.stats[np.argsort(self.assignments[1]), :] - if self.extra_clusters > 0: - self.assignments_t = linear_sum_assignment(self.stats.detach().cpu().t(), maximize=True) - histogram = self.stats[self.assignments_t[1], :] - missing = list(set(range(self.n_classes + self.extra_clusters)) - set(self.assignments[0])) - new_row = self.stats[missing, :].sum(0, keepdim=True) - histogram = torch.cat([histogram, new_row], axis=0) - new_col = torch.zeros(self.n_classes + 1, 1, device=histogram.device) - self.histogram = torch.cat([histogram, new_col], axis=1) - else: - self.assignments = (torch.arange(self.n_classes).unsqueeze(1), - torch.arange(self.n_classes).unsqueeze(1)) - self.histogram = self.stats - - tp = torch.diag(self.histogram) - fp = torch.sum(self.histogram, dim=0) - tp - fn = torch.sum(self.histogram, dim=1) - tp - - iou = tp / (tp + fp + fn) - prc = tp / (tp + fn) - opc = torch.sum(tp) / torch.sum(self.histogram) - - metric_dict = {self.prefix + "mIoU": iou[~torch.isnan(iou)].mean().item(), - self.prefix + "Accuracy": opc.item()} - return {k: 100 * v for k, v in metric_dict.items()} - - -def flexible_collate(batch): - r"""Puts each data field into a tensor with outer dimension batch size""" - - elem = batch[0] - elem_type = type(elem) - if isinstance(elem, torch.Tensor): - out = None - if torch.utils.data.get_worker_info() is not None: - # If we're in a background process, concatenate directly into a - # shared memory tensor to avoid an extra copy - numel = sum([x.numel() for x in batch]) - storage = elem.storage()._new_shared(numel) - out = elem.new(storage) - try: - return torch.stack(batch, 0, out=out) - except RuntimeError: - return batch - elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \ - and elem_type.__name__ != 'string_': - if elem_type.__name__ == 'ndarray' or elem_type.__name__ == 'memmap': - # array of string classes and object - if np_str_obj_array_pattern.search(elem.dtype.str) is not None: - raise 
TypeError(default_collate_err_msg_format.format(elem.dtype)) - - return flexible_collate([torch.as_tensor(b) for b in batch]) - elif elem.shape == (): # scalars - return torch.as_tensor(batch) - elif isinstance(elem, float): - return torch.tensor(batch, dtype=torch.float64) - elif isinstance(elem, int): - return torch.tensor(batch) - elif isinstance(elem, string_classes): - return batch - elif isinstance(elem, collections.abc.Mapping): - return {key: flexible_collate([d[key] for d in batch]) for key in elem} - elif isinstance(elem, tuple) and hasattr(elem, '_fields'): # namedtuple - return elem_type(*(flexible_collate(samples) for samples in zip(*batch))) - elif isinstance(elem, collections.abc.Sequence): - # check to make sure that the elements in batch have consistent size - it = iter(batch) - elem_size = len(next(it)) - if not all(len(elem) == elem_size for elem in it): - raise RuntimeError('each element in list of batch should be of equal size') - transposed = zip(*batch) - return [flexible_collate(samples) for samples in transposed] - - raise TypeError(default_collate_err_msg_format.format(elem_type)) - - -if __name__ == "__main__": - fig = plot_imgs_labels(months, imgs, imgs_label, nb_values, scores) diff --git a/spaces/EleutherAI/VQGAN_CLIP/CLIP/model-card.md b/spaces/EleutherAI/VQGAN_CLIP/CLIP/model-card.md deleted file mode 100644 index 2d22e25bea89fdbccdaa2809fbeb83e0a7cfaa07..0000000000000000000000000000000000000000 --- a/spaces/EleutherAI/VQGAN_CLIP/CLIP/model-card.md +++ /dev/null @@ -1,120 +0,0 @@ -# Model Card: CLIP - -Inspired by [Model Cards for Model Reporting (Mitchell et al.)](https://arxiv.org/abs/1810.03993) and [Lessons from Archives (Jo & Gebru)](https://arxiv.org/pdf/1912.10389.pdf), we’re providing some accompanying information about the multimodal model. - -## Model Details - -The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within. - -### Model Date - -January 2021 - -### Model Type - -The base model uses a ResNet50 with several modifications as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer. - -### Model Versions - -Initially, we’ve released one CLIP model based on the Vision Transformer architecture equivalent to ViT-B/32, along with the RN50 model, using the architecture equivalent to ResNet-50. - -As part of the staged release process, we have also released the RN101 model, as well as RN50x4, a RN50 scaled up 4x according to the [EfficientNet](https://arxiv.org/abs/1905.11946) scaling rule. In July 2021, we additionally released the RN50x16 and ViT-B/16 models. - -Please see the paper linked below for further details about their specification. - -### Documents - -- [Blog Post](https://openai.com/blog/clip/) -- [CLIP Paper](https://arxiv.org/abs/2103.00020) - - - -## Model Use - -### Intended Use - -The model is intended as a research output for research communities. 
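-As a minimal sketch of this intended research usage (assuming the `clip` package released with the CLIP repository; the checkpoint name, image path, and label prompts below are only illustrative):
-
-```python
-import clip
-import torch
-from PIL import Image
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-model, preprocess = clip.load("ViT-B/32", device=device)  # image and text encoders
-
-image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
-text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)
-
-with torch.no_grad():
-    # Similarity logits between the image and each candidate caption
-    logits_per_image, logits_per_text = model(image, text)
-    probs = logits_per_image.softmax(dim=-1).cpu().numpy()
-```
-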
-We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
-
-#### Primary intended uses
-
-The primary intended users of these models are AI researchers.
-
-We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
-
-### Out-of-Scope Use Cases
-
-**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP’s performance with different class taxonomies. Untested and unconstrained deployment of the model in any use case is therefore currently potentially harmful.
-
-Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
-
-Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
-
-
-
-## Data
-
-The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet, which skew towards more developed nations and younger, male users.
-
-### Data Mission Statement
-
-Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
-
-
-
-## Performance and Limitations
-
-### Performance
-
-We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets, ranging from OCR to texture recognition to fine-grained classification.
-The paper describes model performance on the following datasets:
-
-- Food101
-- CIFAR10
-- CIFAR100
-- Birdsnap
-- SUN397
-- Stanford Cars
-- FGVC Aircraft
-- VOC2007
-- DTD
-- Oxford-IIIT Pet dataset
-- Caltech101
-- Flowers102
-- MNIST
-- SVHN
-- IIIT5K
-- Hateful Memes
-- SST-2
-- UCF101
-- Kinetics700
-- Country211
-- CLEVR Counting
-- KITTI Distance
-- STL-10
-- RareAct
-- Flickr30
-- MSCOCO
-- ImageNet
-- ImageNet-A
-- ImageNet-R
-- ImageNet Sketch
-- ObjectNet (ImageNet Overlap)
-- Youtube-BB
-- ImageNet-Vid
-
-## Limitations
-
-CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine-grained classification and counting objects. CLIP also raises issues with regard to fairness and bias, which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP has an important limitation: in many cases we have used linear probes to evaluate the performance of CLIP, and there is evidence suggesting that linear probes can underestimate model performance.
-
-### Bias and Fairness
-
-We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details are captured in the Broader Impacts section of the paper.)
-
-We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (we default to the race categories as they are constructed in the Fairface dataset) in order to assess the quality of performance across different demographics. We found accuracy >96% across all races for gender classification, with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of these evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate the performance of the model across people and to surface potential risks, not to demonstrate endorsement of or enthusiasm for such tasks.
-
-
-
-## Feedback
-
-### Where to send questions or comments about the model
-
-Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
diff --git a/spaces/Eriberto/chatGPT/README.md b/spaces/Eriberto/chatGPT/README.md
deleted file mode 100644
index 799948c169d953914e91d4e1bb867c5670e65ba7..0000000000000000000000000000000000000000
--- a/spaces/Eriberto/chatGPT/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ChatGPT
-emoji: 📊
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
-duplicated_from: yizhangliu/chatGPT
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/EronSamez/RVC_HFmeu/infer/modules/ipex/gradscaler.py b/spaces/EronSamez/RVC_HFmeu/infer/modules/ipex/gradscaler.py
deleted file mode 100644
index 3c265ddb37453f02870afb481360c9cc30b05d81..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/infer/modules/ipex/gradscaler.py
+++ /dev/null
@@ -1,179 +0,0 @@
-from collections import defaultdict
-import torch
-import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import
-import intel_extension_for_pytorch._C as core # pylint: disable=import-error, unused-import
-
-# pylint: disable=protected-access, missing-function-docstring, line-too-long
-
-OptState = ipex.cpu.autocast._grad_scaler.OptState
-_MultiDeviceReplicator = ipex.cpu.autocast._grad_scaler._MultiDeviceReplicator
-_refresh_per_optimizer_state = ipex.cpu.autocast._grad_scaler._refresh_per_optimizer_state
-
-def _unscale_grads_(self, optimizer, inv_scale, found_inf, allow_fp16): # pylint: disable=unused-argument
-    per_device_inv_scale = _MultiDeviceReplicator(inv_scale)
-    per_device_found_inf = _MultiDeviceReplicator(found_inf)
-
-    # To set up _amp_foreach_non_finite_check_and_unscale_, split grads by device and dtype.
-    # There could be hundreds of grads, so we'd like to iterate through them just once.
-    # However, we don't know their devices or dtypes in advance.
-
-    # https://stackoverflow.com/questions/5029934/defaultdict-of-defaultdict
-    # Google says mypy struggles with defaultdicts type annotations.
-    per_device_and_dtype_grads = defaultdict(lambda: defaultdict(list)) # type: ignore[var-annotated]
-    # sync grad to master weight
-    if hasattr(optimizer, "sync_grad"):
-        optimizer.sync_grad()
-    with torch.no_grad():
-        for group in optimizer.param_groups:
-            for param in group["params"]:
-                if param.grad is None:
-                    continue
-                if (not allow_fp16) and param.grad.dtype == torch.float16:
-                    raise ValueError("Attempting to unscale FP16 gradients.")
-                if param.grad.is_sparse:
-                    # is_coalesced() == False means the sparse grad has values with duplicate indices.
-                    # coalesce() deduplicates indices and adds all values that have the same index.
-                    # For scaled fp16 values, there's a good chance coalescing will cause overflow,
-                    # so we should check the coalesced _values().
-                    if param.grad.dtype is torch.float16:
-                        param.grad = param.grad.coalesce()
-                    to_unscale = param.grad._values()
-                else:
-                    to_unscale = param.grad
-
-                # TODO: is there a way to split by device and dtype without appending in the inner loop?
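-                # Gradients are appended into per-(device, dtype) buckets so that each
-                # homogeneous bucket can be handed to _amp_foreach_non_finite_check_and_unscale_
-                # in a single foreach call; on this IPEX CPU path every gradient is first
-                # moved to the CPU device below before being bucketed.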
- to_unscale = to_unscale.to("cpu") - per_device_and_dtype_grads[to_unscale.device][ - to_unscale.dtype - ].append(to_unscale) - - for _, per_dtype_grads in per_device_and_dtype_grads.items(): - for grads in per_dtype_grads.values(): - core._amp_foreach_non_finite_check_and_unscale_( - grads, - per_device_found_inf.get("cpu"), - per_device_inv_scale.get("cpu"), - ) - - return per_device_found_inf._per_device_tensors - -def unscale_(self, optimizer): - """ - Divides ("unscales") the optimizer's gradient tensors by the scale factor. - :meth:`unscale_` is optional, serving cases where you need to - :ref:`modify or inspect gradients` - between the backward pass(es) and :meth:`step`. - If :meth:`unscale_` is not called explicitly, gradients will be unscaled automatically during :meth:`step`. - Simple example, using :meth:`unscale_` to enable clipping of unscaled gradients:: - ... - scaler.scale(loss).backward() - scaler.unscale_(optimizer) - torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm) - scaler.step(optimizer) - scaler.update() - Args: - optimizer (torch.optim.Optimizer): Optimizer that owns the gradients to be unscaled. - .. warning:: - :meth:`unscale_` should only be called once per optimizer per :meth:`step` call, - and only after all gradients for that optimizer's assigned parameters have been accumulated. - Calling :meth:`unscale_` twice for a given optimizer between each :meth:`step` triggers a RuntimeError. - .. warning:: - :meth:`unscale_` may unscale sparse gradients out of place, replacing the ``.grad`` attribute. - """ - if not self._enabled: - return - - self._check_scale_growth_tracker("unscale_") - - optimizer_state = self._per_optimizer_states[id(optimizer)] - - if optimizer_state["stage"] is OptState.UNSCALED: # pylint: disable=no-else-raise - raise RuntimeError( - "unscale_() has already been called on this optimizer since the last update()." - ) - elif optimizer_state["stage"] is OptState.STEPPED: - raise RuntimeError("unscale_() is being called after step().") - - # FP32 division can be imprecise for certain compile options, so we carry out the reciprocal in FP64. - assert self._scale is not None - inv_scale = self._scale.to("cpu").double().reciprocal().float().to(self._scale.device) - found_inf = torch.full( - (1,), 0.0, dtype=torch.float32, device=self._scale.device - ) - - optimizer_state["found_inf_per_device"] = self._unscale_grads_( - optimizer, inv_scale, found_inf, False - ) - optimizer_state["stage"] = OptState.UNSCALED - -def update(self, new_scale=None): - """ - Updates the scale factor. - If any optimizer steps were skipped the scale is multiplied by ``backoff_factor`` - to reduce it. If ``growth_interval`` unskipped iterations occurred consecutively, - the scale is multiplied by ``growth_factor`` to increase it. - Passing ``new_scale`` sets the new scale value manually. (``new_scale`` is not - used directly, it's used to fill GradScaler's internal scale tensor. So if - ``new_scale`` was a tensor, later in-place changes to that tensor will not further - affect the scale GradScaler uses internally.) - Args: - new_scale (float or :class:`torch.FloatTensor`, optional, default=None): New scale factor. - .. warning:: - :meth:`update` should only be called at the end of the iteration, after ``scaler.step(optimizer)`` has - been invoked for all optimizers used this iteration. - """ - if not self._enabled: - return - - _scale, _growth_tracker = self._check_scale_growth_tracker("update") - - if new_scale is not None: - # Accept a new user-defined scale. 
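-        # A plain float is written into the existing scale tensor in place; a tensor
-        # scale is validated below and then copied, so later in-place changes to the
-        # caller's tensor do not affect the scaler (as the docstring above notes).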
-        if isinstance(new_scale, float):
-            self._scale.fill_(new_scale) # type: ignore[union-attr]
-        else:
-            reason = "new_scale should be a float or a 1-element torch.FloatTensor with requires_grad=False."
-            assert isinstance(new_scale, torch.FloatTensor), reason # type: ignore[attr-defined]
-            assert new_scale.numel() == 1, reason
-            assert new_scale.requires_grad is False, reason
-            self._scale.copy_(new_scale) # type: ignore[union-attr]
-    else:
-        # Consume shared inf/nan data collected from optimizers to update the scale.
-        # If all found_inf tensors are on the same device as self._scale, this operation is asynchronous.
-        found_infs = [
-            found_inf.to(device="cpu", non_blocking=True)
-            for state in self._per_optimizer_states.values()
-            for found_inf in state["found_inf_per_device"].values()
-        ]
-
-        assert len(found_infs) > 0, "No inf checks were recorded prior to update."
-
-        found_inf_combined = found_infs[0]
-        if len(found_infs) > 1:
-            for i in range(1, len(found_infs)):
-                found_inf_combined += found_infs[i]
-
-        to_device = _scale.device
-        _scale = _scale.to("cpu")
-        _growth_tracker = _growth_tracker.to("cpu")
-
-        core._amp_update_scale_(
-            _scale,
-            _growth_tracker,
-            found_inf_combined,
-            self._growth_factor,
-            self._backoff_factor,
-            self._growth_interval,
-        )
-
-        _scale = _scale.to(to_device)
-        _growth_tracker = _growth_tracker.to(to_device)
-    # To prepare for next iteration, clear the data collected from optimizers this iteration.
-    self._per_optimizer_states = defaultdict(_refresh_per_optimizer_state)
-
-def gradscaler_init():
-    torch.xpu.amp.GradScaler = ipex.cpu.autocast._grad_scaler.GradScaler
-    torch.xpu.amp.GradScaler._unscale_grads_ = _unscale_grads_
-    torch.xpu.amp.GradScaler.unscale_ = unscale_
-    torch.xpu.amp.GradScaler.update = update
-    return torch.xpu.amp.GradScaler
\ No newline at end of file
diff --git a/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/onnx_inference.py b/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/onnx_inference.py
deleted file mode 100644
index c78324cbc08414fffcc689f325312de0e51bd6b4..0000000000000000000000000000000000000000
--- a/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/onnx_inference.py
+++ /dev/null
@@ -1,143 +0,0 @@
-import onnxruntime
-import librosa
-import numpy as np
-import soundfile
-
-
-class ContentVec:
-    def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None):
-        print("load model(s) from {}".format(vec_path))
-        if device == "cpu" or device is None:
-            providers = ["CPUExecutionProvider"]
-        elif device == "cuda":
-            providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
-        elif device == "dml":
-            providers = ["DmlExecutionProvider"]
-        else:
-            raise RuntimeError("Unsupported device")
-        self.model = onnxruntime.InferenceSession(vec_path, providers=providers)
-
-    def __call__(self, wav):
-        return self.forward(wav)
-
-    def forward(self, wav):
-        feats = wav
-        if feats.ndim == 2:  # double channels
-            feats = feats.mean(-1)
-        assert feats.ndim == 1, feats.ndim
-        feats = np.expand_dims(np.expand_dims(feats, 0), 0)
-        onnx_input = {self.model.get_inputs()[0].name: feats}
-        logits = self.model.run(None, onnx_input)[0]
-        return logits.transpose(0, 2, 1)
-
-
-def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kwargs):
-    if f0_predictor == "pm":
-        from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor
-
-        f0_predictor_object = PMF0Predictor(
-            hop_length=hop_length, sampling_rate=sampling_rate
-        )
-    elif f0_predictor == "harvest":
-        from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import HarvestF0Predictor
-
-        f0_predictor_object = HarvestF0Predictor(
-            hop_length=hop_length, sampling_rate=sampling_rate
-        )
-    elif f0_predictor == "dio":
-        from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor
-
-        f0_predictor_object = DioF0Predictor(
-            hop_length=hop_length, sampling_rate=sampling_rate
-        )
-    else:
-        raise Exception("Unknown f0 predictor")
-    return f0_predictor_object
-
-
-class OnnxRVC:
-    def __init__(
-        self,
-        model_path,
-        sr=40000,
-        hop_size=512,
-        vec_path="vec-768-layer-12",
-        device="cpu",
-    ):
-        vec_path = f"pretrained/{vec_path}.onnx"
-        self.vec_model = ContentVec(vec_path, device)
-        if device == "cpu" or device is None:
-            providers = ["CPUExecutionProvider"]
-        elif device == "cuda":
-            providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
-        elif device == "dml":
-            providers = ["DmlExecutionProvider"]
-        else:
-            raise RuntimeError("Unsupported device")
-        self.model = onnxruntime.InferenceSession(model_path, providers=providers)
-        self.sampling_rate = sr
-        self.hop_size = hop_size
-
-    def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd):
-        onnx_input = {
-            self.model.get_inputs()[0].name: hubert,
-            self.model.get_inputs()[1].name: hubert_length,
-            self.model.get_inputs()[2].name: pitch,
-            self.model.get_inputs()[3].name: pitchf,
-            self.model.get_inputs()[4].name: ds,
-            self.model.get_inputs()[5].name: rnd,
-        }
-        return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16)
-
-    def inference(
-        self,
-        raw_path,
-        sid,
-        f0_method="dio",
-        f0_up_key=0,
-        pad_time=0.5,
-        cr_threshold=0.02,
-    ):
-        f0_min = 50
-        f0_max = 1100
-        f0_mel_min = 1127 * np.log(1 + f0_min / 700)
-        f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-        f0_predictor = get_f0_predictor(
-            f0_method,
-            hop_length=self.hop_size,
-            sampling_rate=self.sampling_rate,
-            threshold=cr_threshold,
-        )
-        wav, sr = librosa.load(raw_path, sr=self.sampling_rate)
-        org_length = len(wav)
-        if org_length / sr > 50.0:
-            raise RuntimeError("Reached Max Length")
-
-        wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000)
-
-        hubert = self.vec_model(wav16k)
-        hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32)
-        hubert_length = hubert.shape[1]
-
-        pitchf = f0_predictor.compute_f0(wav, hubert_length)
-        pitchf = pitchf * 2 ** (f0_up_key / 12)
-        pitch = pitchf.copy()
-        f0_mel = 1127 * np.log(1 + pitch / 700)
-        f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
-            f0_mel_max - f0_mel_min
-        ) + 1
-        f0_mel[f0_mel <= 1] = 1
-        f0_mel[f0_mel > 255] = 255
-        pitch = np.rint(f0_mel).astype(np.int64)
-
-        pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32)
-        pitch = pitch.reshape(1, len(pitch))
-        ds = np.array([sid]).astype(np.int64)
-
-        rnd = np.random.randn(1, 192, hubert_length).astype(np.float32)
-        hubert_length = np.array([hubert_length]).astype(np.int64)
-
-        out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze()
-        out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant")
-        return out_wav[0:org_length]
diff --git a/spaces/Felix123456/bingo/src/pages/api/kblob.ts b/spaces/Felix123456/bingo/src/pages/api/kblob.ts
deleted file mode 100644
index 0ce7e6063cdc06838e76f1cff1d5982d34ef52de..0000000000000000000000000000000000000000
--- a/spaces/Felix123456/bingo/src/pages/api/kblob.ts
+++ /dev/null
@@ -1,56 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import FormData from 'form-data'
-import { fetch } from '@/lib/isomorphic'
-import { KBlobRequest } from '@/lib/bots/bing/types'
-
-const API_DOMAIN = 'https://bing.vcanbb.top'
-
-export const config = {
-  api: {
-    bodyParser: {
-      sizeLimit: '10mb' // Set desired value here
-    }
-  }
-}
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
-  try {
-    const { knowledgeRequest, imageBase64 } = req.body as KBlobRequest
-
-    const formData = new FormData()
-    formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest))
-    if (imageBase64) {
-      formData.append('imageBase64', imageBase64)
-    }
-
-    const response = await fetch(`${API_DOMAIN}/images/kblob`,
-      {
-        method: 'POST',
-        body: formData.getBuffer(),
-        headers: {
-          "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
-          "sec-ch-ua-mobile": "?0",
-          "sec-ch-ua-platform": "\"Windows\"",
-          "Referer": `${API_DOMAIN}/web/index.html`,
-          "Referrer-Policy": "origin-when-cross-origin",
-          'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
-          ...formData.getHeaders()
-        }
-      }
-    ).then(res => res.text())
-
-    res.writeHead(200, {
-      'Content-Type': 'application/json',
-    })
-    res.end(response || JSON.stringify({ result: { value: 'UploadFailed', message: 'Please change your IP or proxy and try again' } }))
-  } catch (e) {
-    return res.json({
-      result: {
-        value: 'UploadFailed',
-        message: `${e}`
-      }
-    })
-  }
-}
diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/metrics/__init__.py b/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/metrics/__init__.py
deleted file mode 100644
index 19d55cc8321f124c918d78465b053aef67f13a33..0000000000000000000000000000000000000000
--- a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/metrics/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-from copy import deepcopy
-
-from basicsr.utils.registry import METRIC_REGISTRY
-from .psnr_ssim import calculate_psnr, calculate_ssim
-
-__all__ = ['calculate_psnr', 'calculate_ssim']
-
-
-def calculate_metric(data, opt):
-    """Calculate metric from data and options.
-
-    Args:
-        opt (dict): Configuration. It must contain:
-            type (str): Model type.
-    """
-    opt = deepcopy(opt)
-    metric_type = opt.pop('type')
-    metric = METRIC_REGISTRY.get(metric_type)(**data, **opt)
-    return metric
diff --git a/spaces/Ferion/image-matting-app/ppmatting/metrics/metric.py b/spaces/Ferion/image-matting-app/ppmatting/metrics/metric.py
deleted file mode 100644
index 2784dcf20fcffeadc326ad00d9b6a74d07ad58cf..0000000000000000000000000000000000000000
--- a/spaces/Ferion/image-matting-app/ppmatting/metrics/metric.py
+++ /dev/null
@@ -1,278 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Grad and Conn refer to https://github.com/yucornetto/MGMatting/blob/main/code-base/utils/evaluate.py
-# Output of `Grad` is slightly different from the MATLAB version provided by Adobe (less than 0.1%)
-# Output of `Conn` is smaller than the MATLAB version (~5%, maybe MATLAB has a different algorithm)
-# So do not report results calculated by these functions in your paper.
-# Evaluate your inference with the MATLAB file `DIM_evaluation_code/evaluate.m`.
-
-import cv2
-import numpy as np
-from scipy.ndimage import convolve
-from scipy.special import gamma
-from skimage.measure import label
-
-
-class MSE:
-    """
-    Only calculate the unknown region if trimap provided.
-    """
-
-    def __init__(self):
-        self.mse_diffs = 0
-        self.count = 0
-
-    def update(self, pred, gt, trimap=None):
-        """
-        update metric.
-        Args:
-            pred (np.ndarray): The value range is [0., 255.].
-            gt (np.ndarray): The value range is [0, 255].
-            trimap (np.ndarray, optional): The value is in {0, 128, 255}. Default: None.
-        """
-        if trimap is None:
-            trimap = np.ones_like(gt) * 128
-        if not (pred.shape == gt.shape == trimap.shape):
-            raise ValueError(
-                'The shape of `pred`, `gt` and `trimap` should be equal, '
-                'but they are {}, {} and {}'.format(pred.shape, gt.shape,
-                                                    trimap.shape))
-        pred[trimap == 0] = 0
-        pred[trimap == 255] = 255
-
-        mask = trimap == 128
-        pixels = float(mask.sum())
-        pred = pred / 255.
-        gt = gt / 255.
-        diff = (pred - gt) * mask
-        mse_diff = (diff**2).sum() / pixels if pixels > 0 else 0
-
-        self.mse_diffs += mse_diff
-        self.count += 1
-
-        return mse_diff
-
-    def evaluate(self):
-        mse = self.mse_diffs / self.count if self.count > 0 else 0
-        return mse
-
-
-class SAD:
-    """
-    Only calculate the unknown region if trimap provided.
-    """
-
-    def __init__(self):
-        self.sad_diffs = 0
-        self.count = 0
-
-    def update(self, pred, gt, trimap=None):
-        """
-        update metric.
-        Args:
-            pred (np.ndarray): The value range is [0., 255.].
-            gt (np.ndarray): The value range is [0., 255.].
-            trimap (np.ndarray, optional): The value is in {0, 128, 255}. Default: None.
-        """
-        if trimap is None:
-            trimap = np.ones_like(gt) * 128
-        if not (pred.shape == gt.shape == trimap.shape):
-            raise ValueError(
-                'The shape of `pred`, `gt` and `trimap` should be equal, '
-                'but they are {}, {} and {}'.format(pred.shape, gt.shape,
-                                                    trimap.shape))
-        pred[trimap == 0] = 0
-        pred[trimap == 255] = 255
-
-        mask = trimap == 128
-        pred = pred / 255.
-        gt = gt / 255.
-        diff = (pred - gt) * mask
-        sad_diff = (np.abs(diff)).sum()
-
-        sad_diff /= 1000
-        self.sad_diffs += sad_diff
-        self.count += 1
-
-        return sad_diff
-
-    def evaluate(self):
-        sad = self.sad_diffs / self.count if self.count > 0 else 0
-        return sad
-
-
-class Grad:
-    """
-    Only calculate the unknown region if trimap provided.
-    Refer to: https://github.com/open-mlab/mmediting/blob/master/mmedit/core/evaluation/metrics.py
-    """
-
-    def __init__(self):
-        self.grad_diffs = 0
-        self.count = 0
-
-    def gaussian(self, x, sigma):
-        return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
-
-    def dgaussian(self, x, sigma):
-        return -x * self.gaussian(x, sigma) / sigma**2
-
-    def gauss_filter(self, sigma, epsilon=1e-2):
-        half_size = np.ceil(
-            sigma * np.sqrt(-2 * np.log(np.sqrt(2 * np.pi) * sigma * epsilon)))
-        size = int(2 * half_size + 1)
-
-        # create filter in x axis
-        filter_x = np.zeros((size, size))
-        for i in range(size):
-            for j in range(size):
-                filter_x[i, j] = self.gaussian(
-                    i - half_size, sigma) * self.dgaussian(j - half_size, sigma)
-
-        # normalize filter
-        norm = np.sqrt((filter_x**2).sum())
-        filter_x = filter_x / norm
-        filter_y = np.transpose(filter_x)
-
-        return filter_x, filter_y
-
-    def gauss_gradient(self, img, sigma):
-        filter_x, filter_y = self.gauss_filter(sigma)
-        img_filtered_x = cv2.filter2D(
-            img, -1, filter_x, borderType=cv2.BORDER_REPLICATE)
-        img_filtered_y = cv2.filter2D(
-            img, -1, filter_y, borderType=cv2.BORDER_REPLICATE)
-        return np.sqrt(img_filtered_x**2 + img_filtered_y**2)
-
-    def update(self, pred, gt, trimap=None, sigma=1.4):
-        """
-        update metric.
-        Args:
-            pred (np.ndarray): The value range is [0., 1.].
-            gt (np.ndarray): The value range is [0, 255].
-            trimap (np.ndarray, optional): The value is in {0, 128, 255}. Default: None.
-            sigma (float, optional): Standard deviation of the gaussian kernel. Default: 1.4.
-        """
-        if trimap is None:
-            trimap = np.ones_like(gt) * 128
-        if not (pred.shape == gt.shape == trimap.shape):
-            raise ValueError(
-                'The shape of `pred`, `gt` and `trimap` should be equal, '
-                'but they are {}, {} and {}'.format(pred.shape, gt.shape,
-                                                    trimap.shape))
-        pred[trimap == 0] = 0
-        pred[trimap == 255] = 255
-
-        gt = gt.squeeze()
-        pred = pred.squeeze()
-        gt = gt.astype(np.float64)
-        pred = pred.astype(np.float64)
-        gt_normed = np.zeros_like(gt)
-        pred_normed = np.zeros_like(pred)
-        cv2.normalize(gt, gt_normed, 1., 0., cv2.NORM_MINMAX)
-        cv2.normalize(pred, pred_normed, 1., 0., cv2.NORM_MINMAX)
-
-        gt_grad = self.gauss_gradient(gt_normed, sigma).astype(np.float32)
-        pred_grad = self.gauss_gradient(pred_normed, sigma).astype(np.float32)
-
-        grad_diff = ((gt_grad - pred_grad)**2 * (trimap == 128)).sum()
-
-        grad_diff /= 1000
-        self.grad_diffs += grad_diff
-        self.count += 1
-
-        return grad_diff
-
-    def evaluate(self):
-        grad = self.grad_diffs / self.count if self.count > 0 else 0
-        return grad
-
-
-class Conn:
-    """
-    Only calculate the unknown region if trimap provided.
-    Refer to: https://github.com/open-mlab/mmediting/blob/master/mmedit/core/evaluation/metrics.py
-    """
-
-    def __init__(self):
-        self.conn_diffs = 0
-        self.count = 0
-
-    def update(self, pred, gt, trimap=None, step=0.1):
-        """
-        update metric.
-        Args:
-            pred (np.ndarray): The value range is [0., 1.].
-            gt (np.ndarray): The value range is [0, 255].
-            trimap (np.ndarray, optional): The value is in {0, 128, 255}. Default: None.
-            step (float, optional): Step of threshold when computing intersection between
-                `gt` and `pred`. Default: 0.1.
-        """
-        if trimap is None:
-            trimap = np.ones_like(gt) * 128
-        if not (pred.shape == gt.shape == trimap.shape):
-            raise ValueError(
-                'The shape of `pred`, `gt` and `trimap` should be equal, '
-                'but they are {}, {} and {}'.format(pred.shape, gt.shape,
-                                                    trimap.shape))
-        pred[trimap == 0] = 0
-        pred[trimap == 255] = 255
-
-        gt = gt.squeeze()
-        pred = pred.squeeze()
-        gt = gt.astype(np.float32) / 255
-        pred = pred.astype(np.float32) / 255
-
-        thresh_steps = np.arange(0, 1 + step, step)
-        round_down_map = -np.ones_like(gt)
-        for i in range(1, len(thresh_steps)):
-            gt_thresh = gt >= thresh_steps[i]
-            pred_thresh = pred >= thresh_steps[i]
-            intersection = (gt_thresh & pred_thresh).astype(np.uint8)
-
-            # connected components
-            _, output, stats, _ = cv2.connectedComponentsWithStats(
-                intersection, connectivity=4)
-            # start from 1 in dim 0 to exclude background
-            size = stats[1:, -1]
-
-            # largest connected component of the intersection
-            omega = np.zeros_like(gt)
-            if len(size) != 0:
-                max_id = np.argmax(size)
-                # plus one to include background
-                omega[output == max_id + 1] = 1
-
-            mask = (round_down_map == -1) & (omega == 0)
-            round_down_map[mask] = thresh_steps[i - 1]
-        round_down_map[round_down_map == -1] = 1
-
-        gt_diff = gt - round_down_map
-        pred_diff = pred - round_down_map
-        # only calculate difference larger than or equal to 0.15
-        gt_phi = 1 - gt_diff * (gt_diff >= 0.15)
-        pred_phi = 1 - pred_diff * (pred_diff >= 0.15)
-
-        conn_diff = np.sum(np.abs(gt_phi - pred_phi) * (trimap == 128))
-
-        conn_diff /= 1000
-        self.conn_diffs += conn_diff
-        self.count += 1
-
-        return conn_diff
-
-    def evaluate(self):
-        conn = self.conn_diffs / self.count if self.count > 0 else 0
-        return conn
diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/README.md b/spaces/FrankZxShen/so-vits-svc-models-ba/README.md
deleted file mode 100644
index ab86ce926cec6eb11dcdf0adadd0d4dde43edb6c..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-ba/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: So Vits Svc Models Ba
-emoji: 🦀
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer/audio.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer/audio.py
deleted file mode 100644
index 83dc96c63c962bc8e13c446d05e27c009fb3239f..0000000000000000000000000000000000000000
--- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer/audio.py
+++ /dev/null
@@ -1,206 +0,0 @@
-import librosa
-import librosa.filters
-import numpy as np
-from scipy import signal
-from scipy.io import wavfile
-import soundfile as sf
-
-
-def load_wav(path, sr):
-    return librosa.core.load(path, sr=sr)[0]
-
-def save_wav(wav, path, sr):
-    wav *= 32767 / max(0.01, np.max(np.abs(wav)))
-    # proposed by @dsmiller
-    wavfile.write(path, sr, wav.astype(np.int16))
-
-def save_wavenet_wav(wav, path, sr):
-    sf.write(path, wav.astype(np.float32), sr)
-
-def preemphasis(wav, k, preemphasize=True):
-    if preemphasize:
-        return signal.lfilter([1, -k], [1], wav)
-    return wav
-
-def inv_preemphasis(wav, k, inv_preemphasize=True):
-    if inv_preemphasize:
-        return signal.lfilter([1], [1, -k], wav)
-    return wav
-
-# From https://github.com/r9y9/wavenet_vocoder/blob/master/audio.py
-def start_and_end_indices(quantized, silence_threshold=2):
-    for start in range(quantized.size):
-        if abs(quantized[start] - 127) > silence_threshold:
-            break
-    for end in range(quantized.size - 1, 1, -1):
-        if abs(quantized[end] - 127) > silence_threshold:
-            break
-
-    assert abs(quantized[start] - 127) > silence_threshold
-    assert abs(quantized[end] - 127) > silence_threshold
-
-    return start, end
-
-def get_hop_size(hparams):
-    hop_size = hparams.hop_size
-    if hop_size is None:
-        assert hparams.frame_shift_ms is not None
-        hop_size = int(hparams.frame_shift_ms / 1000 * hparams.sample_rate)
-    return hop_size
-
-def linearspectrogram(wav, hparams):
-    D = _stft(preemphasis(wav, hparams.preemphasis, hparams.preemphasize), hparams)
-    S = _amp_to_db(np.abs(D), hparams) - hparams.ref_level_db
-
-    if hparams.signal_normalization:
-        return _normalize(S, hparams)
-    return S
-
-def melspectrogram(wav, hparams):
-    D = _stft(preemphasis(wav, hparams.preemphasis, hparams.preemphasize), hparams)
-    S = _amp_to_db(_linear_to_mel(np.abs(D), hparams), hparams) - hparams.ref_level_db
-
-    if hparams.signal_normalization:
-        return _normalize(S, hparams)
-    return S
-
-def inv_linear_spectrogram(linear_spectrogram, hparams):
-    """Converts linear spectrogram to waveform using librosa"""
-    if hparams.signal_normalization:
-        D = _denormalize(linear_spectrogram, hparams)
-    else:
-        D = linear_spectrogram
-
-    S = _db_to_amp(D + hparams.ref_level_db) # Convert back to linear
-
-    if hparams.use_lws:
-        processor = _lws_processor(hparams)
-        D = processor.run_lws(S.astype(np.float64).T ** hparams.power)
-        y = processor.istft(D).astype(np.float32)
-        return inv_preemphasis(y, hparams.preemphasis, hparams.preemphasize)
-    else:
-        return inv_preemphasis(_griffin_lim(S ** hparams.power, hparams), hparams.preemphasis, hparams.preemphasize)
-
-def inv_mel_spectrogram(mel_spectrogram, hparams):
-    """Converts mel spectrogram to waveform using librosa"""
-    if hparams.signal_normalization:
-        D = _denormalize(mel_spectrogram, hparams)
-    else:
-        D = mel_spectrogram
-
-    S = _mel_to_linear(_db_to_amp(D + hparams.ref_level_db), hparams) # Convert back to linear
-
-    if hparams.use_lws:
-        processor = _lws_processor(hparams)
-        D = processor.run_lws(S.astype(np.float64).T ** hparams.power)
-        y = processor.istft(D).astype(np.float32)
-        return inv_preemphasis(y, hparams.preemphasis, hparams.preemphasize)
-    else:
-        return inv_preemphasis(_griffin_lim(S ** hparams.power, hparams), hparams.preemphasis, hparams.preemphasize)
-
-def _lws_processor(hparams):
-    import lws
-    return lws.lws(hparams.n_fft, get_hop_size(hparams), fftsize=hparams.win_size, mode="speech")
-
-def _griffin_lim(S, hparams):
-    """librosa implementation of Griffin-Lim
-    Based on https://github.com/librosa/librosa/issues/434
-    """
-    angles = np.exp(2j * np.pi * np.random.rand(*S.shape))
-    S_complex = np.abs(S).astype(complex)
-    y = _istft(S_complex * angles, hparams)
-    for i in range(hparams.griffin_lim_iters):
-        angles = np.exp(1j * np.angle(_stft(y, hparams)))
-        y = _istft(S_complex * angles, hparams)
-    return y
-
-def _stft(y, hparams):
-    if hparams.use_lws:
-        return _lws_processor(hparams).stft(y).T
-    else:
-        return librosa.stft(y=y, n_fft=hparams.n_fft, hop_length=get_hop_size(hparams), win_length=hparams.win_size)
-
-def _istft(y, hparams):
-    return librosa.istft(y, hop_length=get_hop_size(hparams), win_length=hparams.win_size)
-
-##########################################################
-# These are only correct when using lws!!! (This was messing with Wavenet quality for a long time!)
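-# num_frames below counts spectrogram frames assuming the signal is padded by
-# (fsize - fshift) on both sides; pad_lr returns (left, right) padding so that
-# the final frame exactly covers the padded signal, matching lws' framing.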
-def num_frames(length, fsize, fshift): - """Compute number of time frames of spectrogram - """ - pad = (fsize - fshift) - if length % fshift == 0: - M = (length + pad * 2 - fsize) // fshift + 1 - else: - M = (length + pad * 2 - fsize) // fshift + 2 - return M - - -def pad_lr(x, fsize, fshift): - """Compute left and right padding - """ - M = num_frames(len(x), fsize, fshift) - pad = (fsize - fshift) - T = len(x) + 2 * pad - r = (M - 1) * fshift + fsize - T - return pad, pad + r -########################################################## -#Librosa correct padding -def librosa_pad_lr(x, fsize, fshift): - return 0, (x.shape[0] // fshift + 1) * fshift - x.shape[0] - -# Conversions -_mel_basis = None -_inv_mel_basis = None - -def _linear_to_mel(spectogram, hparams): - global _mel_basis - if _mel_basis is None: - _mel_basis = _build_mel_basis(hparams) - return np.dot(_mel_basis, spectogram) - -def _mel_to_linear(mel_spectrogram, hparams): - global _inv_mel_basis - if _inv_mel_basis is None: - _inv_mel_basis = np.linalg.pinv(_build_mel_basis(hparams)) - return np.maximum(1e-10, np.dot(_inv_mel_basis, mel_spectrogram)) - -def _build_mel_basis(hparams): - assert hparams.fmax <= hparams.sample_rate // 2 - return librosa.filters.mel(hparams.sample_rate, hparams.n_fft, n_mels=hparams.num_mels, - fmin=hparams.fmin, fmax=hparams.fmax) - -def _amp_to_db(x, hparams): - min_level = np.exp(hparams.min_level_db / 20 * np.log(10)) - return 20 * np.log10(np.maximum(min_level, x)) - -def _db_to_amp(x): - return np.power(10.0, (x) * 0.05) - -def _normalize(S, hparams): - if hparams.allow_clipping_in_normalization: - if hparams.symmetric_mels: - return np.clip((2 * hparams.max_abs_value) * ((S - hparams.min_level_db) / (-hparams.min_level_db)) - hparams.max_abs_value, - -hparams.max_abs_value, hparams.max_abs_value) - else: - return np.clip(hparams.max_abs_value * ((S - hparams.min_level_db) / (-hparams.min_level_db)), 0, hparams.max_abs_value) - - assert S.max() <= 0 and S.min() - hparams.min_level_db >= 0 - if hparams.symmetric_mels: - return (2 * hparams.max_abs_value) * ((S - hparams.min_level_db) / (-hparams.min_level_db)) - hparams.max_abs_value - else: - return hparams.max_abs_value * ((S - hparams.min_level_db) / (-hparams.min_level_db)) - -def _denormalize(D, hparams): - if hparams.allow_clipping_in_normalization: - if hparams.symmetric_mels: - return (((np.clip(D, -hparams.max_abs_value, - hparams.max_abs_value) + hparams.max_abs_value) * -hparams.min_level_db / (2 * hparams.max_abs_value)) - + hparams.min_level_db) - else: - return ((np.clip(D, 0, hparams.max_abs_value) * -hparams.min_level_db / hparams.max_abs_value) + hparams.min_level_db) - - if hparams.symmetric_mels: - return (((D + hparams.max_abs_value) * -hparams.min_level_db / (2 * hparams.max_abs_value)) + hparams.min_level_db) - else: - return ((D * -hparams.min_level_db / hparams.max_abs_value) + hparams.min_level_db) diff --git a/spaces/Gradio-Blocks/HairCLIP/style.css b/spaces/Gradio-Blocks/HairCLIP/style.css deleted file mode 100644 index a7c4024c1d7f79280601fb79b9ee3b34102944da..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/HairCLIP/style.css +++ /dev/null @@ -1,8 +0,0 @@ -h1 { - text-align: center; -} - -img#teaser { - max-width: 1000px; - max-height: 600px; -} diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_80k_pascal_context_59.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_80k_pascal_context_59.py deleted file mode 100644 
index babd88db4eb5d96828adf8db2467b4f6fd8b7cf5..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_80k_pascal_context_59.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = './fcn_hr18_480x480_80k_pascal_context_59.py'
-model = dict(
-    pretrained='open-mmlab://msra/hrnetv2_w18_small',
-    backbone=dict(
-        extra=dict(
-            stage1=dict(num_blocks=(2, )),
-            stage2=dict(num_blocks=(2, 2)),
-            stage3=dict(num_modules=3, num_blocks=(2, 2, 2)),
-            stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2)))))
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/lib/net_tools.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/lib/net_tools.py
deleted file mode 100644
index f8725358980012b5c4590f02151274d55b8b609a..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/lib/net_tools.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import importlib
-from typing import Dict, Optional
-import torch
-import os
-from collections import OrderedDict
-from torch import nn
-
-def get_func(func_name: str):
-    """Helper to return a function object by name. func_name must identify a
-    function in this module or the path to a function relative to the base
-    'modeling' module.
-    """
-    if func_name == '':
-        return None
-    try:
-        parts = func_name.split('.')
-        # Refers to a function in this module
-        if len(parts) == 1:
-            return globals()[parts[0]]
-        # Otherwise, assume we're referencing a module under modeling
-        module_name = 'lib.' + '.'.join(parts[:-1])
-        module = importlib.import_module(module_name)
-        return getattr(module, parts[-1])
-    except Exception:
-        print('Failed to find function: %s' % func_name)
-        raise
-
-def load_ckpt(ckpt_path: str, depth_model: nn.Module) -> None:
-    """
-    Load checkpoint.
- """ - if os.path.isfile(ckpt_path): - print("Loading checkpoint %s" % ckpt_path) - checkpoint = torch.load(ckpt_path, map_location='cpu') - depth_model.load_state_dict( - strip_prefix_if_present(checkpoint['depth_model'], "module."), - strict=True - ) - del checkpoint - torch.cuda.empty_cache() - else: - raise Exception(f'Checkpoint path not found {ckpt_path}') - -def strip_prefix_if_present(state_dict: Dict[str, nn.Module], prefix: str): - keys = sorted(state_dict.keys()) - if not all(key.startswith(prefix) for key in keys): - return state_dict - stripped_state_dict = OrderedDict() - for key, value in state_dict.items(): - stripped_state_dict[key.replace(prefix, "")] = value - return stripped_state_dict \ No newline at end of file diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/ablation_study.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/ablation_study.py deleted file mode 100644 index 03665a774724d6f8ad538a044dfe1cd2a52bec30..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/ablation_study.py +++ /dev/null @@ -1,12 +0,0 @@ -class ABLATION_STUDY: - INPUTS_OUTPUTS = 'ablation_inputs_outputs' - DATA_AUGMENTATION = 'ablation_data_augmentation' - PRETRAINED_BACKBONE = 'ablation_pretrained_backbone' - GPU_TYPE = 'ablation_gpu_type' - -ABLATION_STUDY_LIST = [ - ABLATION_STUDY.INPUTS_OUTPUTS, - ABLATION_STUDY.DATA_AUGMENTATION, - ABLATION_STUDY.PRETRAINED_BACKBONE, - ABLATION_STUDY.GPU_TYPE, -] diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/serverstate.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/serverstate.py deleted file mode 100644 index e7ddc790c3dfc881f8aa4322d10d90e4e4fc09f0..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/serverstate.py +++ /dev/null @@ -1,526 +0,0 @@ -import os, torch, numpy, base64, json, re, threading, random -from torch.utils.data import TensorDataset, DataLoader -from collections import defaultdict -from netdissect.easydict import EasyDict -from netdissect.modelconfig import create_instrumented_model -from netdissect.runningstats import RunningQuantile -from netdissect.dissection import safe_dir_name -from netdissect.zdataset import z_sample_for_model -from PIL import Image -from io import BytesIO - -class DissectionProject: - ''' - DissectionProject understand how to drive a GanTester within a - dissection project directory structure: it caches data in files, - creates image files, and translates data between plain python data - types and the pytorch-specific tensors required by GanTester. 
- ''' - def __init__(self, config, project_dir, path_url, public_host): - print('config done', project_dir) - self.use_cuda = torch.cuda.is_available() - self.dissect = config - self.project_dir = project_dir - self.path_url = path_url - self.public_host = public_host - self.cachedir = os.path.join(self.project_dir, 'cache') - self.tester = GanTester( - config.settings, dissectdir=project_dir, - device=torch.device('cuda') if self.use_cuda - else torch.device('cpu')) - self.stdz = [] - - def get_zs(self, size): - if size <= len(self.stdz): - return self.stdz[:size].tolist() - z_tensor = self.tester.standard_z_sample(size) - numpy_z = z_tensor.cpu().numpy() - self.stdz = numpy_z - return self.stdz.tolist() - - def get_z(self, id): - if id < len(self.stdz): - return self.stdz[id] - return self.get_zs((id + 1) * 2)[id] - - def get_zs_for_ids(self, ids): - max_id = max(ids) - if max_id >= len(self.stdz): - self.get_z(max_id) - return self.stdz[ids] - - def get_layers(self): - result = [] - layer_shapes = self.tester.layer_shapes() - for layer in self.tester.layers: - shape = layer_shapes[layer] - result.append(dict( - layer=layer, - channels=shape[1], - shape=[shape[2], shape[3]])) - return result - - def get_units(self, layer): - try: - dlayer = [dl for dl in self.dissect['layers'] - if dl['layer'] == layer][0] - except: - return None - - dunits = dlayer['units'] - result = [dict(unit=unit_num, - img='/%s/%s/s-image/%d-top.jpg' % - (self.path_url, layer, unit_num), - label=unit['iou_label']) - for unit_num, unit in enumerate(dunits)] - return result - - def get_rankings(self, layer): - try: - dlayer = [dl for dl in self.dissect['layers'] - if dl['layer'] == layer][0] - except: - return None - result = [dict(name=ranking['name'], - metric=ranking.get('metric', None), - scores=ranking['score']) - for ranking in dlayer['rankings']] - return result - - def get_levels(self, layer, quantiles): - levels = self.tester.levels( - layer, torch.from_numpy(numpy.array(quantiles))) - return levels.cpu().numpy().tolist() - - def generate_images(self, zs, ids, interventions, return_urls=False): - if ids is not None: - assert zs is None - zs = self.get_zs_for_ids(ids) - if not interventions: - # Do file caching when ids are given (and no ablations). - imgdir = os.path.join(self.cachedir, 'img', 'id') - os.makedirs(imgdir, exist_ok=True) - exist = set(os.listdir(imgdir)) - unfinished = [('%d.jpg' % id) not in exist for id in ids] - needed_z_tensor = torch.tensor(zs[unfinished]).float().to( - self.tester.device) - needed_ids = numpy.array(ids)[unfinished] - # Generate image files for just the needed images. - if len(needed_z_tensor): - imgs = self.tester.generate_images(needed_z_tensor - ).cpu().numpy() - for i, img in zip(needed_ids, imgs): - Image.fromarray(img.transpose(1, 2, 0)).save( - os.path.join(imgdir, '%d.jpg' % i), 'jpeg', - quality=99, optimize=True, progressive=True) - # Assemble a response. 
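-                # Each entry pairs a requested id with its cached image URL,
-                # e.g. (illustrative): {'id': 7, 'd': '/<path_url>/cache/img/id/7.jpg'}.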
- imgurls = ['/%s/cache/img/id/%d.jpg' - % (self.path_url, i) for i in ids] - return [dict(id=i, d=d) for i, d in zip(ids, imgurls)] - # No file caching when ids are not given (or ablations are applied) - z_tensor = torch.tensor(zs).float().to(self.tester.device) - imgs = self.tester.generate_images(z_tensor, - intervention=decode_intervention_array(interventions, - self.tester.layer_shapes()), - ).cpu().numpy() - numpy_z = z_tensor.cpu().numpy() - if return_urls: - randdir = '%03d' % random.randrange(1000) - imgdir = os.path.join(self.cachedir, 'img', 'uniq', randdir) - os.makedirs(imgdir, exist_ok=True) - startind = random.randrange(100000) - imgurls = [] - for i, img in enumerate(imgs): - filename = '%d.jpg' % (i + startind) - Image.fromarray(img.transpose(1, 2, 0)).save( - os.path.join(imgdir, filename), 'jpeg', - quality=99, optimize=True, progressive=True) - image_url_path = ('/%s/cache/img/uniq/%s/%s' - % (self.path_url, randdir, filename)) - imgurls.append(image_url_path) - tweet_filename = 'tweet-%d.html' % (i + startind) - tweet_url_path = ('/%s/cache/img/uniq/%s/%s' - % (self.path_url, randdir, tweet_filename)) - with open(os.path.join(imgdir, tweet_filename), 'w') as f: - f.write(twitter_card(image_url_path, tweet_url_path, - self.public_host)) - return [dict(d=d) for d in imgurls] - imgurls = [img2base64(img.transpose(1, 2, 0)) for img in imgs] - return [dict(d=d) for d in imgurls] - - def get_features(self, ids, masks, layers, interventions): - zs = self.get_zs_for_ids(ids) - z_tensor = torch.tensor(zs).float().to(self.tester.device) - t_masks = torch.stack( - [torch.from_numpy(mask_to_numpy(mask)) for mask in masks] - )[:,None,:,:].to(self.tester.device) - t_features = self.tester.feature_stats(z_tensor, t_masks, - decode_intervention_array(interventions, - self.tester.layer_shapes()), layers) - # Convert torch arrays to plain python lists before returning. - return { layer: { key: value.cpu().numpy().tolist() - for key, value in feature.items() } - for layer, feature in t_features.items() } - - def get_featuremaps(self, ids, layers, interventions): - zs = self.get_zs_for_ids(ids) - z_tensor = torch.tensor(zs).float().to(self.tester.device) - # Quantilized features are returned. - q_features = self.tester.feature_maps(z_tensor, - decode_intervention_array(interventions, - self.tester.layer_shapes()), layers) - # Scale them 0-255 and return them. - # TODO: turn them into pngs for returning. - return { layer: [ - value.clamp(0, 1).mul(255).byte().cpu().numpy().tolist() - for value in valuelist ] - for layer, valuelist in q_features.items() - if (not layers) or (layer in layers) } - - def get_recipes(self): - recipedir = os.path.join(self.project_dir, 'recipe') - if not os.path.isdir(recipedir): - return [] - result = [] - for filename in os.listdir(recipedir): - with open(os.path.join(recipedir, filename)) as f: - result.append(json.load(f)) - return result - - - - -class GanTester: - ''' - GanTester holds on to a specific model to test. - - (1) loads and instantiates the GAN; - (2) instruments it at every layer so that units can be ablated - (3) precomputes z dimensionality, and output image dimensions. - ''' - def __init__(self, args, dissectdir=None, device=None): - self.cachedir = os.path.join(dissectdir, 'cache') - self.device = device if device is not None else torch.device('cpu') - self.dissectdir = dissectdir - self.modellock = threading.Lock() - - # Load the generator from the pth file. 
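-        # Note: setting edit=True asks create_instrumented_model to wrap each
-        # retained layer so individual units can later be ablated or replaced
-        # (see apply_intervention below); an interpretation of this file, not
-        # a documented contract.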
- args_copy = EasyDict(args) - args_copy.edit = True - model = create_instrumented_model(args_copy) - model.eval() - self.model = model - - # Get the set of layers of interest. - # Default: all shallow children except last. - self.layers = sorted(model.retained_features().keys()) - - # Move it to CUDA if wanted. - model.to(device) - - self.quantiles = { - layer: load_quantile_if_present(os.path.join(self.dissectdir, - safe_dir_name(layer)), 'quantiles.npz', - device=torch.device('cpu')) - for layer in self.layers } - - def layer_shapes(self): - return self.model.feature_shape - - def standard_z_sample(self, size=100, seed=1, device=None): - ''' - Generate a standard set of random Z as a (size, z_dimension) tensor. - With the same random seed, it always returns the same z (e.g., - the first one is always the same regardless of the size.) - ''' - result = z_sample_for_model(self.model, size) - if device is not None: - result = result.to(device) - return result - - def reset_intervention(self): - self.model.remove_edits() - - def apply_intervention(self, intervention): - ''' - Applies an ablation recipe of the form [(layer, unit, alpha)...]. - ''' - self.reset_intervention() - if not intervention: - return - for layer, (a, v) in intervention.items(): - self.model.edit_layer(layer, ablation=a, replacement=v) - - def generate_images(self, z_batch, intervention=None): - ''' - Makes some images. - ''' - with torch.no_grad(), self.modellock: - batch_size = 10 - self.apply_intervention(intervention) - test_loader = DataLoader(TensorDataset(z_batch[:,:,None,None]), - batch_size=batch_size, - pin_memory=('cuda' == self.device.type - and z_batch.device.type == 'cpu')) - result_img = torch.zeros( - *((len(z_batch), 3) + self.model.output_shape[2:]), - dtype=torch.uint8, device=self.device) - for batch_num, [batch_z,] in enumerate(test_loader): - batch_z = batch_z.to(self.device) - out = self.model(batch_z) - result_img[batch_num*batch_size: - batch_num*batch_size+len(batch_z)] = ( - (((out + 1) / 2) * 255).clamp(0, 255).byte()) - return result_img - - def get_layers(self): - return self.layers - - def feature_stats(self, z_batch, - masks=None, intervention=None, layers=None): - feature_stat = defaultdict(dict) - with torch.no_grad(), self.modellock: - batch_size = 10 - self.apply_intervention(intervention) - if masks is None: - masks = torch.ones(z_batch.size(0), 1, 1, 1, - device=z_batch.device, dtype=z_batch.dtype) - else: - assert masks.shape[0] == z_batch.shape[0] - assert masks.shape[1] == 1 - test_loader = DataLoader( - TensorDataset(z_batch[:,:,None,None], masks), - batch_size=batch_size, - pin_memory=('cuda' == self.device.type - and z_batch.device.type == 'cpu')) - processed = 0 - for batch_num, [batch_z, batch_m] in enumerate(test_loader): - batch_z, batch_m = [ - d.to(self.device) for d in [batch_z, batch_m]] - # Run model but disregard output - self.model(batch_z) - processing = batch_z.shape[0] - for layer, feature in self.model.retained_features().items(): - if layers is not None: - if layer not in layers: - continue - # Compute max features touching mask - resized_max = torch.nn.functional.adaptive_max_pool2d( - batch_m, - (feature.shape[2], feature.shape[3])) - max_feature = (feature * resized_max).view( - feature.shape[0], feature.shape[1], -1 - ).max(2)[0].max(0)[0] - if 'max' not in feature_stat[layer]: - feature_stat[layer]['max'] = max_feature - else: - torch.max(feature_stat[layer]['max'], max_feature, - out=feature_stat[layer]['max']) - # Compute mean features weighted by overlap 
with mask
-                    resized_mean = torch.nn.functional.adaptive_avg_pool2d(
-                        batch_m,
-                        (feature.shape[2], feature.shape[3]))
-                    mean_feature = (feature * resized_mean).view(
-                        feature.shape[0], feature.shape[1], -1
-                    ).sum(2).sum(0) / (resized_mean.sum() + 1e-15)
-                    if 'mean' not in feature_stat[layer]:
-                        feature_stat[layer]['mean'] = mean_feature
-                    else:
-                        feature_stat[layer]['mean'] = (
-                            processed * feature_stat[layer]['mean']
-                            + processing * mean_feature) / (
-                            processed + processing)
-                processed += processing
-            # After summaries are done, also compute quantile stats
-            for layer, stats in feature_stat.items():
-                if self.quantiles.get(layer, None) is not None:
-                    for statname in ['max', 'mean']:
-                        stats['%s_quantile' % statname] = (
-                            self.quantiles[layer].normalize(stats[statname]))
-        return feature_stat
-
-    def levels(self, layer, quantiles):
-        return self.quantiles[layer].quantiles(quantiles)
-
-    def feature_maps(self, z_batch, intervention=None, layers=None,
-                     quantiles=True):
-        feature_map = defaultdict(list)
-        with torch.no_grad(), self.modellock:
-            batch_size = 10
-            self.apply_intervention(intervention)
-            test_loader = DataLoader(
-                TensorDataset(z_batch[:,:,None,None]),
-                batch_size=batch_size,
-                pin_memory=('cuda' == self.device.type
-                            and z_batch.device.type == 'cpu'))
-            processed = 0
-            for batch_num, [batch_z] in enumerate(test_loader):
-                batch_z = batch_z.to(self.device)
-                # Run model but disregard output
-                self.model(batch_z)
-                processing = batch_z.shape[0]
-                for layer, feature in self.model.retained_features().items():
-                    for single_featuremap in feature:
-                        if quantiles:
-                            feature_map[layer].append(self.quantiles[layer]
-                                .normalize(single_featuremap))
-                        else:
-                            feature_map[layer].append(single_featuremap)
-        return feature_map
-
-def load_quantile_if_present(outdir, filename, device):
-    filepath = os.path.join(outdir, filename)
-    if os.path.isfile(filepath):
-        data = numpy.load(filepath)
-        result = RunningQuantile(state=data)
-        result.to_(device)
-        return result
-    return None
-
-def mask_to_numpy(mask_record):
-    # Detect a png image mask.
-    bitstring = mask_record['bitstring']
-    bitnumpy = None
-    default_shape = (256, 256)
-    if 'image/png;base64,' in bitstring:
-        bitnumpy = base642img(bitstring)
-        default_shape = bitnumpy.shape[:2]
-    # Set up results
-    shape = mask_record.get('shape', None)
-    if not shape: # None or empty []
-        shape = default_shape
-    result = numpy.zeros(shape=shape, dtype=numpy.float32)
-    bitbounds = mask_record.get('bitbounds', None)
-    if not bitbounds: # None or empty []
-        bitbounds = ([0] * len(result.shape)) + list(result.shape)
-    start = bitbounds[:len(result.shape)]
-    end = bitbounds[len(result.shape):]
-    if bitnumpy is not None:
-        if bitnumpy.shape[2] == 4:
-            # Mask is any nontransparent bits in the alpha channel if present
-            result[start[0]:end[0], start[1]:end[1]] = (bitnumpy[:,:,3] > 0)
-        else:
-            # Or any nonwhite pixels in the red channel if no alpha.
-            result[start[0]:end[0], start[1]:end[1]] = (bitnumpy[:,:,0] < 255)
-        return result
-    else:
-        # Or bitstring can be just ones and zeros.
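-        # Illustrative encoding: a 2x2 mask with bitbounds [0, 0, 2, 2] could
-        # arrive as bitstring '0110'; the loop below walks the region in
-        # row-major order, bumping the innermost index first.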
- indexes = start.copy() - bitindex = 0 - while True: - result[tuple(indexes)] = (bitstring[bitindex] != '0') - for ii in range(len(indexes) - 1, -1, -1): - if indexes[ii] < end[ii] - 1: - break - indexes[ii] = start[ii] - else: - assert (bitindex + 1) == len(bitstring) - return result - indexes[ii] += 1 - bitindex += 1 - -def decode_intervention_array(interventions, layer_shapes): - result = {} - for channels in [decode_intervention(intervention, layer_shapes) - for intervention in (interventions or [])]: - for layer, channel in channels.items(): - if layer not in result: - result[layer] = channel - continue - accum = result[layer] - newalpha = 1 - (1 - channel[:1]) * (1 - accum[:1]) - newvalue = (accum[1:] * accum[:1] * (1 - channel[:1]) + - channel[1:] * channel[:1]) / (newalpha + 1e-40) - accum[:1] = newalpha - accum[1:] = newvalue - return result - -def decode_intervention(intervention, layer_shapes): - # Every plane of an intervention is a solid choice of activation - # over a set of channels, with a mask applied to alpha-blended channels - # (when the mask resolution is different from the feature map, it can - # be either a max-pooled or average-pooled to the proper resolution). - # This can be reduced to a single alpha-blended featuremap. - if intervention is None: - return None - mask = intervention.get('mask', None) - if mask: - mask = torch.from_numpy(mask_to_numpy(mask)) - maskpooling = intervention.get('maskpooling', 'max') - channels = {} # layer -> ([alpha, val], c) - for arec in intervention.get('ablations', []): - unit = arec['unit'] - layer = arec['layer'] - alpha = arec.get('alpha', 1.0) - if alpha is None: - alpha = 1.0 - value = arec.get('value', 0.0) - if value is None: - value = 0.0 - if alpha != 0.0 or value != 0.0: - if layer not in channels: - channels[layer] = torch.zeros(2, *layer_shapes[layer][1:]) - channels[layer][0, unit] = alpha - channels[layer][1, unit] = value - if mask is not None: - for layer in channels: - layer_shape = layer_shapes[layer][2:] - if maskpooling == 'mean': - layer_mask = torch.nn.functional.adaptive_avg_pool2d( - mask[None,None,...], layer_shape)[0] - else: - layer_mask = torch.nn.functional.adaptive_max_pool2d( - mask[None,None,...], layer_shape)[0] - channels[layer][0] *= layer_mask - return channels - -def img2base64(imgarray, for_html=True, image_format='jpeg'): - ''' - Converts a numpy array to a jpeg base64 url - ''' - input_image_buff = BytesIO() - Image.fromarray(imgarray).save(input_image_buff, image_format, - quality=99, optimize=True, progressive=True) - res = base64.b64encode(input_image_buff.getvalue()).decode('ascii') - if for_html: - return 'data:image/' + image_format + ';base64,' + res - else: - return res - -def base642img(stringdata): - stringdata = re.sub('^(?:data:)?image/\w+;base64,', '', stringdata) - im = Image.open(BytesIO(base64.b64decode(stringdata))) - return numpy.array(im) - -def twitter_card(image_path, tweet_path, public_host): - return '''\ - - - - - - - - - - - - -
-

Painting with GANs from MIT-IBM Watson AI Lab

-

This demo lets you modify a selection of meaningful GAN units for a generated image by simply painting.

- -

Redirecting to -GANPaint -

-
- -'''.format( - image_path=image_path, - tweet_path=tweet_path, - public_host=public_host) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/__init__.py deleted file mode 100644 index 5b3dbc023aa4a6f7bfb8403b8204d71ca432f79c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -import importlib -import os - -from fairseq import registry -from fairseq.optim.lr_scheduler.fairseq_lr_scheduler import ( # noqa - FairseqLRScheduler, - LegacyFairseqLRScheduler, -) -from omegaconf import DictConfig - - -( - build_lr_scheduler_, - register_lr_scheduler, - LR_SCHEDULER_REGISTRY, - LR_SCHEDULER_DATACLASS_REGISTRY, -) = registry.setup_registry( - "--lr-scheduler", base_class=FairseqLRScheduler, default="fixed" -) - - -def build_lr_scheduler(cfg: DictConfig, optimizer): - return build_lr_scheduler_(cfg, optimizer) - - -# automatically import any Python files in the optim/lr_scheduler/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - file_name = file[: file.find(".py")] - importlib.import_module("fairseq.optim.lr_scheduler." + file_name) diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/monotonic_align/monotonic_align/mas.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/monotonic_align/monotonic_align/mas.py deleted file mode 100644 index 207ab3e858389ec06c902fd6f5bec6c5da2996af..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/monotonic_align/monotonic_align/mas.py +++ /dev/null @@ -1,57 +0,0 @@ -from typing import overload -import numpy as np -import torch -from monotonic_align.core import maximum_path_c - - -def mask_from_len(lens: torch.Tensor, max_len=None): - """ - Make a `mask` from lens. - - :param inputs: (B, T, D) - :param lens: (B) - - :return: - `mask`: (B, T) - """ - if max_len is None: - max_len = lens.max() - index = torch.arange(max_len).to(lens).view(1, -1) - return index < lens.unsqueeze(1) # (B, T) - - -def mask_from_lens( - similarity: torch.Tensor, - symbol_lens: torch.Tensor, - mel_lens: torch.Tensor, -): - """ - :param similarity: (B, S, T) - :param symbol_lens: (B,) - :param mel_lens: (B,) - """ - _, S, T = similarity.size() - mask_S = mask_from_len(symbol_lens, S) - mask_T = mask_from_len(mel_lens, T) - mask_ST = mask_S.unsqueeze(2) * mask_T.unsqueeze(1) - return mask_ST.to(similarity) - - -def maximum_path(value, mask=None): - """Cython optimised version. 
- value: [b, t_x, t_y] - mask: [b, t_x, t_y] - """ - if mask is None: - mask = torch.zeros_like(value) - - value = value * mask - device = value.device - dtype = value.dtype - value = value.data.cpu().numpy().astype(np.float32) - path = np.zeros_like(value).astype(np.int32) - mask = mask.data.cpu().numpy() - t_x_max = mask.sum(1)[:, 0].astype(np.int32) - t_y_max = mask.sum(2)[:, 0].astype(np.int32) - maximum_path_c(path, value, t_x_max, t_y_max) - return torch.from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/hifi_gan/inference.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/hifi_gan/inference.py deleted file mode 100644 index c70ee09b4110677b7cf9732d76a5e6ca93c8860c..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/hifi_gan/inference.py +++ /dev/null @@ -1,98 +0,0 @@ -from __future__ import absolute_import, division, print_function, unicode_literals - -import glob -import os -import argparse -import json -import torch -from scipy.io.wavfile import write -from env import AttrDict -from meldataset import mel_spectrogram, MAX_WAV_VALUE, load_wav -from models import Generator - -h = None -device = None - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def get_mel(x): - return mel_spectrogram( - x, h.n_fft, h.num_mels, h.sampling_rate, h.hop_size, h.win_size, h.fmin, h.fmax - ) - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + "*") - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return "" - return sorted(cp_list)[-1] - - -def inference(a): - generator = Generator(h).to(device) - - state_dict_g = load_checkpoint(a.checkpoint_file, device) - generator.load_state_dict(state_dict_g["generator"]) - - filelist = os.listdir(a.input_wavs_dir) - - os.makedirs(a.output_dir, exist_ok=True) - - generator.eval() - generator.remove_weight_norm() - with torch.no_grad(): - for i, filname in enumerate(filelist): - wav, sr = load_wav(os.path.join(a.input_wavs_dir, filname)) - wav = wav / MAX_WAV_VALUE - wav = torch.FloatTensor(wav).to(device) - x = get_mel(wav.unsqueeze(0)) - y_g_hat = generator(x) - audio = y_g_hat.squeeze() - audio = audio * MAX_WAV_VALUE - audio = audio.cpu().numpy().astype("int16") - - output_file = os.path.join( - a.output_dir, os.path.splitext(filname)[0] + "_generated.wav" - ) - write(output_file, h.sampling_rate, audio) - print(output_file) - - -def main(): - print("Initializing Inference Process..") - - parser = argparse.ArgumentParser() - parser.add_argument("--input_wavs_dir", default="test_files") - parser.add_argument("--output_dir", default="generated_files") - parser.add_argument("--checkpoint_file", required=True) - a = parser.parse_args() - - config_file = os.path.join(os.path.split(a.checkpoint_file)[0], "config.json") - with open(config_file) as f: - data = f.read() - - global h - json_config = json.loads(data) - h = AttrDict(json_config) - - torch.manual_seed(h.seed) - global device - if torch.cuda.is_available(): - torch.cuda.manual_seed(h.seed) - device = torch.device("cuda") - else: - device = torch.device("cpu") - - inference(a) - - -if __name__ == "__main__": - main() diff --git a/spaces/Hexamind/QnA/src/model/paragraph.py b/spaces/Hexamind/QnA/src/model/paragraph.py deleted file mode 100644 index 
b8e30cfbc9ca93979467a314d365a4cd18e2aca3..0000000000000000000000000000000000000000 --- a/spaces/Hexamind/QnA/src/model/paragraph.py +++ /dev/null @@ -1,37 +0,0 @@ -import string - -INFINITE = 10000 - - -class Paragraph: - - def __init__(self, xparagraph, doc_id: int, id_: int): - - self.xparagraph = xparagraph - self.id_ = int(str(2)+str(doc_id)+str(id_)) - name = self.xparagraph.style.name - self.level = int(name.split(' ')[-1]) if 'Heading' in name else INFINITE - self.is_structure = self.level < INFINITE - self.text = self.xparagraph.text - - @property - def structure(self): - structure = {str(self.id_): { - 'index': str(self.id_), - 'canMove': True, - 'isFolder': False, - 'children': [], - 'title': self.text, - 'canRename': True, - 'data': {}, - 'level': self.level, - }} - return structure - - @property - def blank(self): - """ - checks if the paragraph is blank: i.e. it brings some signal (it may otherwise be ignored) - """ - text = self.text.replace('\n', '') - return set(text).isdisjoint(string.ascii_letters) diff --git a/spaces/HighCWu/Style2Paints-4-Gradio/ai.py b/spaces/HighCWu/Style2Paints-4-Gradio/ai.py deleted file mode 100644 index 04f182b122553b0f27b333f1f16c90bc6d07dce1..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/Style2Paints-4-Gradio/ai.py +++ /dev/null @@ -1,203 +0,0 @@ -import tensorflow - -tensorflow.compat.v1.disable_v2_behavior() -tf = tensorflow.compat.v1 - -import keras -import numpy as np -from config import * -from keras.models import load_model -from smoother import * -import keras.backend as K -from models import * - - -def ToGray(x): - R = x[:, :, :, 0:1] - G = x[:, :, :, 1:2] - B = x[:, :, :, 2:3] - return 0.30 * R + 0.59 * G + 0.11 * B - - -def RGB2YUV(x): - R = x[:, :, :, 0:1] - G = x[:, :, :, 1:2] - B = x[:, :, :, 2:3] - Y = 0.299 * R + 0.587 * G + 0.114 * B - U = 0.492 * (B - Y) + 128 - V = 0.877 * (R - Y) + 128 - return tf.concat([Y, U, V], axis=3) - - -def YUV2RGB(x): - Y = x[:, :, :, 0:1] - U = x[:, :, :, 1:2] - V = x[:, :, :, 2:3] - R = Y + 1.140 * (V - 128) - G = Y - 0.394 * (U - 128) - 0.581 * (V - 128) - B = Y + 2.032 * (U - 128) - return tf.concat([R, G, B], axis=3) - - -def VGG2RGB(x): - return (x + [103.939, 116.779, 123.68])[:, :, :, ::-1] - - -def blur(x): - return Smoother({'data': tf.pad(x, [[0, 0], [9, 9], [9, 9], [0, 0]], 'SYMMETRIC')}, 7, 2).get_output()[:, 9: -9, 9: -9, :] - - -def norm_feature(x, core): - cs0 = tf.shape(core)[1] - cs1 = tf.shape(core)[2] - small = tf.image.resize_area(x, (cs0, cs1)) - avged = tf.nn.avg_pool(tf.pad(small, [[0, 0], [2, 2], [2, 2], [0, 0]], 'REFLECT'), [1, 5, 5, 1], [1, 1, 1, 1], 'VALID') - return tf.image.resize_bicubic(avged, tf.shape(x)[1:3]) - - -def upsample(x): - return K.resize_images(x, 2, 2, 'channels_last') - - -def downsample(x): - return tf.nn.avg_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') - - -def nts(x): - return (x + [103.939, 116.779, 123.68])[:, :, :, ::-1] / 255.0 - - -session = keras.backend.get_session() - -ip1 = tf.placeholder(dtype=tf.float32, shape=(None, None, None, 1)) -ip3 = tf.placeholder(dtype=tf.float32, shape=(None, None, None, 3)) -ip4 = tf.placeholder(dtype=tf.float32, shape=(None, None, None, 4)) - -print('1') - -vector = make_diff_net() -vector_op = 255.0 - tf.nn.sigmoid(vector(ip3 / 255.0)) * 255.0 - -print('4') - -reader = load_model('./nets/reader.net') -features = reader(ip3 / 255.0) - -print('5') - -head = load_model('./nets/head.net') -feed = [1 - ip1 / 255.0, (ip4[:, :, :, 0:3] / 127.5 - 1) * ip4[:, :, :, 3:4] / 255.0] -for _ 
in range(len(features)): - feed.append(keras.backend.mean(features[_], axis=[1, 2])) -nil0, nil1, head_temp = head(feed) - -print('6') - -neck = load_model('./nets/neck.net') -nil2, nil3, neck_temp = neck(feed) -feed[0] = tf.clip_by_value(1 - tf.image.resize_bilinear(ToGray(VGG2RGB(head_temp) / 255.0), tf.shape(ip1)[1:3]), 0.0, 1.0) -nil4, nil5, head_temp = neck(feed) -head_op = VGG2RGB(head_temp) -neck_op = VGG2RGB(neck_temp) - -print('7') - -inception = load_model('./nets/inception.net') -features_render = inception((ip3 + (downsample(ip1) - blur(downsample(ip1))) * 2.0) / 255.0) -precessed_feed = [(ip4[:, :, :, 0:3] / 127.5 - 1) * ip4[:, :, :, 3:4] / 255.0] + [ - norm_feature(item, features_render[-1]) for item in features_render] - -print('8') - -render_head = load_model('./nets/render_head.net') -render_neck = load_model('./nets/render_neck.net') -nil6, nil7, render_A = render_head([1 - ip1 / 255.0] + precessed_feed) -nil8, nil9, render_B = render_neck( - [1 - tf.image.resize_bilinear(ToGray(nts(render_A)), tf.shape(ip1)[1:3])] + precessed_feed) -render_op = nts(render_B) * 255.0 - -print('9') - -tail = load_model('./nets/tail.net') -pads = 7 -tail_op = tail(tf.pad(ip3 / 255.0, [[0, 0], [pads, pads], [pads, pads], [0, 0]], 'REFLECT'))[:, pads * 2:-pads * 2, pads * 2:-pads * 2, :][:, 1:-1, 1:-1, :] * 255.0 - -print('10') - - -vgg7 = load_model('./nets/vgg7.net') -pads = 7 -vgg7_op = vgg7(tf.pad(ip1 / 255.0, [[0, 0], [pads, pads], [pads, pads], [0, 0]], 'REFLECT'))[:, pads:-pads, pads:-pads, :] * 255.0 - -print('11') - - -mat = make_unet512() -mat_op = mat(ip3 / 255.0) * 255.0 - -print('11') - -norm = load_model('./nets/norm.net') -norm_op = norm(ip1 / 255.0) * 255.0 - -print('12') - -session.run(tf.global_variables_initializer()) - -print('begin load') - - -tail.load_weights('./nets/tail.net') -vgg7.load_weights('./nets/vgg7.net') -head.load_weights('./nets/head.net') -neck.load_weights('./nets/neck.net') -reader.load_weights('./nets/reader.net') -vector.load_weights('./nets/vector.net') -render_head.load_weights('./nets/render_head.net') -render_neck.load_weights('./nets/render_neck.net') -inception.load_weights('./nets/inception.net') -mat.load_weights('./nets/mat.net') -norm.load_weights('./nets/norm.net') - - -def go_head(sketch, global_hint, local_hint): - return session.run(head_op, feed_dict={ - ip1: sketch[None, :, :, None], ip3: global_hint[None, :, :, :], ip4: local_hint[None, :, :, :] - })[0].clip(0, 255).astype(np.uint8) - - -def go_render(sketch, segmentation, points): - return session.run(render_op, feed_dict={ - ip1: sketch[None, :, :, None], ip3: segmentation[None, :, :, :], ip4: points[None, :, :, :] - })[0].clip(0, 255).astype(np.uint8) - - -def go_tail(x): - return session.run(tail_op, feed_dict={ - ip3: x[None, :, :, :] - })[0].clip(0, 255).astype(np.uint8) - - -def go_vgg7(x): - return session.run(vgg7_op, feed_dict={ - ip1: x[None, :, :, None] - })[0, :, :, 0].clip(0, 255).astype(np.uint8) - - -def go_vector(x): - return session.run(vector_op, feed_dict={ - ip3: x[None, :, :, :] - })[0].clip(0, 255).astype(np.uint8) - - -def go_mat(x): - return session.run(mat_op, feed_dict={ - ip3: x[None, :, :, :] - })[0, :, :, 0].clip(0, 255).astype(np.uint8) - - -def go_norm(x): - return session.run(norm_op, feed_dict={ - ip1: x[None, :, :, None] - })[0].clip(0, 255).astype(np.uint8) - diff --git a/spaces/HuggingFaceH4/open_llm_leaderboard/scripts/create_request_file.py b/spaces/HuggingFaceH4/open_llm_leaderboard/scripts/create_request_file.py deleted file mode 100644 index 
bd06c80e11a5e5f1e541e17a111a3bf70f9c8674..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceH4/open_llm_leaderboard/scripts/create_request_file.py +++ /dev/null @@ -1,104 +0,0 @@ -from datetime import datetime, timezone -import json -import os -import re -import click -from huggingface_hub import HfApi, snapshot_download -from colorama import Fore -import pprint - -EVAL_REQUESTS_PATH = "eval-queue" -QUEUE_REPO = "open-llm-leaderboard/requests" - -precisions =("float16", "bfloat16", "8bit (LLM.int8)", "4bit (QLoRA / FP4)", "GPTQ") -model_types = ("pretrained", "fine-tuned", "RL-tuned", "instruction-tuned") -weight_types = ("Original", "Delta", "Adapter") - -def get_model_size(model_info, precision: str): - size_pattern = size_pattern = re.compile(r"(\d\.)?\d+(b|m)") - try: - model_size = round(model_info.safetensors["total"] / 1e9, 3) - except AttributeError: - try: - size_match = re.search(size_pattern, model_info.modelId.lower()) - model_size = size_match.group(0) - model_size = round(float(model_size[:-1]) if model_size[-1] == "b" else float(model_size[:-1]) / 1e3, 3) - except AttributeError: - return 0 # Unknown model sizes are indicated as 0, see NUMERIC_INTERVALS in app.py - - size_factor = 8 if (precision == "GPTQ" or "gptq" in model_info.modelId.lower()) else 1 - model_size = size_factor * model_size - return model_size - -def main(): - api = HfApi() - current_time = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ") - snapshot_download(repo_id=QUEUE_REPO, revision="main", local_dir=EVAL_REQUESTS_PATH, repo_type="dataset") - - model_name = click.prompt("Enter model name") - revision = click.prompt("Enter revision", default="main") - precision = click.prompt("Enter precision", default="float16", type=click.Choice(precisions)) - model_type = click.prompt("Enter model type", type=click.Choice(model_types)) - weight_type = click.prompt("Enter weight type", default="Original", type=click.Choice(weight_types)) - base_model = click.prompt("Enter base model", default="") - status = click.prompt("Enter status", default="FINISHED") - - try: - model_info = api.model_info(repo_id=model_name, revision=revision) - except Exception as e: - print(f"{Fore.RED}Could not find model info for {model_name} on the Hub\n{e}{Fore.RESET}") - return 1 - - model_size = get_model_size(model_info=model_info , precision=precision) - - try: - license = model_info.cardData["license"] - except Exception: - license = "?" - - eval_entry = { - "model": model_name, - "base_model": base_model, - "revision": revision, - "private": False, - "precision": precision, - "weight_type": weight_type, - "status": status, - "submitted_time": current_time, - "model_type": model_type, - "likes": model_info.likes, - "params": model_size, - "license": license, - } - - user_name = "" - model_path = model_name - if "/" in model_name: - user_name = model_name.split("/")[0] - model_path = model_name.split("/")[1] - - pprint.pprint(eval_entry) - - if click.confirm("Do you want to continue? 
This request file will be pushed to the hub"): - click.echo("continuing...") - - out_dir = f"{EVAL_REQUESTS_PATH}/{user_name}" - os.makedirs(out_dir, exist_ok=True) - out_path = f"{out_dir}/{model_path}_eval_request_{False}_{precision}_{weight_type}.json" - - with open(out_path, "w") as f: - f.write(json.dumps(eval_entry)) - - api.upload_file( - path_or_fileobj=out_path, - path_in_repo=out_path.split(f"{EVAL_REQUESTS_PATH}/")[1], - repo_id=QUEUE_REPO, - repo_type="dataset", - commit_message=f"Add {model_name} to eval queue", - ) - else: - click.echo("aborting...") - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/examples/scaling_nmt/README.md b/spaces/ICML2022/OFA/fairseq/examples/scaling_nmt/README.md deleted file mode 100644 index 0cc3360c3bbd58fe35a51591db8f081fc8576877..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/scaling_nmt/README.md +++ /dev/null @@ -1,114 +0,0 @@ -# Scaling Neural Machine Translation (Ott et al., 2018) - -This page includes instructions for reproducing results from the paper [Scaling Neural Machine Translation (Ott et al., 2018)](https://arxiv.org/abs/1806.00187). - -## Pre-trained models - -Model | Description | Dataset | Download ----|---|---|--- -`transformer.wmt14.en-fr` | Transformer
([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model:
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-fr.joined-dict.transformer.tar.bz2)
newstest2014:
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2) -`transformer.wmt16.en-de` | Transformer
([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model:
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt16.en-de.joined-dict.transformer.tar.bz2)
newstest2014:
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2) - -## Training a new model on WMT'16 En-De - -First download the [preprocessed WMT'16 En-De data provided by Google](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8). - -Then: - -##### 1. Extract the WMT'16 En-De data -```bash -TEXT=wmt16_en_de_bpe32k -mkdir -p $TEXT -tar -xzvf wmt16_en_de.tar.gz -C $TEXT -``` - -##### 2. Preprocess the dataset with a joined dictionary -```bash -fairseq-preprocess \ - --source-lang en --target-lang de \ - --trainpref $TEXT/train.tok.clean.bpe.32000 \ - --validpref $TEXT/newstest2013.tok.bpe.32000 \ - --testpref $TEXT/newstest2014.tok.bpe.32000 \ - --destdir data-bin/wmt16_en_de_bpe32k \ - --nwordssrc 32768 --nwordstgt 32768 \ - --joined-dictionary \ - --workers 20 -``` - -##### 3. Train a model -```bash -fairseq-train \ - data-bin/wmt16_en_de_bpe32k \ - --arch transformer_vaswani_wmt_en_de_big --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \ - --lr 0.0005 --lr-scheduler inverse_sqrt --warmup-updates 4000 --warmup-init-lr 1e-07 \ - --dropout 0.3 --weight-decay 0.0 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --max-tokens 3584 \ - --fp16 -``` - -Note that the `--fp16` flag requires you have CUDA 9.1 or greater and a Volta GPU or newer. - -***IMPORTANT:*** You will get better performance by training with big batches and -increasing the learning rate. If you want to train the above model with big batches -(assuming your machine has 8 GPUs): -- add `--update-freq 16` to simulate training on 8x16=128 GPUs -- increase the learning rate; 0.001 works well for big batches - -##### 4. Evaluate - -Now we can evaluate our trained model. - -Note that the original [Attention Is All You Need](https://arxiv.org/abs/1706.03762) -paper used a couple tricks to achieve better BLEU scores. We use these same tricks in -the Scaling NMT paper, so it's important to apply them when reproducing our results. - -First, use the [average_checkpoints.py](/scripts/average_checkpoints.py) script to -average the last few checkpoints. Averaging the last 5-10 checkpoints is usually -good, but you may need to adjust this depending on how long you've trained: -```bash -python scripts/average_checkpoints \ - --inputs /path/to/checkpoints \ - --num-epoch-checkpoints 10 \ - --output checkpoint.avg10.pt -``` - -Next, generate translations using a beam width of 4 and length penalty of 0.6: -```bash -fairseq-generate \ - data-bin/wmt16_en_de_bpe32k \ - --path checkpoint.avg10.pt \ - --beam 4 --lenpen 0.6 --remove-bpe > gen.out -``` - -Finally, we apply the ["compound splitting" script](/scripts/compound_split_bleu.sh) to -add spaces around dashes. For example "Café-Liebhaber" would become three tokens: -"Café - Liebhaber". This typically results in larger BLEU scores, but it is not -appropriate to compare these inflated scores to work which does not include this trick. -This trick was used in the [original AIAYN code](https://github.com/tensorflow/tensor2tensor/blob/fc9335c0203685cbbfe2b30c92db4352d8f60779/tensor2tensor/utils/get_ende_bleu.sh), -so we used it in the Scaling NMT paper as well. That said, it's strongly advised to -report [sacrebleu](https://github.com/mjpost/sacrebleu) scores instead. 
- -To compute "compound split" tokenized BLEU (not recommended!): -```bash -bash scripts/compound_split_bleu.sh gen.out -# BLEU4 = 29.29, 60.3/35.0/22.8/15.3 (BP=1.000, ratio=1.004, syslen=64763, reflen=64496) -``` - -To compute detokenized BLEU with sacrebleu (preferred): -```bash -bash scripts/sacrebleu.sh wmt14/full en de gen.out -# BLEU+case.mixed+lang.en-de+numrefs.1+smooth.exp+test.wmt14/full+tok.13a+version.1.4.3 = 28.6 59.3/34.3/22.1/14.9 (BP = 1.000 ratio = 1.016 hyp_len = 63666 ref_len = 62688) -``` - -## Citation - -```bibtex -@inproceedings{ott2018scaling, - title = {Scaling Neural Machine Translation}, - author = {Ott, Myle and Edunov, Sergey and Grangier, David and Auli, Michael}, - booktitle = {Proceedings of the Third Conference on Machine Translation (WMT)}, - year = 2018, -} -``` diff --git a/spaces/Illumotion/Koboldcpp/examples/server/httplib.h b/spaces/Illumotion/Koboldcpp/examples/server/httplib.h deleted file mode 100644 index 28746000cddceb5ecb8d8b56e7a25ab08ea7ee4f..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/server/httplib.h +++ /dev/null @@ -1,8794 +0,0 @@ -// -// httplib.h -// -// Copyright (c) 2023 Yuji Hirose. All rights reserved. -// MIT License -// - -#ifndef CPPHTTPLIB_HTTPLIB_H -#define CPPHTTPLIB_HTTPLIB_H - -#define CPPHTTPLIB_VERSION "0.12.2" - -/* - * Configuration - */ - -#ifndef CPPHTTPLIB_KEEPALIVE_TIMEOUT_SECOND -#define CPPHTTPLIB_KEEPALIVE_TIMEOUT_SECOND 5 -#endif - -#ifndef CPPHTTPLIB_KEEPALIVE_MAX_COUNT -#define CPPHTTPLIB_KEEPALIVE_MAX_COUNT 5 -#endif - -#ifndef CPPHTTPLIB_CONNECTION_TIMEOUT_SECOND -#define CPPHTTPLIB_CONNECTION_TIMEOUT_SECOND 300 -#endif - -#ifndef CPPHTTPLIB_CONNECTION_TIMEOUT_USECOND -#define CPPHTTPLIB_CONNECTION_TIMEOUT_USECOND 0 -#endif - -#ifndef CPPHTTPLIB_READ_TIMEOUT_SECOND -#define CPPHTTPLIB_READ_TIMEOUT_SECOND 5 -#endif - -#ifndef CPPHTTPLIB_READ_TIMEOUT_USECOND -#define CPPHTTPLIB_READ_TIMEOUT_USECOND 0 -#endif - -#ifndef CPPHTTPLIB_WRITE_TIMEOUT_SECOND -#define CPPHTTPLIB_WRITE_TIMEOUT_SECOND 5 -#endif - -#ifndef CPPHTTPLIB_WRITE_TIMEOUT_USECOND -#define CPPHTTPLIB_WRITE_TIMEOUT_USECOND 0 -#endif - -#ifndef CPPHTTPLIB_IDLE_INTERVAL_SECOND -#define CPPHTTPLIB_IDLE_INTERVAL_SECOND 0 -#endif - -#ifndef CPPHTTPLIB_IDLE_INTERVAL_USECOND -#ifdef _WIN32 -#define CPPHTTPLIB_IDLE_INTERVAL_USECOND 10000 -#else -#define CPPHTTPLIB_IDLE_INTERVAL_USECOND 0 -#endif -#endif - -#ifndef CPPHTTPLIB_REQUEST_URI_MAX_LENGTH -#define CPPHTTPLIB_REQUEST_URI_MAX_LENGTH 8192 -#endif - -#ifndef CPPHTTPLIB_HEADER_MAX_LENGTH -#define CPPHTTPLIB_HEADER_MAX_LENGTH 8192 -#endif - -#ifndef CPPHTTPLIB_REDIRECT_MAX_COUNT -#define CPPHTTPLIB_REDIRECT_MAX_COUNT 20 -#endif - -#ifndef CPPHTTPLIB_MULTIPART_FORM_DATA_FILE_MAX_COUNT -#define CPPHTTPLIB_MULTIPART_FORM_DATA_FILE_MAX_COUNT 1024 -#endif - -#ifndef CPPHTTPLIB_PAYLOAD_MAX_LENGTH -#define CPPHTTPLIB_PAYLOAD_MAX_LENGTH ((std::numeric_limits::max)()) -#endif - -#ifndef CPPHTTPLIB_FORM_URL_ENCODED_PAYLOAD_MAX_LENGTH -#define CPPHTTPLIB_FORM_URL_ENCODED_PAYLOAD_MAX_LENGTH 8192 -#endif - -#ifndef CPPHTTPLIB_TCP_NODELAY -#define CPPHTTPLIB_TCP_NODELAY false -#endif - -#ifndef CPPHTTPLIB_RECV_BUFSIZ -#define CPPHTTPLIB_RECV_BUFSIZ size_t(4096u) -#endif - -#ifndef CPPHTTPLIB_COMPRESSION_BUFSIZ -#define CPPHTTPLIB_COMPRESSION_BUFSIZ size_t(16384u) -#endif - -#ifndef CPPHTTPLIB_THREAD_POOL_COUNT -#define CPPHTTPLIB_THREAD_POOL_COUNT \ - ((std::max)(8u, std::thread::hardware_concurrency() > 0 \ - ? 
std::thread::hardware_concurrency() - 1 \ - : 0)) -#endif - -#ifndef CPPHTTPLIB_RECV_FLAGS -#define CPPHTTPLIB_RECV_FLAGS 0 -#endif - -#ifndef CPPHTTPLIB_SEND_FLAGS -#define CPPHTTPLIB_SEND_FLAGS 0 -#endif - -#ifndef CPPHTTPLIB_LISTEN_BACKLOG -#define CPPHTTPLIB_LISTEN_BACKLOG 5 -#endif - -/* - * Headers - */ - -#ifdef _WIN32 -#ifndef _CRT_SECURE_NO_WARNINGS -#define _CRT_SECURE_NO_WARNINGS -#endif //_CRT_SECURE_NO_WARNINGS - -#ifndef _CRT_NONSTDC_NO_DEPRECATE -#define _CRT_NONSTDC_NO_DEPRECATE -#endif //_CRT_NONSTDC_NO_DEPRECATE - -#if defined(_MSC_VER) -#if _MSC_VER < 1900 -#error Sorry, Visual Studio versions prior to 2015 are not supported -#endif - -#pragma comment(lib, "ws2_32.lib") - -#ifdef _WIN64 -using ssize_t = __int64; -#else -using ssize_t = long; -#endif -#endif // _MSC_VER - -#ifndef S_ISREG -#define S_ISREG(m) (((m)&S_IFREG) == S_IFREG) -#endif // S_ISREG - -#ifndef S_ISDIR -#define S_ISDIR(m) (((m)&S_IFDIR) == S_IFDIR) -#endif // S_ISDIR - -#ifndef NOMINMAX -#define NOMINMAX -#endif // NOMINMAX - -#include -#include -#include - -#ifndef WSA_FLAG_NO_HANDLE_INHERIT -#define WSA_FLAG_NO_HANDLE_INHERIT 0x80 -#endif - -#ifndef strcasecmp -#define strcasecmp _stricmp -#endif // strcasecmp - -using socket_t = SOCKET; -#ifdef CPPHTTPLIB_USE_POLL -#define poll(fds, nfds, timeout) WSAPoll(fds, nfds, timeout) -#endif - -#else // not _WIN32 - -#include -#ifndef _AIX -#include -#endif -#include -#include -#include -#ifdef __linux__ -#include -#endif -#include -#ifdef CPPHTTPLIB_USE_POLL -#include -#endif -#include -#include -#include -#include -#include -#include - -using socket_t = int; -#ifndef INVALID_SOCKET -#define INVALID_SOCKET (-1) -#endif -#endif //_WIN32 - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT -#ifdef _WIN32 -#include - -// these are defined in wincrypt.h and it breaks compilation if BoringSSL is -// used -#undef X509_NAME -#undef X509_CERT_PAIR -#undef X509_EXTENSIONS -#undef PKCS7_SIGNER_INFO - -#ifdef _MSC_VER -#pragma comment(lib, "crypt32.lib") -#pragma comment(lib, "cryptui.lib") -#endif -#elif defined(CPPHTTPLIB_USE_CERTS_FROM_MACOSX_KEYCHAIN) && defined(__APPLE__) -#include -#if TARGET_OS_OSX -#include -#include -#endif // TARGET_OS_OSX -#endif // _WIN32 - -#include -#include -#include -#include - -#if defined(_WIN32) && defined(OPENSSL_USE_APPLINK) -#include -#endif - -#include -#include - -#if OPENSSL_VERSION_NUMBER < 0x1010100fL -#error Sorry, OpenSSL versions prior to 1.1.1 are not supported -#elif OPENSSL_VERSION_NUMBER < 0x30000000L -#define SSL_get1_peer_certificate SSL_get_peer_certificate -#endif - -#endif - -#ifdef CPPHTTPLIB_ZLIB_SUPPORT -#include -#endif - -#ifdef CPPHTTPLIB_BROTLI_SUPPORT -#include -#include -#endif - -/* - * Declaration - */ -namespace httplib { - -namespace detail { - -/* - * Backport std::make_unique from C++14. 
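- *
- * Illustrative use (Response is a type declared later in this header; the
- * buffer size is arbitrary):
- *
- *   auto res = detail::make_unique<Response>();   // scalar overload
- *   auto buf = detail::make_unique<char[]>(4096); // array overload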
- * - * NOTE: This code came up with the following stackoverflow post: - * https://stackoverflow.com/questions/10149840/c-arrays-and-make-unique - * - */ - -template -typename std::enable_if::value, std::unique_ptr>::type -make_unique(Args &&...args) { - return std::unique_ptr(new T(std::forward(args)...)); -} - -template -typename std::enable_if::value, std::unique_ptr>::type -make_unique(std::size_t n) { - typedef typename std::remove_extent::type RT; - return std::unique_ptr(new RT[n]); -} - -struct ci { - bool operator()(const std::string &s1, const std::string &s2) const { - return std::lexicographical_compare(s1.begin(), s1.end(), s2.begin(), - s2.end(), - [](unsigned char c1, unsigned char c2) { - return ::tolower(c1) < ::tolower(c2); - }); - } -}; - -// This is based on -// "http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4189". - -struct scope_exit { - explicit scope_exit(std::function &&f) - : exit_function(std::move(f)), execute_on_destruction{true} {} - - scope_exit(scope_exit &&rhs) - : exit_function(std::move(rhs.exit_function)), - execute_on_destruction{rhs.execute_on_destruction} { - rhs.release(); - } - - ~scope_exit() { - if (execute_on_destruction) { this->exit_function(); } - } - - void release() { this->execute_on_destruction = false; } - -private: - scope_exit(const scope_exit &) = delete; - void operator=(const scope_exit &) = delete; - scope_exit &operator=(scope_exit &&) = delete; - - std::function exit_function; - bool execute_on_destruction; -}; - -} // namespace detail - -using Headers = std::multimap; - -using Params = std::multimap; -using Match = std::smatch; - -using Progress = std::function; - -struct Response; -using ResponseHandler = std::function; - -struct MultipartFormData { - std::string name; - std::string content; - std::string filename; - std::string content_type; -}; -using MultipartFormDataItems = std::vector; -using MultipartFormDataMap = std::multimap; - -class DataSink { -public: - DataSink() : os(&sb_), sb_(*this) {} - - DataSink(const DataSink &) = delete; - DataSink &operator=(const DataSink &) = delete; - DataSink(DataSink &&) = delete; - DataSink &operator=(DataSink &&) = delete; - - std::function write; - std::function done; - std::function done_with_trailer; - std::ostream os; - -private: - class data_sink_streambuf : public std::streambuf { - public: - explicit data_sink_streambuf(DataSink &sink) : sink_(sink) {} - - protected: - std::streamsize xsputn(const char *s, std::streamsize n) { - sink_.write(s, static_cast(n)); - return n; - } - - private: - DataSink &sink_; - }; - - data_sink_streambuf sb_; -}; - -using ContentProvider = - std::function; - -using ContentProviderWithoutLength = - std::function; - -using ContentProviderResourceReleaser = std::function; - -struct MultipartFormDataProvider { - std::string name; - ContentProviderWithoutLength provider; - std::string filename; - std::string content_type; -}; -using MultipartFormDataProviderItems = std::vector; - -using ContentReceiverWithProgress = - std::function; - -using ContentReceiver = - std::function; - -using MultipartContentHeader = - std::function; - -class ContentReader { -public: - using Reader = std::function; - using MultipartReader = std::function; - - ContentReader(Reader reader, MultipartReader multipart_reader) - : reader_(std::move(reader)), - multipart_reader_(std::move(multipart_reader)) {} - - bool operator()(MultipartContentHeader header, - ContentReceiver receiver) const { - return multipart_reader_(std::move(header), std::move(receiver)); - } - - 
bool operator()(ContentReceiver receiver) const { - return reader_(std::move(receiver)); - } - - Reader reader_; - MultipartReader multipart_reader_; -}; - -using Range = std::pair; -using Ranges = std::vector; - -struct Request { - std::string method; - std::string path; - Headers headers; - std::string body; - - std::string remote_addr; - int remote_port = -1; - std::string local_addr; - int local_port = -1; - - // for server - std::string version; - std::string target; - Params params; - MultipartFormDataMap files; - Ranges ranges; - Match matches; - - // for client - ResponseHandler response_handler; - ContentReceiverWithProgress content_receiver; - Progress progress; -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - const SSL *ssl = nullptr; -#endif - - bool has_header(const std::string &key) const; - std::string get_header_value(const std::string &key, size_t id = 0) const; - template - T get_header_value(const std::string &key, size_t id = 0) const; - size_t get_header_value_count(const std::string &key) const; - void set_header(const std::string &key, const std::string &val); - - bool has_param(const std::string &key) const; - std::string get_param_value(const std::string &key, size_t id = 0) const; - size_t get_param_value_count(const std::string &key) const; - - bool is_multipart_form_data() const; - - bool has_file(const std::string &key) const; - MultipartFormData get_file_value(const std::string &key) const; - std::vector get_file_values(const std::string &key) const; - - // private members... - size_t redirect_count_ = CPPHTTPLIB_REDIRECT_MAX_COUNT; - size_t content_length_ = 0; - ContentProvider content_provider_; - bool is_chunked_content_provider_ = false; - size_t authorization_count_ = 0; -}; - -struct Response { - std::string version; - int status = -1; - std::string reason; - Headers headers; - std::string body; - std::string location; // Redirect location - - bool has_header(const std::string &key) const; - std::string get_header_value(const std::string &key, size_t id = 0) const; - template - T get_header_value(const std::string &key, size_t id = 0) const; - size_t get_header_value_count(const std::string &key) const; - void set_header(const std::string &key, const std::string &val); - - void set_redirect(const std::string &url, int status = 302); - void set_content(const char *s, size_t n, const std::string &content_type); - void set_content(const std::string &s, const std::string &content_type); - - void set_content_provider( - size_t length, const std::string &content_type, ContentProvider provider, - ContentProviderResourceReleaser resource_releaser = nullptr); - - void set_content_provider( - const std::string &content_type, ContentProviderWithoutLength provider, - ContentProviderResourceReleaser resource_releaser = nullptr); - - void set_chunked_content_provider( - const std::string &content_type, ContentProviderWithoutLength provider, - ContentProviderResourceReleaser resource_releaser = nullptr); - - Response() = default; - Response(const Response &) = default; - Response &operator=(const Response &) = default; - Response(Response &&) = default; - Response &operator=(Response &&) = default; - ~Response() { - if (content_provider_resource_releaser_) { - content_provider_resource_releaser_(content_provider_success_); - } - } - - // private members... 
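-  // These appear to be populated by the set_content_provider overloads above;
-  // the releaser, if set, runs in the destructor with content_provider_success_.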
- size_t content_length_ = 0; - ContentProvider content_provider_; - ContentProviderResourceReleaser content_provider_resource_releaser_; - bool is_chunked_content_provider_ = false; - bool content_provider_success_ = false; -}; - -class Stream { -public: - virtual ~Stream() = default; - - virtual bool is_readable() const = 0; - virtual bool is_writable() const = 0; - - virtual ssize_t read(char *ptr, size_t size) = 0; - virtual ssize_t write(const char *ptr, size_t size) = 0; - virtual void get_remote_ip_and_port(std::string &ip, int &port) const = 0; - virtual void get_local_ip_and_port(std::string &ip, int &port) const = 0; - virtual socket_t socket() const = 0; - - template - ssize_t write_format(const char *fmt, const Args &...args); - ssize_t write(const char *ptr); - ssize_t write(const std::string &s); -}; - -class TaskQueue { -public: - TaskQueue() = default; - virtual ~TaskQueue() = default; - - virtual void enqueue(std::function fn) = 0; - virtual void shutdown() = 0; - - virtual void on_idle() {} -}; - -class ThreadPool : public TaskQueue { -public: - explicit ThreadPool(size_t n) : shutdown_(false) { - while (n) { - threads_.emplace_back(worker(*this)); - n--; - } - } - - ThreadPool(const ThreadPool &) = delete; - ~ThreadPool() override = default; - - void enqueue(std::function fn) override { - { - std::unique_lock lock(mutex_); - jobs_.push_back(std::move(fn)); - } - - cond_.notify_one(); - } - - void shutdown() override { - // Stop all worker threads... - { - std::unique_lock lock(mutex_); - shutdown_ = true; - } - - cond_.notify_all(); - - // Join... - for (auto &t : threads_) { - t.join(); - } - } - -private: - struct worker { - explicit worker(ThreadPool &pool) : pool_(pool) {} - - void operator()() { - for (;;) { - std::function fn; - { - std::unique_lock lock(pool_.mutex_); - - pool_.cond_.wait( - lock, [&] { return !pool_.jobs_.empty() || pool_.shutdown_; }); - - if (pool_.shutdown_ && pool_.jobs_.empty()) { break; } - - fn = std::move(pool_.jobs_.front()); - pool_.jobs_.pop_front(); - } - - assert(true == static_cast(fn)); - fn(); - } - } - - ThreadPool &pool_; - }; - friend struct worker; - - std::vector threads_; - std::list> jobs_; - - bool shutdown_; - - std::condition_variable cond_; - std::mutex mutex_; -}; - -using Logger = std::function; - -using SocketOptions = std::function; - -void default_socket_options(socket_t sock); - -class Server { -public: - using Handler = std::function; - - using ExceptionHandler = - std::function; - - enum class HandlerResponse { - Handled, - Unhandled, - }; - using HandlerWithResponse = - std::function; - - using HandlerWithContentReader = std::function; - - using Expect100ContinueHandler = - std::function; - - Server(); - - virtual ~Server(); - - virtual bool is_valid() const; - - Server &Get(const std::string &pattern, Handler handler); - Server &Post(const std::string &pattern, Handler handler); - Server &Post(const std::string &pattern, HandlerWithContentReader handler); - Server &Put(const std::string &pattern, Handler handler); - Server &Put(const std::string &pattern, HandlerWithContentReader handler); - Server &Patch(const std::string &pattern, Handler handler); - Server &Patch(const std::string &pattern, HandlerWithContentReader handler); - Server &Delete(const std::string &pattern, Handler handler); - Server &Delete(const std::string &pattern, HandlerWithContentReader handler); - Server &Options(const std::string &pattern, Handler handler); - - bool set_base_dir(const std::string &dir, - const std::string &mount_point = 
std::string()); - bool set_mount_point(const std::string &mount_point, const std::string &dir, - Headers headers = Headers()); - bool remove_mount_point(const std::string &mount_point); - Server &set_file_extension_and_mimetype_mapping(const std::string &ext, - const std::string &mime); - Server &set_file_request_handler(Handler handler); - - Server &set_error_handler(HandlerWithResponse handler); - Server &set_error_handler(Handler handler); - Server &set_exception_handler(ExceptionHandler handler); - Server &set_pre_routing_handler(HandlerWithResponse handler); - Server &set_post_routing_handler(Handler handler); - - Server &set_expect_100_continue_handler(Expect100ContinueHandler handler); - Server &set_logger(Logger logger); - - Server &set_address_family(int family); - Server &set_tcp_nodelay(bool on); - Server &set_socket_options(SocketOptions socket_options); - - Server &set_default_headers(Headers headers); - - Server &set_keep_alive_max_count(size_t count); - Server &set_keep_alive_timeout(time_t sec); - - Server &set_read_timeout(time_t sec, time_t usec = 0); - template - Server &set_read_timeout(const std::chrono::duration &duration); - - Server &set_write_timeout(time_t sec, time_t usec = 0); - template - Server &set_write_timeout(const std::chrono::duration &duration); - - Server &set_idle_interval(time_t sec, time_t usec = 0); - template - Server &set_idle_interval(const std::chrono::duration &duration); - - Server &set_payload_max_length(size_t length); - - bool bind_to_port(const std::string &host, int port, int socket_flags = 0); - int bind_to_any_port(const std::string &host, int socket_flags = 0); - bool listen_after_bind(); - - bool listen(const std::string &host, int port, int socket_flags = 0); - - bool is_running() const; - void wait_until_ready() const; - void stop(); - - std::function new_task_queue; - -protected: - bool process_request(Stream &strm, bool close_connection, - bool &connection_closed, - const std::function &setup_request); - - std::atomic svr_sock_{INVALID_SOCKET}; - size_t keep_alive_max_count_ = CPPHTTPLIB_KEEPALIVE_MAX_COUNT; - time_t keep_alive_timeout_sec_ = CPPHTTPLIB_KEEPALIVE_TIMEOUT_SECOND; - time_t read_timeout_sec_ = CPPHTTPLIB_READ_TIMEOUT_SECOND; - time_t read_timeout_usec_ = CPPHTTPLIB_READ_TIMEOUT_USECOND; - time_t write_timeout_sec_ = CPPHTTPLIB_WRITE_TIMEOUT_SECOND; - time_t write_timeout_usec_ = CPPHTTPLIB_WRITE_TIMEOUT_USECOND; - time_t idle_interval_sec_ = CPPHTTPLIB_IDLE_INTERVAL_SECOND; - time_t idle_interval_usec_ = CPPHTTPLIB_IDLE_INTERVAL_USECOND; - size_t payload_max_length_ = CPPHTTPLIB_PAYLOAD_MAX_LENGTH; - -private: - using Handlers = std::vector>; - using HandlersForContentReader = - std::vector>; - - socket_t create_server_socket(const std::string &host, int port, - int socket_flags, - SocketOptions socket_options) const; - int bind_internal(const std::string &host, int port, int socket_flags); - bool listen_internal(); - - bool routing(Request &req, Response &res, Stream &strm); - bool handle_file_request(const Request &req, Response &res, - bool head = false); - bool dispatch_request(Request &req, Response &res, const Handlers &handlers); - bool - dispatch_request_for_content_reader(Request &req, Response &res, - ContentReader content_reader, - const HandlersForContentReader &handlers); - - bool parse_request_line(const char *s, Request &req); - void apply_ranges(const Request &req, Response &res, - std::string &content_type, std::string &boundary); - bool write_response(Stream &strm, bool close_connection, const 
Request &req,
-                      Response &res);
-  bool write_response_with_content(Stream &strm, bool close_connection,
-                                   const Request &req, Response &res);
-  bool write_response_core(Stream &strm, bool close_connection,
-                           const Request &req, Response &res,
-                           bool need_apply_ranges);
-  bool write_content_with_provider(Stream &strm, const Request &req,
-                                   Response &res, const std::string &boundary,
-                                   const std::string &content_type);
-  bool read_content(Stream &strm, Request &req, Response &res);
-  bool
-  read_content_with_content_receiver(Stream &strm, Request &req, Response &res,
-                                     ContentReceiver receiver,
-                                     MultipartContentHeader multipart_header,
-                                     ContentReceiver multipart_receiver);
-  bool read_content_core(Stream &strm, Request &req, Response &res,
-                         ContentReceiver receiver,
-                         MultipartContentHeader multipart_header,
-                         ContentReceiver multipart_receiver);
-
-  virtual bool process_and_close_socket(socket_t sock);
-
-  struct MountPointEntry {
-    std::string mount_point;
-    std::string base_dir;
-    Headers headers;
-  };
-  std::vector<MountPointEntry> base_dirs_;
-
-  std::atomic<bool> is_running_{false};
-  std::atomic<bool> done_{false};
-  std::map<std::string, std::string> file_extension_and_mimetype_map_;
-  Handler file_request_handler_;
-  Handlers get_handlers_;
-  Handlers post_handlers_;
-  HandlersForContentReader post_handlers_for_content_reader_;
-  Handlers put_handlers_;
-  HandlersForContentReader put_handlers_for_content_reader_;
-  Handlers patch_handlers_;
-  HandlersForContentReader patch_handlers_for_content_reader_;
-  Handlers delete_handlers_;
-  HandlersForContentReader delete_handlers_for_content_reader_;
-  Handlers options_handlers_;
-  HandlerWithResponse error_handler_;
-  ExceptionHandler exception_handler_;
-  HandlerWithResponse pre_routing_handler_;
-  Handler post_routing_handler_;
-  Logger logger_;
-  Expect100ContinueHandler expect_100_continue_handler_;
-
-  int address_family_ = AF_UNSPEC;
-  bool tcp_nodelay_ = CPPHTTPLIB_TCP_NODELAY;
-  SocketOptions socket_options_ = default_socket_options;
-
-  Headers default_headers_;
-};
-
-enum class Error {
-  Success = 0,
-  Unknown,
-  Connection,
-  BindIPAddress,
-  Read,
-  Write,
-  ExceedRedirectCount,
-  Canceled,
-  SSLConnection,
-  SSLLoadingCerts,
-  SSLServerVerification,
-  UnsupportedMultipartBoundaryChars,
-  Compression,
-  ConnectionTimeout,
-
-  // For internal use only
-  SSLPeerCouldBeClosed_,
-};
-
-std::string to_string(const Error error);
-
-std::ostream &operator<<(std::ostream &os, const Error &obj);
-
-class Result {
-public:
-  Result(std::unique_ptr<Response> &&res, Error err,
-         Headers &&request_headers = Headers{})
-      : res_(std::move(res)), err_(err),
-        request_headers_(std::move(request_headers)) {}
-  // Response
-  operator bool() const { return res_ != nullptr; }
-  bool operator==(std::nullptr_t) const { return res_ == nullptr; }
-  bool operator!=(std::nullptr_t) const { return res_ != nullptr; }
-  const Response &value() const { return *res_; }
-  Response &value() { return *res_; }
-  const Response &operator*() const { return *res_; }
-  Response &operator*() { return *res_; }
-  const Response *operator->() const { return res_.get(); }
-  Response *operator->() { return res_.get(); }
-
-  // Error
-  Error error() const { return err_; }
-
-  // Request Headers
-  bool has_request_header(const std::string &key) const;
-  std::string get_request_header_value(const std::string &key,
-                                       size_t id = 0) const;
-  template <typename T>
-  T get_request_header_value(const std::string &key, size_t id = 0) const;
-  size_t get_request_header_value_count(const std::string &key) const;
-
-private:
-  std::unique_ptr<Response> res_;
-  Error
err_; - Headers request_headers_; -}; - -class ClientImpl { -public: - explicit ClientImpl(const std::string &host); - - explicit ClientImpl(const std::string &host, int port); - - explicit ClientImpl(const std::string &host, int port, - const std::string &client_cert_path, - const std::string &client_key_path); - - virtual ~ClientImpl(); - - virtual bool is_valid() const; - - Result Get(const std::string &path); - Result Get(const std::string &path, const Headers &headers); - Result Get(const std::string &path, Progress progress); - Result Get(const std::string &path, const Headers &headers, - Progress progress); - Result Get(const std::string &path, ContentReceiver content_receiver); - Result Get(const std::string &path, const Headers &headers, - ContentReceiver content_receiver); - Result Get(const std::string &path, ContentReceiver content_receiver, - Progress progress); - Result Get(const std::string &path, const Headers &headers, - ContentReceiver content_receiver, Progress progress); - Result Get(const std::string &path, ResponseHandler response_handler, - ContentReceiver content_receiver); - Result Get(const std::string &path, const Headers &headers, - ResponseHandler response_handler, - ContentReceiver content_receiver); - Result Get(const std::string &path, ResponseHandler response_handler, - ContentReceiver content_receiver, Progress progress); - Result Get(const std::string &path, const Headers &headers, - ResponseHandler response_handler, ContentReceiver content_receiver, - Progress progress); - - Result Get(const std::string &path, const Params ¶ms, - const Headers &headers, Progress progress = nullptr); - Result Get(const std::string &path, const Params ¶ms, - const Headers &headers, ContentReceiver content_receiver, - Progress progress = nullptr); - Result Get(const std::string &path, const Params ¶ms, - const Headers &headers, ResponseHandler response_handler, - ContentReceiver content_receiver, Progress progress = nullptr); - - Result Head(const std::string &path); - Result Head(const std::string &path, const Headers &headers); - - Result Post(const std::string &path); - Result Post(const std::string &path, const Headers &headers); - Result Post(const std::string &path, const char *body, size_t content_length, - const std::string &content_type); - Result Post(const std::string &path, const Headers &headers, const char *body, - size_t content_length, const std::string &content_type); - Result Post(const std::string &path, const std::string &body, - const std::string &content_type); - Result Post(const std::string &path, const Headers &headers, - const std::string &body, const std::string &content_type); - Result Post(const std::string &path, size_t content_length, - ContentProvider content_provider, - const std::string &content_type); - Result Post(const std::string &path, - ContentProviderWithoutLength content_provider, - const std::string &content_type); - Result Post(const std::string &path, const Headers &headers, - size_t content_length, ContentProvider content_provider, - const std::string &content_type); - Result Post(const std::string &path, const Headers &headers, - ContentProviderWithoutLength content_provider, - const std::string &content_type); - Result Post(const std::string &path, const Params ¶ms); - Result Post(const std::string &path, const Headers &headers, - const Params ¶ms); - Result Post(const std::string &path, const MultipartFormDataItems &items); - Result Post(const std::string &path, const Headers &headers, - const MultipartFormDataItems &items); 
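-  // Illustrative use of the Post() overloads declared above; the host and
-  // path here are hypothetical, not part of the original header:
-  //
-  //   httplib::Client cli("http://example.com");
-  //   auto res = cli.Post("/add", httplib::Params{{"a", "1"}, {"b", "2"}});
-  //   if (res && res->status == 200) { /* res->body holds the reply */ }
-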
- Result Post(const std::string &path, const Headers &headers, - const MultipartFormDataItems &items, const std::string &boundary); - Result Post(const std::string &path, const Headers &headers, - const MultipartFormDataItems &items, - const MultipartFormDataProviderItems &provider_items); - - Result Put(const std::string &path); - Result Put(const std::string &path, const char *body, size_t content_length, - const std::string &content_type); - Result Put(const std::string &path, const Headers &headers, const char *body, - size_t content_length, const std::string &content_type); - Result Put(const std::string &path, const std::string &body, - const std::string &content_type); - Result Put(const std::string &path, const Headers &headers, - const std::string &body, const std::string &content_type); - Result Put(const std::string &path, size_t content_length, - ContentProvider content_provider, const std::string &content_type); - Result Put(const std::string &path, - ContentProviderWithoutLength content_provider, - const std::string &content_type); - Result Put(const std::string &path, const Headers &headers, - size_t content_length, ContentProvider content_provider, - const std::string &content_type); - Result Put(const std::string &path, const Headers &headers, - ContentProviderWithoutLength content_provider, - const std::string &content_type); - Result Put(const std::string &path, const Params ¶ms); - Result Put(const std::string &path, const Headers &headers, - const Params ¶ms); - Result Put(const std::string &path, const MultipartFormDataItems &items); - Result Put(const std::string &path, const Headers &headers, - const MultipartFormDataItems &items); - Result Put(const std::string &path, const Headers &headers, - const MultipartFormDataItems &items, const std::string &boundary); - Result Put(const std::string &path, const Headers &headers, - const MultipartFormDataItems &items, - const MultipartFormDataProviderItems &provider_items); - - Result Patch(const std::string &path); - Result Patch(const std::string &path, const char *body, size_t content_length, - const std::string &content_type); - Result Patch(const std::string &path, const Headers &headers, - const char *body, size_t content_length, - const std::string &content_type); - Result Patch(const std::string &path, const std::string &body, - const std::string &content_type); - Result Patch(const std::string &path, const Headers &headers, - const std::string &body, const std::string &content_type); - Result Patch(const std::string &path, size_t content_length, - ContentProvider content_provider, - const std::string &content_type); - Result Patch(const std::string &path, - ContentProviderWithoutLength content_provider, - const std::string &content_type); - Result Patch(const std::string &path, const Headers &headers, - size_t content_length, ContentProvider content_provider, - const std::string &content_type); - Result Patch(const std::string &path, const Headers &headers, - ContentProviderWithoutLength content_provider, - const std::string &content_type); - - Result Delete(const std::string &path); - Result Delete(const std::string &path, const Headers &headers); - Result Delete(const std::string &path, const char *body, - size_t content_length, const std::string &content_type); - Result Delete(const std::string &path, const Headers &headers, - const char *body, size_t content_length, - const std::string &content_type); - Result Delete(const std::string &path, const std::string &body, - const std::string &content_type); - Result 
Delete(const std::string &path, const Headers &headers, - const std::string &body, const std::string &content_type); - - Result Options(const std::string &path); - Result Options(const std::string &path, const Headers &headers); - - bool send(Request &req, Response &res, Error &error); - Result send(const Request &req); - - size_t is_socket_open() const; - - socket_t socket() const; - - void stop(); - - void set_hostname_addr_map(std::map addr_map); - - void set_default_headers(Headers headers); - - void set_address_family(int family); - void set_tcp_nodelay(bool on); - void set_socket_options(SocketOptions socket_options); - - void set_connection_timeout(time_t sec, time_t usec = 0); - template - void - set_connection_timeout(const std::chrono::duration &duration); - - void set_read_timeout(time_t sec, time_t usec = 0); - template - void set_read_timeout(const std::chrono::duration &duration); - - void set_write_timeout(time_t sec, time_t usec = 0); - template - void set_write_timeout(const std::chrono::duration &duration); - - void set_basic_auth(const std::string &username, const std::string &password); - void set_bearer_token_auth(const std::string &token); -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - void set_digest_auth(const std::string &username, - const std::string &password); -#endif - - void set_keep_alive(bool on); - void set_follow_location(bool on); - - void set_url_encode(bool on); - - void set_compress(bool on); - - void set_decompress(bool on); - - void set_interface(const std::string &intf); - - void set_proxy(const std::string &host, int port); - void set_proxy_basic_auth(const std::string &username, - const std::string &password); - void set_proxy_bearer_token_auth(const std::string &token); -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - void set_proxy_digest_auth(const std::string &username, - const std::string &password); -#endif - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - void set_ca_cert_path(const std::string &ca_cert_file_path, - const std::string &ca_cert_dir_path = std::string()); - void set_ca_cert_store(X509_STORE *ca_cert_store); -#endif - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - void enable_server_certificate_verification(bool enabled); -#endif - - void set_logger(Logger logger); - -protected: - struct Socket { - socket_t sock = INVALID_SOCKET; -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - SSL *ssl = nullptr; -#endif - - bool is_open() const { return sock != INVALID_SOCKET; } - }; - - virtual bool create_and_connect_socket(Socket &socket, Error &error); - - // All of: - // shutdown_ssl - // shutdown_socket - // close_socket - // should ONLY be called when socket_mutex_ is locked. - // Also, shutdown_ssl and close_socket should also NOT be called concurrently - // with a DIFFERENT thread sending requests using that socket. 
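-  //
-  // A sketch of the call pattern this implies (illustrative only, assuming
-  // the members declared below):
-  //
-  //   std::lock_guard<std::mutex> guard(socket_mutex_);
-  //   shutdown_ssl(socket_, true);
-  //   shutdown_socket(socket_);
-  //   close_socket(socket_);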
- virtual void shutdown_ssl(Socket &socket, bool shutdown_gracefully); - void shutdown_socket(Socket &socket); - void close_socket(Socket &socket); - - bool process_request(Stream &strm, Request &req, Response &res, - bool close_connection, Error &error); - - bool write_content_with_provider(Stream &strm, const Request &req, - Error &error); - - void copy_settings(const ClientImpl &rhs); - - // Socket endpoint information - const std::string host_; - const int port_; - const std::string host_and_port_; - - // Current open socket - Socket socket_; - mutable std::mutex socket_mutex_; - std::recursive_mutex request_mutex_; - - // These are all protected under socket_mutex - size_t socket_requests_in_flight_ = 0; - std::thread::id socket_requests_are_from_thread_ = std::thread::id(); - bool socket_should_be_closed_when_request_is_done_ = false; - - // Hostname-IP map - std::map addr_map_; - - // Default headers - Headers default_headers_; - - // Settings - std::string client_cert_path_; - std::string client_key_path_; - - time_t connection_timeout_sec_ = CPPHTTPLIB_CONNECTION_TIMEOUT_SECOND; - time_t connection_timeout_usec_ = CPPHTTPLIB_CONNECTION_TIMEOUT_USECOND; - time_t read_timeout_sec_ = CPPHTTPLIB_READ_TIMEOUT_SECOND; - time_t read_timeout_usec_ = CPPHTTPLIB_READ_TIMEOUT_USECOND; - time_t write_timeout_sec_ = CPPHTTPLIB_WRITE_TIMEOUT_SECOND; - time_t write_timeout_usec_ = CPPHTTPLIB_WRITE_TIMEOUT_USECOND; - - std::string basic_auth_username_; - std::string basic_auth_password_; - std::string bearer_token_auth_token_; -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - std::string digest_auth_username_; - std::string digest_auth_password_; -#endif - - bool keep_alive_ = false; - bool follow_location_ = false; - - bool url_encode_ = true; - - int address_family_ = AF_UNSPEC; - bool tcp_nodelay_ = CPPHTTPLIB_TCP_NODELAY; - SocketOptions socket_options_ = nullptr; - - bool compress_ = false; - bool decompress_ = true; - - std::string interface_; - - std::string proxy_host_; - int proxy_port_ = -1; - - std::string proxy_basic_auth_username_; - std::string proxy_basic_auth_password_; - std::string proxy_bearer_token_auth_token_; -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - std::string proxy_digest_auth_username_; - std::string proxy_digest_auth_password_; -#endif - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - std::string ca_cert_file_path_; - std::string ca_cert_dir_path_; - - X509_STORE *ca_cert_store_ = nullptr; -#endif - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - bool server_certificate_verification_ = true; -#endif - - Logger logger_; - -private: - bool send_(Request &req, Response &res, Error &error); - Result send_(Request &&req); - - socket_t create_client_socket(Error &error) const; - bool read_response_line(Stream &strm, const Request &req, Response &res); - bool write_request(Stream &strm, Request &req, bool close_connection, - Error &error); - bool redirect(Request &req, Response &res, Error &error); - bool handle_request(Stream &strm, Request &req, Response &res, - bool close_connection, Error &error); - std::unique_ptr send_with_content_provider( - Request &req, const char *body, size_t content_length, - ContentProvider content_provider, - ContentProviderWithoutLength content_provider_without_length, - const std::string &content_type, Error &error); - Result send_with_content_provider( - const std::string &method, const std::string &path, - const Headers &headers, const char *body, size_t content_length, - ContentProvider content_provider, - ContentProviderWithoutLength content_provider_without_length, - const 
std::string &content_type); - ContentProviderWithoutLength get_multipart_content_provider( - const std::string &boundary, const MultipartFormDataItems &items, - const MultipartFormDataProviderItems &provider_items); - - std::string adjust_host_string(const std::string &host) const; - - virtual bool process_socket(const Socket &socket, - std::function callback); - virtual bool is_ssl() const; -}; - -class Client { -public: - // Universal interface - explicit Client(const std::string &scheme_host_port); - - explicit Client(const std::string &scheme_host_port, - const std::string &client_cert_path, - const std::string &client_key_path); - - // HTTP only interface - explicit Client(const std::string &host, int port); - - explicit Client(const std::string &host, int port, - const std::string &client_cert_path, - const std::string &client_key_path); - - Client(Client &&) = default; - - ~Client(); - - bool is_valid() const; - - Result Get(const std::string &path); - Result Get(const std::string &path, const Headers &headers); - Result Get(const std::string &path, Progress progress); - Result Get(const std::string &path, const Headers &headers, - Progress progress); - Result Get(const std::string &path, ContentReceiver content_receiver); - Result Get(const std::string &path, const Headers &headers, - ContentReceiver content_receiver); - Result Get(const std::string &path, ContentReceiver content_receiver, - Progress progress); - Result Get(const std::string &path, const Headers &headers, - ContentReceiver content_receiver, Progress progress); - Result Get(const std::string &path, ResponseHandler response_handler, - ContentReceiver content_receiver); - Result Get(const std::string &path, const Headers &headers, - ResponseHandler response_handler, - ContentReceiver content_receiver); - Result Get(const std::string &path, const Headers &headers, - ResponseHandler response_handler, ContentReceiver content_receiver, - Progress progress); - Result Get(const std::string &path, ResponseHandler response_handler, - ContentReceiver content_receiver, Progress progress); - - Result Get(const std::string &path, const Params ¶ms, - const Headers &headers, Progress progress = nullptr); - Result Get(const std::string &path, const Params ¶ms, - const Headers &headers, ContentReceiver content_receiver, - Progress progress = nullptr); - Result Get(const std::string &path, const Params ¶ms, - const Headers &headers, ResponseHandler response_handler, - ContentReceiver content_receiver, Progress progress = nullptr); - - Result Head(const std::string &path); - Result Head(const std::string &path, const Headers &headers); - - Result Post(const std::string &path); - Result Post(const std::string &path, const Headers &headers); - Result Post(const std::string &path, const char *body, size_t content_length, - const std::string &content_type); - Result Post(const std::string &path, const Headers &headers, const char *body, - size_t content_length, const std::string &content_type); - Result Post(const std::string &path, const std::string &body, - const std::string &content_type); - Result Post(const std::string &path, const Headers &headers, - const std::string &body, const std::string &content_type); - Result Post(const std::string &path, size_t content_length, - ContentProvider content_provider, - const std::string &content_type); - Result Post(const std::string &path, - ContentProviderWithoutLength content_provider, - const std::string &content_type); - Result Post(const std::string &path, const Headers &headers, - size_t 
content_length, ContentProvider content_provider, - const std::string &content_type); - Result Post(const std::string &path, const Headers &headers, - ContentProviderWithoutLength content_provider, - const std::string &content_type); - Result Post(const std::string &path, const Params ¶ms); - Result Post(const std::string &path, const Headers &headers, - const Params ¶ms); - Result Post(const std::string &path, const MultipartFormDataItems &items); - Result Post(const std::string &path, const Headers &headers, - const MultipartFormDataItems &items); - Result Post(const std::string &path, const Headers &headers, - const MultipartFormDataItems &items, const std::string &boundary); - Result Post(const std::string &path, const Headers &headers, - const MultipartFormDataItems &items, - const MultipartFormDataProviderItems &provider_items); - - Result Put(const std::string &path); - Result Put(const std::string &path, const char *body, size_t content_length, - const std::string &content_type); - Result Put(const std::string &path, const Headers &headers, const char *body, - size_t content_length, const std::string &content_type); - Result Put(const std::string &path, const std::string &body, - const std::string &content_type); - Result Put(const std::string &path, const Headers &headers, - const std::string &body, const std::string &content_type); - Result Put(const std::string &path, size_t content_length, - ContentProvider content_provider, const std::string &content_type); - Result Put(const std::string &path, - ContentProviderWithoutLength content_provider, - const std::string &content_type); - Result Put(const std::string &path, const Headers &headers, - size_t content_length, ContentProvider content_provider, - const std::string &content_type); - Result Put(const std::string &path, const Headers &headers, - ContentProviderWithoutLength content_provider, - const std::string &content_type); - Result Put(const std::string &path, const Params ¶ms); - Result Put(const std::string &path, const Headers &headers, - const Params ¶ms); - Result Put(const std::string &path, const MultipartFormDataItems &items); - Result Put(const std::string &path, const Headers &headers, - const MultipartFormDataItems &items); - Result Put(const std::string &path, const Headers &headers, - const MultipartFormDataItems &items, const std::string &boundary); - Result Put(const std::string &path, const Headers &headers, - const MultipartFormDataItems &items, - const MultipartFormDataProviderItems &provider_items); - - Result Patch(const std::string &path); - Result Patch(const std::string &path, const char *body, size_t content_length, - const std::string &content_type); - Result Patch(const std::string &path, const Headers &headers, - const char *body, size_t content_length, - const std::string &content_type); - Result Patch(const std::string &path, const std::string &body, - const std::string &content_type); - Result Patch(const std::string &path, const Headers &headers, - const std::string &body, const std::string &content_type); - Result Patch(const std::string &path, size_t content_length, - ContentProvider content_provider, - const std::string &content_type); - Result Patch(const std::string &path, - ContentProviderWithoutLength content_provider, - const std::string &content_type); - Result Patch(const std::string &path, const Headers &headers, - size_t content_length, ContentProvider content_provider, - const std::string &content_type); - Result Patch(const std::string &path, const Headers &headers, - 
ContentProviderWithoutLength content_provider, - const std::string &content_type); - - Result Delete(const std::string &path); - Result Delete(const std::string &path, const Headers &headers); - Result Delete(const std::string &path, const char *body, - size_t content_length, const std::string &content_type); - Result Delete(const std::string &path, const Headers &headers, - const char *body, size_t content_length, - const std::string &content_type); - Result Delete(const std::string &path, const std::string &body, - const std::string &content_type); - Result Delete(const std::string &path, const Headers &headers, - const std::string &body, const std::string &content_type); - - Result Options(const std::string &path); - Result Options(const std::string &path, const Headers &headers); - - bool send(Request &req, Response &res, Error &error); - Result send(const Request &req); - - size_t is_socket_open() const; - - socket_t socket() const; - - void stop(); - - void set_hostname_addr_map(std::map addr_map); - - void set_default_headers(Headers headers); - - void set_address_family(int family); - void set_tcp_nodelay(bool on); - void set_socket_options(SocketOptions socket_options); - - void set_connection_timeout(time_t sec, time_t usec = 0); - template - void - set_connection_timeout(const std::chrono::duration &duration); - - void set_read_timeout(time_t sec, time_t usec = 0); - template - void set_read_timeout(const std::chrono::duration &duration); - - void set_write_timeout(time_t sec, time_t usec = 0); - template - void set_write_timeout(const std::chrono::duration &duration); - - void set_basic_auth(const std::string &username, const std::string &password); - void set_bearer_token_auth(const std::string &token); -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - void set_digest_auth(const std::string &username, - const std::string &password); -#endif - - void set_keep_alive(bool on); - void set_follow_location(bool on); - - void set_url_encode(bool on); - - void set_compress(bool on); - - void set_decompress(bool on); - - void set_interface(const std::string &intf); - - void set_proxy(const std::string &host, int port); - void set_proxy_basic_auth(const std::string &username, - const std::string &password); - void set_proxy_bearer_token_auth(const std::string &token); -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - void set_proxy_digest_auth(const std::string &username, - const std::string &password); -#endif - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - void enable_server_certificate_verification(bool enabled); -#endif - - void set_logger(Logger logger); - - // SSL -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - void set_ca_cert_path(const std::string &ca_cert_file_path, - const std::string &ca_cert_dir_path = std::string()); - - void set_ca_cert_store(X509_STORE *ca_cert_store); - - long get_openssl_verify_result() const; - - SSL_CTX *ssl_context() const; -#endif - -private: - std::unique_ptr cli_; - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - bool is_ssl_ = false; -#endif -}; - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT -class SSLServer : public Server { -public: - SSLServer(const char *cert_path, const char *private_key_path, - const char *client_ca_cert_file_path = nullptr, - const char *client_ca_cert_dir_path = nullptr, - const char *private_key_password = nullptr); - - SSLServer(X509 *cert, EVP_PKEY *private_key, - X509_STORE *client_ca_cert_store = nullptr); - - SSLServer( - const std::function &setup_ssl_ctx_callback); - - ~SSLServer() override; - - bool is_valid() const override; - - SSL_CTX *ssl_context() const; - -private: - bool 
process_and_close_socket(socket_t sock) override; - - SSL_CTX *ctx_; - std::mutex ctx_mutex_; -}; - -class SSLClient : public ClientImpl { -public: - explicit SSLClient(const std::string &host); - - explicit SSLClient(const std::string &host, int port); - - explicit SSLClient(const std::string &host, int port, - const std::string &client_cert_path, - const std::string &client_key_path); - - explicit SSLClient(const std::string &host, int port, X509 *client_cert, - EVP_PKEY *client_key); - - ~SSLClient() override; - - bool is_valid() const override; - - void set_ca_cert_store(X509_STORE *ca_cert_store); - - long get_openssl_verify_result() const; - - SSL_CTX *ssl_context() const; - -private: - bool create_and_connect_socket(Socket &socket, Error &error) override; - void shutdown_ssl(Socket &socket, bool shutdown_gracefully) override; - void shutdown_ssl_impl(Socket &socket, bool shutdown_socket); - - bool process_socket(const Socket &socket, - std::function callback) override; - bool is_ssl() const override; - - bool connect_with_proxy(Socket &sock, Response &res, bool &success, - Error &error); - bool initialize_ssl(Socket &socket, Error &error); - - bool load_certs(); - - bool verify_host(X509 *server_cert) const; - bool verify_host_with_subject_alt_name(X509 *server_cert) const; - bool verify_host_with_common_name(X509 *server_cert) const; - bool check_host_name(const char *pattern, size_t pattern_len) const; - - SSL_CTX *ctx_; - std::mutex ctx_mutex_; - std::once_flag initialize_cert_; - - std::vector host_components_; - - long verify_result_ = 0; - - friend class ClientImpl; -}; -#endif - -/* - * Implementation of template methods. - */ - -namespace detail { - -template -inline void duration_to_sec_and_usec(const T &duration, U callback) { - auto sec = std::chrono::duration_cast(duration).count(); - auto usec = std::chrono::duration_cast( - duration - std::chrono::seconds(sec)) - .count(); - callback(static_cast(sec), static_cast(usec)); -} - -template -inline T get_header_value(const Headers & /*headers*/, - const std::string & /*key*/, size_t /*id*/ = 0, - uint64_t /*def*/ = 0) {} - -template <> -inline uint64_t get_header_value(const Headers &headers, - const std::string &key, size_t id, - uint64_t def) { - auto rng = headers.equal_range(key); - auto it = rng.first; - std::advance(it, static_cast(id)); - if (it != rng.second) { - return std::strtoull(it->second.data(), nullptr, 10); - } - return def; -} - -} // namespace detail - -template -inline T Request::get_header_value(const std::string &key, size_t id) const { - return detail::get_header_value(headers, key, id, 0); -} - -template -inline T Response::get_header_value(const std::string &key, size_t id) const { - return detail::get_header_value(headers, key, id, 0); -} - -template -inline ssize_t Stream::write_format(const char *fmt, const Args &...args) { - const auto bufsiz = 2048; - std::array buf{}; - - auto sn = snprintf(buf.data(), buf.size() - 1, fmt, args...); - if (sn <= 0) { return sn; } - - auto n = static_cast(sn); - - if (n >= buf.size() - 1) { - std::vector glowable_buf(buf.size()); - - while (n >= glowable_buf.size() - 1) { - glowable_buf.resize(glowable_buf.size() * 2); - n = static_cast( - snprintf(&glowable_buf[0], glowable_buf.size() - 1, fmt, args...)); - } - return write(&glowable_buf[0], n); - } else { - return write(buf.data(), n); - } -} - -inline void default_socket_options(socket_t sock) { - int yes = 1; -#ifdef _WIN32 - setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, reinterpret_cast(&yes), - sizeof(yes)); 
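-  // On Windows, SO_REUSEADDR alone would let another process bind the same
-  // port, so SO_EXCLUSIVEADDRUSE is also set to keep the listening port
-  // exclusive.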
-  setsockopt(sock, SOL_SOCKET, SO_EXCLUSIVEADDRUSE,
-             reinterpret_cast<char *>(&yes), sizeof(yes));
-#else
-#ifdef SO_REUSEPORT
-  setsockopt(sock, SOL_SOCKET, SO_REUSEPORT, reinterpret_cast<void *>(&yes),
-             sizeof(yes));
-#else
-  setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, reinterpret_cast<void *>(&yes),
-             sizeof(yes));
-#endif
-#endif
-}
-
-template <class Rep, class Period>
-inline Server &
-Server::set_read_timeout(const std::chrono::duration<Rep, Period> &duration) {
-  detail::duration_to_sec_and_usec(
-      duration, [&](time_t sec, time_t usec) { set_read_timeout(sec, usec); });
-  return *this;
-}
-
-template <class Rep, class Period>
-inline Server &
-Server::set_write_timeout(const std::chrono::duration<Rep, Period> &duration) {
-  detail::duration_to_sec_and_usec(
-      duration, [&](time_t sec, time_t usec) { set_write_timeout(sec, usec); });
-  return *this;
-}
-
-template <class Rep, class Period>
-inline Server &
-Server::set_idle_interval(const std::chrono::duration<Rep, Period> &duration) {
-  detail::duration_to_sec_and_usec(
-      duration, [&](time_t sec, time_t usec) { set_idle_interval(sec, usec); });
-  return *this;
-}
-
-inline std::string to_string(const Error error) {
-  switch (error) {
-  case Error::Success: return "Success (no error)";
-  case Error::Connection: return "Could not establish connection";
-  case Error::BindIPAddress: return "Failed to bind IP address";
-  case Error::Read: return "Failed to read connection";
-  case Error::Write: return "Failed to write connection";
-  case Error::ExceedRedirectCount: return "Maximum redirect count exceeded";
-  case Error::Canceled: return "Connection handling canceled";
-  case Error::SSLConnection: return "SSL connection failed";
-  case Error::SSLLoadingCerts: return "SSL certificate loading failed";
-  case Error::SSLServerVerification: return "SSL server verification failed";
-  case Error::UnsupportedMultipartBoundaryChars:
-    return "Unsupported HTTP multipart boundary characters";
-  case Error::Compression: return "Compression failed";
-  case Error::ConnectionTimeout: return "Connection timed out";
-  case Error::Unknown: return "Unknown";
-  default: break;
-  }
-
-  return "Invalid";
-}
-
-inline std::ostream &operator<<(std::ostream &os, const Error &obj) {
-  os << to_string(obj);
-  os << " (" << static_cast<std::underlying_type<Error>::type>(obj) << ')';
-  return os;
-}
-
-template <typename T>
-inline T Result::get_request_header_value(const std::string &key,
-                                          size_t id) const {
-  return detail::get_header_value<T>(request_headers_, key, id, 0);
-}
-
-template <class Rep, class Period>
-inline void ClientImpl::set_connection_timeout(
-    const std::chrono::duration<Rep, Period> &duration) {
-  detail::duration_to_sec_and_usec(duration, [&](time_t sec, time_t usec) {
-    set_connection_timeout(sec, usec);
-  });
-}
-
-template <class Rep, class Period>
-inline void ClientImpl::set_read_timeout(
-    const std::chrono::duration<Rep, Period> &duration) {
-  detail::duration_to_sec_and_usec(
-      duration, [&](time_t sec, time_t usec) { set_read_timeout(sec, usec); });
-}
-
-template <class Rep, class Period>
-inline void ClientImpl::set_write_timeout(
-    const std::chrono::duration<Rep, Period> &duration) {
-  detail::duration_to_sec_and_usec(
-      duration, [&](time_t sec, time_t usec) { set_write_timeout(sec, usec); });
-}
-
-template <class Rep, class Period>
-inline void Client::set_connection_timeout(
-    const std::chrono::duration<Rep, Period> &duration) {
-  cli_->set_connection_timeout(duration);
-}
-
-template <class Rep, class Period>
-inline void
-Client::set_read_timeout(const std::chrono::duration<Rep, Period> &duration) {
-  cli_->set_read_timeout(duration);
-}
-
-template <class Rep, class Period>
-inline void
-Client::set_write_timeout(const std::chrono::duration<Rep, Period> &duration) {
-  cli_->set_write_timeout(duration);
-}
-
-/*
- * Forward declarations and types that will be part of the .h file if split into
- * .h + .cc.
- */
-
-std::string hosted_at(const std::string &hostname);
-
-void hosted_at(const std::string &hostname, std::vector<std::string> &addrs);
-
-std::string append_query_params(const std::string &path, const Params &params);
-
-std::pair<std::string, std::string> make_range_header(Ranges ranges);
-
-std::pair<std::string, std::string>
-make_basic_authentication_header(const std::string &username,
-                                 const std::string &password,
-                                 bool is_proxy = false);
-
-namespace detail {
-
-std::string encode_query_param(const std::string &value);
-
-std::string decode_url(const std::string &s, bool convert_plus_to_space);
-
-void read_file(const std::string &path, std::string &out);
-
-std::string trim_copy(const std::string &s);
-
-void split(const char *b, const char *e, char d,
-           std::function<void(const char *, const char *)> fn);
-
-bool process_client_socket(socket_t sock, time_t read_timeout_sec,
-                           time_t read_timeout_usec, time_t write_timeout_sec,
-                           time_t write_timeout_usec,
-                           std::function<bool(Stream &)> callback);
-
-socket_t create_client_socket(
-    const std::string &host, const std::string &ip, int port,
-    int address_family, bool tcp_nodelay, SocketOptions socket_options,
-    time_t connection_timeout_sec, time_t connection_timeout_usec,
-    time_t read_timeout_sec, time_t read_timeout_usec, time_t write_timeout_sec,
-    time_t write_timeout_usec, const std::string &intf, Error &error);
-
-const char *get_header_value(const Headers &headers, const std::string &key,
-                             size_t id = 0, const char *def = nullptr);
-
-std::string params_to_query_str(const Params &params);
-
-void parse_query_text(const std::string &s, Params &params);
-
-bool parse_multipart_boundary(const std::string &content_type,
-                              std::string &boundary);
-
-bool parse_range_header(const std::string &s, Ranges &ranges);
-
-int close_socket(socket_t sock);
-
-ssize_t send_socket(socket_t sock, const void *ptr, size_t size, int flags);
-
-ssize_t read_socket(socket_t sock, void *ptr, size_t size, int flags);
-
-enum class EncodingType { None = 0, Gzip, Brotli };
-
-EncodingType encoding_type(const Request &req, const Response &res);
-
-class BufferStream : public Stream {
-public:
-  BufferStream() = default;
-  ~BufferStream() override = default;
-
-  bool is_readable() const override;
-  bool is_writable() const override;
-  ssize_t read(char *ptr, size_t size) override;
-  ssize_t write(const char *ptr, size_t size) override;
-  void get_remote_ip_and_port(std::string &ip, int &port) const override;
-  void get_local_ip_and_port(std::string &ip, int &port) const override;
-  socket_t socket() const override;
-
-  const std::string &get_buffer() const;
-
-private:
-  std::string buffer;
-  size_t position = 0;
-};
-
-class compressor {
-public:
-  virtual ~compressor() = default;
-
-  typedef std::function<bool(const char *data, size_t data_len)> Callback;
-  virtual bool compress(const char *data, size_t data_length, bool last,
-                        Callback callback) = 0;
-};
-
-class decompressor {
-public:
-  virtual ~decompressor() = default;
-
-  virtual bool is_valid() const = 0;
-
-  typedef std::function<bool(const char *data, size_t data_len)> Callback;
-  virtual bool decompress(const char *data, size_t data_length,
-                          Callback callback) = 0;
-};
-
-class nocompressor : public compressor {
-public:
-  virtual ~nocompressor() = default;
-
-  bool compress(const char *data, size_t data_length, bool /*last*/,
-                Callback callback) override;
-};
-
-#ifdef CPPHTTPLIB_ZLIB_SUPPORT
-class gzip_compressor : public compressor {
-public:
-  gzip_compressor();
-  ~gzip_compressor();
-
-  bool compress(const char *data, size_t data_length, bool last,
-                Callback callback) override;
-
-private:
-  bool is_valid_ = false;
-  z_stream strm_;
-};
-
-class gzip_decompressor : public
decompressor { -public: - gzip_decompressor(); - ~gzip_decompressor(); - - bool is_valid() const override; - - bool decompress(const char *data, size_t data_length, - Callback callback) override; - -private: - bool is_valid_ = false; - z_stream strm_; -}; -#endif - -#ifdef CPPHTTPLIB_BROTLI_SUPPORT -class brotli_compressor : public compressor { -public: - brotli_compressor(); - ~brotli_compressor(); - - bool compress(const char *data, size_t data_length, bool last, - Callback callback) override; - -private: - BrotliEncoderState *state_ = nullptr; -}; - -class brotli_decompressor : public decompressor { -public: - brotli_decompressor(); - ~brotli_decompressor(); - - bool is_valid() const override; - - bool decompress(const char *data, size_t data_length, - Callback callback) override; - -private: - BrotliDecoderResult decoder_r; - BrotliDecoderState *decoder_s = nullptr; -}; -#endif - -// NOTE: until the read size reaches `fixed_buffer_size`, use `fixed_buffer` -// to store data. The call can set memory on stack for performance. -class stream_line_reader { -public: - stream_line_reader(Stream &strm, char *fixed_buffer, - size_t fixed_buffer_size); - const char *ptr() const; - size_t size() const; - bool end_with_crlf() const; - bool getline(); - -private: - void append(char c); - - Stream &strm_; - char *fixed_buffer_; - const size_t fixed_buffer_size_; - size_t fixed_buffer_used_size_ = 0; - std::string glowable_buffer_; -}; - -} // namespace detail - -// ---------------------------------------------------------------------------- - -/* - * Implementation that will be part of the .cc file if split into .h + .cc. - */ - -namespace detail { - -inline bool is_hex(char c, int &v) { - if (0x20 <= c && isdigit(c)) { - v = c - '0'; - return true; - } else if ('A' <= c && c <= 'F') { - v = c - 'A' + 10; - return true; - } else if ('a' <= c && c <= 'f') { - v = c - 'a' + 10; - return true; - } - return false; -} - -inline bool from_hex_to_i(const std::string &s, size_t i, size_t cnt, - int &val) { - if (i >= s.size()) { return false; } - - val = 0; - for (; cnt; i++, cnt--) { - if (!s[i]) { return false; } - int v = 0; - if (is_hex(s[i], v)) { - val = val * 16 + v; - } else { - return false; - } - } - return true; -} - -inline std::string from_i_to_hex(size_t n) { - const char *charset = "0123456789abcdef"; - std::string ret; - do { - ret = charset[n & 15] + ret; - n >>= 4; - } while (n > 0); - return ret; -} - -inline size_t to_utf8(int code, char *buff) { - if (code < 0x0080) { - buff[0] = (code & 0x7F); - return 1; - } else if (code < 0x0800) { - buff[0] = static_cast(0xC0 | ((code >> 6) & 0x1F)); - buff[1] = static_cast(0x80 | (code & 0x3F)); - return 2; - } else if (code < 0xD800) { - buff[0] = static_cast(0xE0 | ((code >> 12) & 0xF)); - buff[1] = static_cast(0x80 | ((code >> 6) & 0x3F)); - buff[2] = static_cast(0x80 | (code & 0x3F)); - return 3; - } else if (code < 0xE000) { // D800 - DFFF is invalid... 
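-    // (U+D800..U+DFFF are UTF-16 surrogate halves, not valid Unicode scalar
-    // values, so no bytes are emitted for them)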
- return 0; - } else if (code < 0x10000) { - buff[0] = static_cast(0xE0 | ((code >> 12) & 0xF)); - buff[1] = static_cast(0x80 | ((code >> 6) & 0x3F)); - buff[2] = static_cast(0x80 | (code & 0x3F)); - return 3; - } else if (code < 0x110000) { - buff[0] = static_cast(0xF0 | ((code >> 18) & 0x7)); - buff[1] = static_cast(0x80 | ((code >> 12) & 0x3F)); - buff[2] = static_cast(0x80 | ((code >> 6) & 0x3F)); - buff[3] = static_cast(0x80 | (code & 0x3F)); - return 4; - } - - // NOTREACHED - return 0; -} - -// NOTE: This code came up with the following stackoverflow post: -// https://stackoverflow.com/questions/180947/base64-decode-snippet-in-c -inline std::string base64_encode(const std::string &in) { - static const auto lookup = - "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; - - std::string out; - out.reserve(in.size()); - - int val = 0; - int valb = -6; - - for (auto c : in) { - val = (val << 8) + static_cast(c); - valb += 8; - while (valb >= 0) { - out.push_back(lookup[(val >> valb) & 0x3F]); - valb -= 6; - } - } - - if (valb > -6) { out.push_back(lookup[((val << 8) >> (valb + 8)) & 0x3F]); } - - while (out.size() % 4) { - out.push_back('='); - } - - return out; -} - -inline bool is_file(const std::string &path) { -#ifdef _WIN32 - return _access_s(path.c_str(), 0) == 0; -#else - struct stat st; - return stat(path.c_str(), &st) >= 0 && S_ISREG(st.st_mode); -#endif -} - -inline bool is_dir(const std::string &path) { - struct stat st; - return stat(path.c_str(), &st) >= 0 && S_ISDIR(st.st_mode); -} - -inline bool is_valid_path(const std::string &path) { - size_t level = 0; - size_t i = 0; - - // Skip slash - while (i < path.size() && path[i] == '/') { - i++; - } - - while (i < path.size()) { - // Read component - auto beg = i; - while (i < path.size() && path[i] != '/') { - i++; - } - - auto len = i - beg; - assert(len > 0); - - if (!path.compare(beg, len, ".")) { - ; - } else if (!path.compare(beg, len, "..")) { - if (level == 0) { return false; } - level--; - } else { - level++; - } - - // Skip slash - while (i < path.size() && path[i] == '/') { - i++; - } - } - - return true; -} - -inline std::string encode_query_param(const std::string &value) { - std::ostringstream escaped; - escaped.fill('0'); - escaped << std::hex; - - for (auto c : value) { - if (std::isalnum(static_cast(c)) || c == '-' || c == '_' || - c == '.' || c == '!' || c == '~' || c == '*' || c == '\'' || c == '(' || - c == ')') { - escaped << c; - } else { - escaped << std::uppercase; - escaped << '%' << std::setw(2) - << static_cast(static_cast(c)); - escaped << std::nouppercase; - } - } - - return escaped.str(); -} - -inline std::string encode_url(const std::string &s) { - std::string result; - result.reserve(s.size()); - - for (size_t i = 0; s[i]; i++) { - switch (s[i]) { - case ' ': result += "%20"; break; - case '+': result += "%2B"; break; - case '\r': result += "%0D"; break; - case '\n': result += "%0A"; break; - case '\'': result += "%27"; break; - case ',': result += "%2C"; break; - // case ':': result += "%3A"; break; // ok? probably... 
- case ';': result += "%3B"; break; - default: - auto c = static_cast(s[i]); - if (c >= 0x80) { - result += '%'; - char hex[4]; - auto len = snprintf(hex, sizeof(hex) - 1, "%02X", c); - assert(len == 2); - result.append(hex, static_cast(len)); - } else { - result += s[i]; - } - break; - } - } - - return result; -} - -inline std::string decode_url(const std::string &s, - bool convert_plus_to_space) { - std::string result; - - for (size_t i = 0; i < s.size(); i++) { - if (s[i] == '%' && i + 1 < s.size()) { - if (s[i + 1] == 'u') { - int val = 0; - if (from_hex_to_i(s, i + 2, 4, val)) { - // 4 digits Unicode codes - char buff[4]; - size_t len = to_utf8(val, buff); - if (len > 0) { result.append(buff, len); } - i += 5; // 'u0000' - } else { - result += s[i]; - } - } else { - int val = 0; - if (from_hex_to_i(s, i + 1, 2, val)) { - // 2 digits hex codes - result += static_cast(val); - i += 2; // '00' - } else { - result += s[i]; - } - } - } else if (convert_plus_to_space && s[i] == '+') { - result += ' '; - } else { - result += s[i]; - } - } - - return result; -} - -inline void read_file(const std::string &path, std::string &out) { - std::ifstream fs(path, std::ios_base::binary); - fs.seekg(0, std::ios_base::end); - auto size = fs.tellg(); - fs.seekg(0); - out.resize(static_cast(size)); - fs.read(&out[0], static_cast(size)); -} - -inline std::string file_extension(const std::string &path) { - std::smatch m; - static auto re = std::regex("\\.([a-zA-Z0-9]+)$"); - if (std::regex_search(path, m, re)) { return m[1].str(); } - return std::string(); -} - -inline bool is_space_or_tab(char c) { return c == ' ' || c == '\t'; } - -inline std::pair trim(const char *b, const char *e, size_t left, - size_t right) { - while (b + left < e && is_space_or_tab(b[left])) { - left++; - } - while (right > 0 && is_space_or_tab(b[right - 1])) { - right--; - } - return std::make_pair(left, right); -} - -inline std::string trim_copy(const std::string &s) { - auto r = trim(s.data(), s.data() + s.size(), 0, s.size()); - return s.substr(r.first, r.second - r.first); -} - -inline void split(const char *b, const char *e, char d, - std::function fn) { - size_t i = 0; - size_t beg = 0; - - while (e ? 
(b + i < e) : (b[i] != '\0')) {
-    if (b[i] == d) {
-      auto r = trim(b, e, beg, i);
-      if (r.first < r.second) { fn(&b[r.first], &b[r.second]); }
-      beg = i + 1;
-    }
-    i++;
-  }
-
-  if (i) {
-    auto r = trim(b, e, beg, i);
-    if (r.first < r.second) { fn(&b[r.first], &b[r.second]); }
-  }
-}
-
-inline stream_line_reader::stream_line_reader(Stream &strm, char *fixed_buffer,
-                                              size_t fixed_buffer_size)
-    : strm_(strm), fixed_buffer_(fixed_buffer),
-      fixed_buffer_size_(fixed_buffer_size) {}
-
-inline const char *stream_line_reader::ptr() const {
-  if (glowable_buffer_.empty()) {
-    return fixed_buffer_;
-  } else {
-    return glowable_buffer_.data();
-  }
-}
-
-inline size_t stream_line_reader::size() const {
-  if (glowable_buffer_.empty()) {
-    return fixed_buffer_used_size_;
-  } else {
-    return glowable_buffer_.size();
-  }
-}
-
-inline bool stream_line_reader::end_with_crlf() const {
-  auto end = ptr() + size();
-  return size() >= 2 && end[-2] == '\r' && end[-1] == '\n';
-}
-
-inline bool stream_line_reader::getline() {
-  fixed_buffer_used_size_ = 0;
-  glowable_buffer_.clear();
-
-  for (size_t i = 0;; i++) {
-    char byte;
-    auto n = strm_.read(&byte, 1);
-
-    if (n < 0) {
-      return false;
-    } else if (n == 0) {
-      if (i == 0) {
-        return false;
-      } else {
-        break;
-      }
-    }
-
-    append(byte);
-
-    if (byte == '\n') { break; }
-  }
-
-  return true;
-}
-
-inline void stream_line_reader::append(char c) {
-  if (fixed_buffer_used_size_ < fixed_buffer_size_ - 1) {
-    fixed_buffer_[fixed_buffer_used_size_++] = c;
-    fixed_buffer_[fixed_buffer_used_size_] = '\0';
-  } else {
-    if (glowable_buffer_.empty()) {
-      assert(fixed_buffer_[fixed_buffer_used_size_] == '\0');
-      glowable_buffer_.assign(fixed_buffer_, fixed_buffer_used_size_);
-    }
-    glowable_buffer_ += c;
-  }
-}
-
-inline int close_socket(socket_t sock) {
-#ifdef _WIN32
-  return closesocket(sock);
-#else
-  return close(sock);
-#endif
-}
-
-template <typename T> inline ssize_t handle_EINTR(T fn) {
-  ssize_t res = 0;
-  while (true) {
-    res = fn();
-    if (res < 0 && errno == EINTR) { continue; }
-    break;
-  }
-  return res;
-}
-
-inline ssize_t read_socket(socket_t sock, void *ptr, size_t size, int flags) {
-  return handle_EINTR([&]() {
-    return recv(sock,
-#ifdef _WIN32
-                static_cast<char *>(ptr), static_cast<int>(size),
-#else
-                ptr, size,
-#endif
-                flags);
-  });
-}
-
-inline ssize_t send_socket(socket_t sock, const void *ptr, size_t size,
-                           int flags) {
-  return handle_EINTR([&]() {
-    return send(sock,
-#ifdef _WIN32
-                static_cast<const char *>(ptr), static_cast<int>(size),
-#else
-                ptr, size,
-#endif
-                flags);
-  });
-}
-
-inline ssize_t select_read(socket_t sock, time_t sec, time_t usec) {
-#ifdef CPPHTTPLIB_USE_POLL
-  struct pollfd pfd_read;
-  pfd_read.fd = sock;
-  pfd_read.events = POLLIN;
-
-  auto timeout = static_cast<int>(sec * 1000 + usec / 1000);
-
-  return handle_EINTR([&]() { return poll(&pfd_read, 1, timeout); });
-#else
-#ifndef _WIN32
-  if (sock >= FD_SETSIZE) { return 1; }
-#endif
-
-  fd_set fds;
-  FD_ZERO(&fds);
-  FD_SET(sock, &fds);
-
-  timeval tv;
-  tv.tv_sec = static_cast<long>(sec);
-  tv.tv_usec = static_cast<decltype(tv.tv_usec)>(usec);
-
-  return handle_EINTR([&]() {
-    return select(static_cast<int>(sock + 1), &fds, nullptr, nullptr, &tv);
-  });
-#endif
-}
-
-inline ssize_t select_write(socket_t sock, time_t sec, time_t usec) {
-#ifdef CPPHTTPLIB_USE_POLL
-  struct pollfd pfd_read;
-  pfd_read.fd = sock;
-  pfd_read.events = POLLOUT;
-
-  auto timeout = static_cast<int>(sec * 1000 + usec / 1000);
-
-  return handle_EINTR([&]() { return poll(&pfd_read, 1, timeout); });
-#else
-#ifndef _WIN32
-  if (sock >= FD_SETSIZE) { return 1; }
-#endif
-
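-  // select() cannot watch descriptors >= FD_SETSIZE, so the guard above
-  // conservatively reports the socket as ready instead of overflowing fd_set.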
- fd_set fds; - FD_ZERO(&fds); - FD_SET(sock, &fds); - - timeval tv; - tv.tv_sec = static_cast(sec); - tv.tv_usec = static_cast(usec); - - return handle_EINTR([&]() { - return select(static_cast(sock + 1), nullptr, &fds, nullptr, &tv); - }); -#endif -} - -inline Error wait_until_socket_is_ready(socket_t sock, time_t sec, - time_t usec) { -#ifdef CPPHTTPLIB_USE_POLL - struct pollfd pfd_read; - pfd_read.fd = sock; - pfd_read.events = POLLIN | POLLOUT; - - auto timeout = static_cast(sec * 1000 + usec / 1000); - - auto poll_res = handle_EINTR([&]() { return poll(&pfd_read, 1, timeout); }); - - if (poll_res == 0) { return Error::ConnectionTimeout; } - - if (poll_res > 0 && pfd_read.revents & (POLLIN | POLLOUT)) { - int error = 0; - socklen_t len = sizeof(error); - auto res = getsockopt(sock, SOL_SOCKET, SO_ERROR, - reinterpret_cast(&error), &len); - auto successful = res >= 0 && !error; - return successful ? Error::Success : Error::Connection; - } - - return Error::Connection; -#else -#ifndef _WIN32 - if (sock >= FD_SETSIZE) { return Error::Connection; } -#endif - - fd_set fdsr; - FD_ZERO(&fdsr); - FD_SET(sock, &fdsr); - - auto fdsw = fdsr; - auto fdse = fdsr; - - timeval tv; - tv.tv_sec = static_cast(sec); - tv.tv_usec = static_cast(usec); - - auto ret = handle_EINTR([&]() { - return select(static_cast(sock + 1), &fdsr, &fdsw, &fdse, &tv); - }); - - if (ret == 0) { return Error::ConnectionTimeout; } - - if (ret > 0 && (FD_ISSET(sock, &fdsr) || FD_ISSET(sock, &fdsw))) { - int error = 0; - socklen_t len = sizeof(error); - auto res = getsockopt(sock, SOL_SOCKET, SO_ERROR, - reinterpret_cast(&error), &len); - auto successful = res >= 0 && !error; - return successful ? Error::Success : Error::Connection; - } - return Error::Connection; -#endif -} - -inline bool is_socket_alive(socket_t sock) { - const auto val = detail::select_read(sock, 0, 0); - if (val == 0) { - return true; - } else if (val < 0 && errno == EBADF) { - return false; - } - char buf[1]; - return detail::read_socket(sock, &buf[0], sizeof(buf), MSG_PEEK) > 0; -} - -class SocketStream : public Stream { -public: - SocketStream(socket_t sock, time_t read_timeout_sec, time_t read_timeout_usec, - time_t write_timeout_sec, time_t write_timeout_usec); - ~SocketStream() override; - - bool is_readable() const override; - bool is_writable() const override; - ssize_t read(char *ptr, size_t size) override; - ssize_t write(const char *ptr, size_t size) override; - void get_remote_ip_and_port(std::string &ip, int &port) const override; - void get_local_ip_and_port(std::string &ip, int &port) const override; - socket_t socket() const override; - -private: - socket_t sock_; - time_t read_timeout_sec_; - time_t read_timeout_usec_; - time_t write_timeout_sec_; - time_t write_timeout_usec_; - - std::vector read_buff_; - size_t read_buff_off_ = 0; - size_t read_buff_content_size_ = 0; - - static const size_t read_buff_size_ = 1024 * 4; -}; - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT -class SSLSocketStream : public Stream { -public: - SSLSocketStream(socket_t sock, SSL *ssl, time_t read_timeout_sec, - time_t read_timeout_usec, time_t write_timeout_sec, - time_t write_timeout_usec); - ~SSLSocketStream() override; - - bool is_readable() const override; - bool is_writable() const override; - ssize_t read(char *ptr, size_t size) override; - ssize_t write(const char *ptr, size_t size) override; - void get_remote_ip_and_port(std::string &ip, int &port) const override; - void get_local_ip_and_port(std::string &ip, int &port) const override; - socket_t socket() const 
override; - -private: - socket_t sock_; - SSL *ssl_; - time_t read_timeout_sec_; - time_t read_timeout_usec_; - time_t write_timeout_sec_; - time_t write_timeout_usec_; -}; -#endif - -inline bool keep_alive(socket_t sock, time_t keep_alive_timeout_sec) { - using namespace std::chrono; - auto start = steady_clock::now(); - while (true) { - auto val = select_read(sock, 0, 10000); - if (val < 0) { - return false; - } else if (val == 0) { - auto current = steady_clock::now(); - auto duration = duration_cast(current - start); - auto timeout = keep_alive_timeout_sec * 1000; - if (duration.count() > timeout) { return false; } - std::this_thread::sleep_for(std::chrono::milliseconds(1)); - } else { - return true; - } - } -} - -template -inline bool -process_server_socket_core(const std::atomic &svr_sock, socket_t sock, - size_t keep_alive_max_count, - time_t keep_alive_timeout_sec, T callback) { - assert(keep_alive_max_count > 0); - auto ret = false; - auto count = keep_alive_max_count; - while (svr_sock != INVALID_SOCKET && count > 0 && - keep_alive(sock, keep_alive_timeout_sec)) { - auto close_connection = count == 1; - auto connection_closed = false; - ret = callback(close_connection, connection_closed); - if (!ret || connection_closed) { break; } - count--; - } - return ret; -} - -template -inline bool -process_server_socket(const std::atomic &svr_sock, socket_t sock, - size_t keep_alive_max_count, - time_t keep_alive_timeout_sec, time_t read_timeout_sec, - time_t read_timeout_usec, time_t write_timeout_sec, - time_t write_timeout_usec, T callback) { - return process_server_socket_core( - svr_sock, sock, keep_alive_max_count, keep_alive_timeout_sec, - [&](bool close_connection, bool &connection_closed) { - SocketStream strm(sock, read_timeout_sec, read_timeout_usec, - write_timeout_sec, write_timeout_usec); - return callback(strm, close_connection, connection_closed); - }); -} - -inline bool process_client_socket(socket_t sock, time_t read_timeout_sec, - time_t read_timeout_usec, - time_t write_timeout_sec, - time_t write_timeout_usec, - std::function callback) { - SocketStream strm(sock, read_timeout_sec, read_timeout_usec, - write_timeout_sec, write_timeout_usec); - return callback(strm); -} - -inline int shutdown_socket(socket_t sock) { -#ifdef _WIN32 - return shutdown(sock, SD_BOTH); -#else - return shutdown(sock, SHUT_RDWR); -#endif -} - -template -socket_t create_socket(const std::string &host, const std::string &ip, int port, - int address_family, int socket_flags, bool tcp_nodelay, - SocketOptions socket_options, - BindOrConnect bind_or_connect) { - // Get address info - const char *node = nullptr; - struct addrinfo hints; - struct addrinfo *result; - - memset(&hints, 0, sizeof(struct addrinfo)); - hints.ai_socktype = SOCK_STREAM; - hints.ai_protocol = 0; - - if (!ip.empty()) { - node = ip.c_str(); - // Ask getaddrinfo to convert IP in c-string to address - hints.ai_family = AF_UNSPEC; - hints.ai_flags = AI_NUMERICHOST; - } else { - if (!host.empty()) { node = host.c_str(); } - hints.ai_family = address_family; - hints.ai_flags = socket_flags; - } - -#ifndef _WIN32 - if (hints.ai_family == AF_UNIX) { - const auto addrlen = host.length(); - if (addrlen > sizeof(sockaddr_un::sun_path)) return INVALID_SOCKET; - - auto sock = socket(hints.ai_family, hints.ai_socktype, hints.ai_protocol); - if (sock != INVALID_SOCKET) { - sockaddr_un addr{}; - addr.sun_family = AF_UNIX; - std::copy(host.begin(), host.end(), addr.sun_path); - - hints.ai_addr = reinterpret_cast(&addr); - hints.ai_addrlen = 
static_cast( - sizeof(addr) - sizeof(addr.sun_path) + addrlen); - - fcntl(sock, F_SETFD, FD_CLOEXEC); - if (socket_options) { socket_options(sock); } - - if (!bind_or_connect(sock, hints)) { - close_socket(sock); - sock = INVALID_SOCKET; - } - } - return sock; - } -#endif - - auto service = std::to_string(port); - - if (getaddrinfo(node, service.c_str(), &hints, &result)) { -#if defined __linux__ && !defined __ANDROID__ - res_init(); -#endif - return INVALID_SOCKET; - } - - for (auto rp = result; rp; rp = rp->ai_next) { - // Create a socket -#ifdef _WIN32 - auto sock = - WSASocketW(rp->ai_family, rp->ai_socktype, rp->ai_protocol, nullptr, 0, - WSA_FLAG_NO_HANDLE_INHERIT | WSA_FLAG_OVERLAPPED); - /** - * Since the WSA_FLAG_NO_HANDLE_INHERIT is only supported on Windows 7 SP1 - * and above the socket creation fails on older Windows Systems. - * - * Let's try to create a socket the old way in this case. - * - * Reference: - * https://docs.microsoft.com/en-us/windows/win32/api/winsock2/nf-winsock2-wsasocketa - * - * WSA_FLAG_NO_HANDLE_INHERIT: - * This flag is supported on Windows 7 with SP1, Windows Server 2008 R2 with - * SP1, and later - * - */ - if (sock == INVALID_SOCKET) { - sock = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol); - } -#else - auto sock = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol); -#endif - if (sock == INVALID_SOCKET) { continue; } - -#ifndef _WIN32 - if (fcntl(sock, F_SETFD, FD_CLOEXEC) == -1) { - close_socket(sock); - continue; - } -#endif - - if (tcp_nodelay) { - int yes = 1; - setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, reinterpret_cast(&yes), - sizeof(yes)); - } - - if (socket_options) { socket_options(sock); } - - if (rp->ai_family == AF_INET6) { - int no = 0; - setsockopt(sock, IPPROTO_IPV6, IPV6_V6ONLY, reinterpret_cast(&no), - sizeof(no)); - } - - // bind or connect - if (bind_or_connect(sock, *rp)) { - freeaddrinfo(result); - return sock; - } - - close_socket(sock); - } - - freeaddrinfo(result); - return INVALID_SOCKET; -} - -inline void set_nonblocking(socket_t sock, bool nonblocking) { -#ifdef _WIN32 - auto flags = nonblocking ? 1UL : 0UL; - ioctlsocket(sock, FIONBIO, &flags); -#else - auto flags = fcntl(sock, F_GETFL, 0); - fcntl(sock, F_SETFL, - nonblocking ? 
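// --- [editor's sketch] The two per-socket options set in the loop above,
// isolated for clarity. TCP_NODELAY disables Nagle's algorithm (lower latency
// for small writes); clearing IPV6_V6ONLY makes an AF_INET6 socket dual-stack
// so it also accepts IPv4-mapped peers. apply_common_options is a
// hypothetical helper name.
#include <netinet/in.h>
#include <netinet/tcp.h>

static void apply_common_options(int sock, bool is_ipv6) {
  int yes = 1;
  setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
             reinterpret_cast<char *>(&yes), sizeof(yes));
  if (is_ipv6) {
    int no = 0; // 0 = also allow IPv4-mapped addresses on this v6 socket
    setsockopt(sock, IPPROTO_IPV6, IPV6_V6ONLY,
               reinterpret_cast<char *>(&no), sizeof(no));
  }
}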
(flags | O_NONBLOCK) : (flags & (~O_NONBLOCK))); -#endif -} - -inline bool is_connection_error() { -#ifdef _WIN32 - return WSAGetLastError() != WSAEWOULDBLOCK; -#else - return errno != EINPROGRESS; -#endif -} - -inline bool bind_ip_address(socket_t sock, const std::string &host) { - struct addrinfo hints; - struct addrinfo *result; - - memset(&hints, 0, sizeof(struct addrinfo)); - hints.ai_family = AF_UNSPEC; - hints.ai_socktype = SOCK_STREAM; - hints.ai_protocol = 0; - - if (getaddrinfo(host.c_str(), "0", &hints, &result)) { return false; } - - auto ret = false; - for (auto rp = result; rp; rp = rp->ai_next) { - const auto &ai = *rp; - if (!::bind(sock, ai.ai_addr, static_cast(ai.ai_addrlen))) { - ret = true; - break; - } - } - - freeaddrinfo(result); - return ret; -} - -#if !defined _WIN32 && !defined ANDROID && !defined _AIX -#define USE_IF2IP -#endif - -#ifdef USE_IF2IP -inline std::string if2ip(int address_family, const std::string &ifn) { - struct ifaddrs *ifap; - getifaddrs(&ifap); - std::string addr_candidate; - for (auto ifa = ifap; ifa; ifa = ifa->ifa_next) { - if (ifa->ifa_addr && ifn == ifa->ifa_name && - (AF_UNSPEC == address_family || - ifa->ifa_addr->sa_family == address_family)) { - if (ifa->ifa_addr->sa_family == AF_INET) { - auto sa = reinterpret_cast(ifa->ifa_addr); - char buf[INET_ADDRSTRLEN]; - if (inet_ntop(AF_INET, &sa->sin_addr, buf, INET_ADDRSTRLEN)) { - freeifaddrs(ifap); - return std::string(buf, INET_ADDRSTRLEN); - } - } else if (ifa->ifa_addr->sa_family == AF_INET6) { - auto sa = reinterpret_cast(ifa->ifa_addr); - if (!IN6_IS_ADDR_LINKLOCAL(&sa->sin6_addr)) { - char buf[INET6_ADDRSTRLEN] = {}; - if (inet_ntop(AF_INET6, &sa->sin6_addr, buf, INET6_ADDRSTRLEN)) { - // equivalent to mac's IN6_IS_ADDR_UNIQUE_LOCAL - auto s6_addr_head = sa->sin6_addr.s6_addr[0]; - if (s6_addr_head == 0xfc || s6_addr_head == 0xfd) { - addr_candidate = std::string(buf, INET6_ADDRSTRLEN); - } else { - freeifaddrs(ifap); - return std::string(buf, INET6_ADDRSTRLEN); - } - } - } - } - } - } - freeifaddrs(ifap); - return addr_candidate; -} -#endif - -inline socket_t create_client_socket( - const std::string &host, const std::string &ip, int port, - int address_family, bool tcp_nodelay, SocketOptions socket_options, - time_t connection_timeout_sec, time_t connection_timeout_usec, - time_t read_timeout_sec, time_t read_timeout_usec, time_t write_timeout_sec, - time_t write_timeout_usec, const std::string &intf, Error &error) { - auto sock = create_socket( - host, ip, port, address_family, 0, tcp_nodelay, std::move(socket_options), - [&](socket_t sock2, struct addrinfo &ai) -> bool { - if (!intf.empty()) { -#ifdef USE_IF2IP - auto ip_from_if = if2ip(address_family, intf); - if (ip_from_if.empty()) { ip_from_if = intf; } - if (!bind_ip_address(sock2, ip_from_if.c_str())) { - error = Error::BindIPAddress; - return false; - } -#endif - } - - set_nonblocking(sock2, true); - - auto ret = - ::connect(sock2, ai.ai_addr, static_cast(ai.ai_addrlen)); - - if (ret < 0) { - if (is_connection_error()) { - error = Error::Connection; - return false; - } - error = wait_until_socket_is_ready(sock2, connection_timeout_sec, - connection_timeout_usec); - if (error != Error::Success) { return false; } - } - - set_nonblocking(sock2, false); - - { -#ifdef _WIN32 - auto timeout = static_cast(read_timeout_sec * 1000 + - read_timeout_usec / 1000); - setsockopt(sock2, SOL_SOCKET, SO_RCVTIMEO, (char *)&timeout, - sizeof(timeout)); -#else - timeval tv; - tv.tv_sec = static_cast(read_timeout_sec); - tv.tv_usec = 
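// --- [editor's sketch] if2ip() above walks getifaddrs() and prefers a
// globally routable IPv6 address over a unique-local (fc00::/7) one. A
// reduced, IPv4-only version of the same walk; first_ipv4_of is a
// hypothetical name.
#include <arpa/inet.h>
#include <ifaddrs.h>
#include <string>

static std::string first_ipv4_of(const std::string &ifname) {
  ifaddrs *ifap = nullptr;
  std::string result;
  if (getifaddrs(&ifap) != 0) { return result; }
  for (auto *ifa = ifap; ifa; ifa = ifa->ifa_next) {
    if (!ifa->ifa_addr || ifname != ifa->ifa_name) { continue; }
    if (ifa->ifa_addr->sa_family != AF_INET) { continue; }
    char buf[INET_ADDRSTRLEN] = {};
    auto *sa = reinterpret_cast<sockaddr_in *>(ifa->ifa_addr);
    if (inet_ntop(AF_INET, &sa->sin_addr, buf, sizeof(buf))) { result = buf; }
    break;
  }
  freeifaddrs(ifap);
  return result;
}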
static_cast(read_timeout_usec); - setsockopt(sock2, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(tv)); -#endif - } - { - -#ifdef _WIN32 - auto timeout = static_cast(write_timeout_sec * 1000 + - write_timeout_usec / 1000); - setsockopt(sock2, SOL_SOCKET, SO_SNDTIMEO, (char *)&timeout, - sizeof(timeout)); -#else - timeval tv; - tv.tv_sec = static_cast(write_timeout_sec); - tv.tv_usec = static_cast(write_timeout_usec); - setsockopt(sock2, SOL_SOCKET, SO_SNDTIMEO, (char *)&tv, sizeof(tv)); -#endif - } - - error = Error::Success; - return true; - }); - - if (sock != INVALID_SOCKET) { - error = Error::Success; - } else { - if (error == Error::Success) { error = Error::Connection; } - } - - return sock; -} - -inline bool get_ip_and_port(const struct sockaddr_storage &addr, - socklen_t addr_len, std::string &ip, int &port) { - if (addr.ss_family == AF_INET) { - port = ntohs(reinterpret_cast(&addr)->sin_port); - } else if (addr.ss_family == AF_INET6) { - port = - ntohs(reinterpret_cast(&addr)->sin6_port); - } else { - return false; - } - - std::array ipstr{}; - if (getnameinfo(reinterpret_cast(&addr), addr_len, - ipstr.data(), static_cast(ipstr.size()), nullptr, - 0, NI_NUMERICHOST)) { - return false; - } - - ip = ipstr.data(); - return true; -} - -inline void get_local_ip_and_port(socket_t sock, std::string &ip, int &port) { - struct sockaddr_storage addr; - socklen_t addr_len = sizeof(addr); - if (!getsockname(sock, reinterpret_cast(&addr), - &addr_len)) { - get_ip_and_port(addr, addr_len, ip, port); - } -} - -inline void get_remote_ip_and_port(socket_t sock, std::string &ip, int &port) { - struct sockaddr_storage addr; - socklen_t addr_len = sizeof(addr); - - if (!getpeername(sock, reinterpret_cast(&addr), - &addr_len)) { -#ifndef _WIN32 - if (addr.ss_family == AF_UNIX) { -#if defined(__linux__) - struct ucred ucred; - socklen_t len = sizeof(ucred); - if (getsockopt(sock, SOL_SOCKET, SO_PEERCRED, &ucred, &len) == 0) { - port = ucred.pid; - } -#elif defined(SOL_LOCAL) && defined(SO_PEERPID) // __APPLE__ - pid_t pid; - socklen_t len = sizeof(pid); - if (getsockopt(sock, SOL_LOCAL, SO_PEERPID, &pid, &len) == 0) { - port = pid; - } -#endif - return; - } -#endif - get_ip_and_port(addr, addr_len, ip, port); - } -} - -inline constexpr unsigned int str2tag_core(const char *s, size_t l, - unsigned int h) { - return (l == 0) - ? 
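// --- [editor's sketch] The read/write timeout idiom used just above,
// condensed: Windows wants SO_RCVTIMEO as a DWORD in milliseconds, POSIX
// wants a struct timeval. set_recv_timeout is a hypothetical helper.
static void set_recv_timeout(socket_t sock, long sec, long usec) {
#ifdef _WIN32
  auto ms = static_cast<uint32_t>(sec * 1000 + usec / 1000);
  setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
             reinterpret_cast<char *>(&ms), sizeof(ms));
#else
  timeval tv{};
  tv.tv_sec = sec;
  tv.tv_usec = usec;
  setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
             reinterpret_cast<char *>(&tv), sizeof(tv));
#endif
}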
h - : str2tag_core( - s + 1, l - 1, - // Unsets the 6 high bits of h, therefore no overflow happens - (((std::numeric_limits::max)() >> 6) & - h * 33) ^ - static_cast(*s)); -} - -inline unsigned int str2tag(const std::string &s) { - return str2tag_core(s.data(), s.size(), 0); -} - -namespace udl { - -inline constexpr unsigned int operator"" _t(const char *s, size_t l) { - return str2tag_core(s, l, 0); -} - -} // namespace udl - -inline const char * -find_content_type(const std::string &path, - const std::map &user_data) { - auto ext = file_extension(path); - - auto it = user_data.find(ext); - if (it != user_data.end()) { return it->second.c_str(); } - - using udl::operator""_t; - - switch (str2tag(ext)) { - default: return nullptr; - case "css"_t: return "text/css"; - case "csv"_t: return "text/csv"; - case "htm"_t: - case "html"_t: return "text/html"; - case "js"_t: - case "mjs"_t: return "text/javascript"; - case "txt"_t: return "text/plain"; - case "vtt"_t: return "text/vtt"; - - case "apng"_t: return "image/apng"; - case "avif"_t: return "image/avif"; - case "bmp"_t: return "image/bmp"; - case "gif"_t: return "image/gif"; - case "png"_t: return "image/png"; - case "svg"_t: return "image/svg+xml"; - case "webp"_t: return "image/webp"; - case "ico"_t: return "image/x-icon"; - case "tif"_t: return "image/tiff"; - case "tiff"_t: return "image/tiff"; - case "jpg"_t: - case "jpeg"_t: return "image/jpeg"; - - case "mp4"_t: return "video/mp4"; - case "mpeg"_t: return "video/mpeg"; - case "webm"_t: return "video/webm"; - - case "mp3"_t: return "audio/mp3"; - case "mpga"_t: return "audio/mpeg"; - case "weba"_t: return "audio/webm"; - case "wav"_t: return "audio/wave"; - - case "otf"_t: return "font/otf"; - case "ttf"_t: return "font/ttf"; - case "woff"_t: return "font/woff"; - case "woff2"_t: return "font/woff2"; - - case "7z"_t: return "application/x-7z-compressed"; - case "atom"_t: return "application/atom+xml"; - case "pdf"_t: return "application/pdf"; - case "json"_t: return "application/json"; - case "rss"_t: return "application/rss+xml"; - case "tar"_t: return "application/x-tar"; - case "xht"_t: - case "xhtml"_t: return "application/xhtml+xml"; - case "xslt"_t: return "application/xslt+xml"; - case "xml"_t: return "application/xml"; - case "gz"_t: return "application/gzip"; - case "zip"_t: return "application/zip"; - case "wasm"_t: return "application/wasm"; - } -} - -inline const char *status_message(int status) { - switch (status) { - case 100: return "Continue"; - case 101: return "Switching Protocol"; - case 102: return "Processing"; - case 103: return "Early Hints"; - case 200: return "OK"; - case 201: return "Created"; - case 202: return "Accepted"; - case 203: return "Non-Authoritative Information"; - case 204: return "No Content"; - case 205: return "Reset Content"; - case 206: return "Partial Content"; - case 207: return "Multi-Status"; - case 208: return "Already Reported"; - case 226: return "IM Used"; - case 300: return "Multiple Choice"; - case 301: return "Moved Permanently"; - case 302: return "Found"; - case 303: return "See Other"; - case 304: return "Not Modified"; - case 305: return "Use Proxy"; - case 306: return "unused"; - case 307: return "Temporary Redirect"; - case 308: return "Permanent Redirect"; - case 400: return "Bad Request"; - case 401: return "Unauthorized"; - case 402: return "Payment Required"; - case 403: return "Forbidden"; - case 404: return "Not Found"; - case 405: return "Method Not Allowed"; - case 406: return "Not Acceptable"; - case 407: return 
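// --- [editor's sketch] How the str2tag()/""_t pair above lets code switch on
// strings: both sides run the same constexpr hash, so the case labels are
// compile-time constants while the switch argument is hashed at run time.
// dispatch_example is a hypothetical name.
static const char *dispatch_example(const std::string &ext) {
  using udl::operator""_t;
  switch (str2tag(ext)) {
  case "json"_t: return "application/json"; // "json"_t hashed at compile time
  case "html"_t: return "text/html";
  default: return "application/octet-stream";
  }
}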
"Proxy Authentication Required"; - case 408: return "Request Timeout"; - case 409: return "Conflict"; - case 410: return "Gone"; - case 411: return "Length Required"; - case 412: return "Precondition Failed"; - case 413: return "Payload Too Large"; - case 414: return "URI Too Long"; - case 415: return "Unsupported Media Type"; - case 416: return "Range Not Satisfiable"; - case 417: return "Expectation Failed"; - case 418: return "I'm a teapot"; - case 421: return "Misdirected Request"; - case 422: return "Unprocessable Entity"; - case 423: return "Locked"; - case 424: return "Failed Dependency"; - case 425: return "Too Early"; - case 426: return "Upgrade Required"; - case 428: return "Precondition Required"; - case 429: return "Too Many Requests"; - case 431: return "Request Header Fields Too Large"; - case 451: return "Unavailable For Legal Reasons"; - case 501: return "Not Implemented"; - case 502: return "Bad Gateway"; - case 503: return "Service Unavailable"; - case 504: return "Gateway Timeout"; - case 505: return "HTTP Version Not Supported"; - case 506: return "Variant Also Negotiates"; - case 507: return "Insufficient Storage"; - case 508: return "Loop Detected"; - case 510: return "Not Extended"; - case 511: return "Network Authentication Required"; - - default: - case 500: return "Internal Server Error"; - } -} - -inline bool can_compress_content_type(const std::string &content_type) { - using udl::operator""_t; - - auto tag = str2tag(content_type); - - switch (tag) { - case "image/svg+xml"_t: - case "application/javascript"_t: - case "application/json"_t: - case "application/xml"_t: - case "application/protobuf"_t: - case "application/xhtml+xml"_t: return true; - - default: - return !content_type.rfind("text/", 0) && tag != "text/event-stream"_t; - } -} - -inline EncodingType encoding_type(const Request &req, const Response &res) { - auto ret = - detail::can_compress_content_type(res.get_header_value("Content-Type")); - if (!ret) { return EncodingType::None; } - - const auto &s = req.get_header_value("Accept-Encoding"); - (void)(s); - -#ifdef CPPHTTPLIB_BROTLI_SUPPORT - // TODO: 'Accept-Encoding' has br, not br;q=0 - ret = s.find("br") != std::string::npos; - if (ret) { return EncodingType::Brotli; } -#endif - -#ifdef CPPHTTPLIB_ZLIB_SUPPORT - // TODO: 'Accept-Encoding' has gzip, not gzip;q=0 - ret = s.find("gzip") != std::string::npos; - if (ret) { return EncodingType::Gzip; } -#endif - - return EncodingType::None; -} - -inline bool nocompressor::compress(const char *data, size_t data_length, - bool /*last*/, Callback callback) { - if (!data_length) { return true; } - return callback(data, data_length); -} - -#ifdef CPPHTTPLIB_ZLIB_SUPPORT -inline gzip_compressor::gzip_compressor() { - std::memset(&strm_, 0, sizeof(strm_)); - strm_.zalloc = Z_NULL; - strm_.zfree = Z_NULL; - strm_.opaque = Z_NULL; - - is_valid_ = deflateInit2(&strm_, Z_DEFAULT_COMPRESSION, Z_DEFLATED, 31, 8, - Z_DEFAULT_STRATEGY) == Z_OK; -} - -inline gzip_compressor::~gzip_compressor() { deflateEnd(&strm_); } - -inline bool gzip_compressor::compress(const char *data, size_t data_length, - bool last, Callback callback) { - assert(is_valid_); - - do { - constexpr size_t max_avail_in = - (std::numeric_limits::max)(); - - strm_.avail_in = static_cast( - (std::min)(data_length, max_avail_in)); - strm_.next_in = const_cast(reinterpret_cast(data)); - - data_length -= strm_.avail_in; - data += strm_.avail_in; - - auto flush = (last && data_length == 0) ? 
Z_FINISH : Z_NO_FLUSH; - int ret = Z_OK; - - std::array buff{}; - do { - strm_.avail_out = static_cast(buff.size()); - strm_.next_out = reinterpret_cast(buff.data()); - - ret = deflate(&strm_, flush); - if (ret == Z_STREAM_ERROR) { return false; } - - if (!callback(buff.data(), buff.size() - strm_.avail_out)) { - return false; - } - } while (strm_.avail_out == 0); - - assert((flush == Z_FINISH && ret == Z_STREAM_END) || - (flush == Z_NO_FLUSH && ret == Z_OK)); - assert(strm_.avail_in == 0); - } while (data_length > 0); - - return true; -} - -inline gzip_decompressor::gzip_decompressor() { - std::memset(&strm_, 0, sizeof(strm_)); - strm_.zalloc = Z_NULL; - strm_.zfree = Z_NULL; - strm_.opaque = Z_NULL; - - // 15 is the value of wbits, which should be at the maximum possible value - // to ensure that any gzip stream can be decoded. The offset of 32 specifies - // that the stream type should be automatically detected either gzip or - // deflate. - is_valid_ = inflateInit2(&strm_, 32 + 15) == Z_OK; -} - -inline gzip_decompressor::~gzip_decompressor() { inflateEnd(&strm_); } - -inline bool gzip_decompressor::is_valid() const { return is_valid_; } - -inline bool gzip_decompressor::decompress(const char *data, size_t data_length, - Callback callback) { - assert(is_valid_); - - int ret = Z_OK; - - do { - constexpr size_t max_avail_in = - (std::numeric_limits::max)(); - - strm_.avail_in = static_cast( - (std::min)(data_length, max_avail_in)); - strm_.next_in = const_cast(reinterpret_cast(data)); - - data_length -= strm_.avail_in; - data += strm_.avail_in; - - std::array buff{}; - while (strm_.avail_in > 0) { - strm_.avail_out = static_cast(buff.size()); - strm_.next_out = reinterpret_cast(buff.data()); - - auto prev_avail_in = strm_.avail_in; - - ret = inflate(&strm_, Z_NO_FLUSH); - - if (prev_avail_in - strm_.avail_in == 0) { return false; } - - assert(ret != Z_STREAM_ERROR); - switch (ret) { - case Z_NEED_DICT: - case Z_DATA_ERROR: - case Z_MEM_ERROR: inflateEnd(&strm_); return false; - } - - if (!callback(buff.data(), buff.size() - strm_.avail_out)) { - return false; - } - } - - if (ret != Z_OK && ret != Z_STREAM_END) return false; - - } while (data_length > 0); - - return true; -} -#endif - -#ifdef CPPHTTPLIB_BROTLI_SUPPORT -inline brotli_compressor::brotli_compressor() { - state_ = BrotliEncoderCreateInstance(nullptr, nullptr, nullptr); -} - -inline brotli_compressor::~brotli_compressor() { - BrotliEncoderDestroyInstance(state_); -} - -inline bool brotli_compressor::compress(const char *data, size_t data_length, - bool last, Callback callback) { - std::array buff{}; - - auto operation = last ? BROTLI_OPERATION_FINISH : BROTLI_OPERATION_PROCESS; - auto available_in = data_length; - auto next_in = reinterpret_cast(data); - - for (;;) { - if (last) { - if (BrotliEncoderIsFinished(state_)) { break; } - } else { - if (!available_in) { break; } - } - - auto available_out = buff.size(); - auto next_out = buff.data(); - - if (!BrotliEncoderCompressStream(state_, operation, &available_in, &next_in, - &available_out, &next_out, nullptr)) { - return false; - } - - auto output_bytes = buff.size() - available_out; - if (output_bytes) { - callback(reinterpret_cast(buff.data()), output_bytes); - } - } - - return true; -} - -inline brotli_decompressor::brotli_decompressor() { - decoder_s = BrotliDecoderCreateInstance(0, 0, 0); - decoder_r = decoder_s ? 
BROTLI_DECODER_RESULT_NEEDS_MORE_INPUT - : BROTLI_DECODER_RESULT_ERROR; -} - -inline brotli_decompressor::~brotli_decompressor() { - if (decoder_s) { BrotliDecoderDestroyInstance(decoder_s); } -} - -inline bool brotli_decompressor::is_valid() const { return decoder_s; } - -inline bool brotli_decompressor::decompress(const char *data, - size_t data_length, - Callback callback) { - if (decoder_r == BROTLI_DECODER_RESULT_SUCCESS || - decoder_r == BROTLI_DECODER_RESULT_ERROR) { - return 0; - } - - const uint8_t *next_in = (const uint8_t *)data; - size_t avail_in = data_length; - size_t total_out; - - decoder_r = BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT; - - std::array buff{}; - while (decoder_r == BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT) { - char *next_out = buff.data(); - size_t avail_out = buff.size(); - - decoder_r = BrotliDecoderDecompressStream( - decoder_s, &avail_in, &next_in, &avail_out, - reinterpret_cast(&next_out), &total_out); - - if (decoder_r == BROTLI_DECODER_RESULT_ERROR) { return false; } - - if (!callback(buff.data(), buff.size() - avail_out)) { return false; } - } - - return decoder_r == BROTLI_DECODER_RESULT_SUCCESS || - decoder_r == BROTLI_DECODER_RESULT_NEEDS_MORE_INPUT; -} -#endif - -inline bool has_header(const Headers &headers, const std::string &key) { - return headers.find(key) != headers.end(); -} - -inline const char *get_header_value(const Headers &headers, - const std::string &key, size_t id, - const char *def) { - auto rng = headers.equal_range(key); - auto it = rng.first; - std::advance(it, static_cast(id)); - if (it != rng.second) { return it->second.c_str(); } - return def; -} - -inline bool compare_case_ignore(const std::string &a, const std::string &b) { - if (a.size() != b.size()) { return false; } - for (size_t i = 0; i < b.size(); i++) { - if (::tolower(a[i]) != ::tolower(b[i])) { return false; } - } - return true; -} - -template -inline bool parse_header(const char *beg, const char *end, T fn) { - // Skip trailing spaces and tabs. - while (beg < end && is_space_or_tab(end[-1])) { - end--; - } - - auto p = beg; - while (p < end && *p != ':') { - p++; - } - - if (p == end) { return false; } - - auto key_end = p; - - if (*p++ != ':') { return false; } - - while (p < end && is_space_or_tab(*p)) { - p++; - } - - if (p < end) { - auto key = std::string(beg, key_end); - auto val = compare_case_ignore(key, "Location") - ? std::string(p, end) - : decode_url(std::string(p, end), false); - fn(std::move(key), std::move(val)); - return true; - } - - return false; -} - -inline bool read_headers(Stream &strm, Headers &headers) { - const auto bufsiz = 2048; - char buf[bufsiz]; - stream_line_reader line_reader(strm, buf, bufsiz); - - for (;;) { - if (!line_reader.getline()) { return false; } - - // Check if the line ends with CRLF. - auto line_terminator_len = 2; - if (line_reader.end_with_crlf()) { - // Blank line indicates end of headers. - if (line_reader.size() == 2) { break; } -#ifdef CPPHTTPLIB_ALLOW_LF_AS_LINE_TERMINATOR - } else { - // Blank line indicates end of headers. - if (line_reader.size() == 1) { break; } - line_terminator_len = 1; - } -#else - } else { - continue; // Skip invalid line. 
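// --- [editor's sketch] Headers is a multimap, so repeated fields such as
// Set-Cookie keep every value; the `id` parameter of get_header_value() above
// selects the id-th occurrence. Standalone illustration with a plain
// std::multimap (the real Headers type also compares keys
// case-insensitively); nth_value is a hypothetical name.
#include <iterator>
#include <map>
#include <string>

static std::string nth_value(const std::multimap<std::string, std::string> &h,
                             const std::string &key, size_t id) {
  auto rng = h.equal_range(key); // every entry stored under this field name
  auto n = static_cast<size_t>(std::distance(rng.first, rng.second));
  if (id >= n) { return {}; }
  auto it = rng.first;
  std::advance(it, static_cast<long>(id));
  return it->second;
}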
- } -#endif - - if (line_reader.size() > CPPHTTPLIB_HEADER_MAX_LENGTH) { return false; } - - // Exclude line terminator - auto end = line_reader.ptr() + line_reader.size() - line_terminator_len; - - parse_header(line_reader.ptr(), end, - [&](std::string &&key, std::string &&val) { - headers.emplace(std::move(key), std::move(val)); - }); - } - - return true; -} - -inline bool read_content_with_length(Stream &strm, uint64_t len, - Progress progress, - ContentReceiverWithProgress out) { - char buf[CPPHTTPLIB_RECV_BUFSIZ]; - - uint64_t r = 0; - while (r < len) { - auto read_len = static_cast(len - r); - auto n = strm.read(buf, (std::min)(read_len, CPPHTTPLIB_RECV_BUFSIZ)); - if (n <= 0) { return false; } - - if (!out(buf, static_cast(n), r, len)) { return false; } - r += static_cast(n); - - if (progress) { - if (!progress(r, len)) { return false; } - } - } - - return true; -} - -inline void skip_content_with_length(Stream &strm, uint64_t len) { - char buf[CPPHTTPLIB_RECV_BUFSIZ]; - uint64_t r = 0; - while (r < len) { - auto read_len = static_cast(len - r); - auto n = strm.read(buf, (std::min)(read_len, CPPHTTPLIB_RECV_BUFSIZ)); - if (n <= 0) { return; } - r += static_cast(n); - } -} - -inline bool read_content_without_length(Stream &strm, - ContentReceiverWithProgress out) { - char buf[CPPHTTPLIB_RECV_BUFSIZ]; - uint64_t r = 0; - for (;;) { - auto n = strm.read(buf, CPPHTTPLIB_RECV_BUFSIZ); - if (n < 0) { - return false; - } else if (n == 0) { - return true; - } - - if (!out(buf, static_cast(n), r, 0)) { return false; } - r += static_cast(n); - } - - return true; -} - -template -inline bool read_content_chunked(Stream &strm, T &x, - ContentReceiverWithProgress out) { - const auto bufsiz = 16; - char buf[bufsiz]; - - stream_line_reader line_reader(strm, buf, bufsiz); - - if (!line_reader.getline()) { return false; } - - unsigned long chunk_len; - while (true) { - char *end_ptr; - - chunk_len = std::strtoul(line_reader.ptr(), &end_ptr, 16); - - if (end_ptr == line_reader.ptr()) { return false; } - if (chunk_len == ULONG_MAX) { return false; } - - if (chunk_len == 0) { break; } - - if (!read_content_with_length(strm, chunk_len, nullptr, out)) { - return false; - } - - if (!line_reader.getline()) { return false; } - - if (strcmp(line_reader.ptr(), "\r\n")) { return false; } - - if (!line_reader.getline()) { return false; } - } - - assert(chunk_len == 0); - - // Trailer - if (!line_reader.getline()) { return false; } - - while (strcmp(line_reader.ptr(), "\r\n")) { - if (line_reader.size() > CPPHTTPLIB_HEADER_MAX_LENGTH) { return false; } - - // Exclude line terminator - constexpr auto line_terminator_len = 2; - auto end = line_reader.ptr() + line_reader.size() - line_terminator_len; - - parse_header(line_reader.ptr(), end, - [&](std::string &&key, std::string &&val) { - x.headers.emplace(std::move(key), std::move(val)); - }); - - if (!line_reader.getline()) { return false; } - } - - return true; -} - -inline bool is_chunked_transfer_encoding(const Headers &headers) { - return !strcasecmp(get_header_value(headers, "Transfer-Encoding", 0, ""), - "chunked"); -} - -template -bool prepare_content_receiver(T &x, int &status, - ContentReceiverWithProgress receiver, - bool decompress, U callback) { - if (decompress) { - std::string encoding = x.get_header_value("Content-Encoding"); - std::unique_ptr decompressor; - - if (encoding == "gzip" || encoding == "deflate") { -#ifdef CPPHTTPLIB_ZLIB_SUPPORT - decompressor = detail::make_unique(); -#else - status = 415; - return false; -#endif - } else if 
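// --- [editor's sketch] The wire framing consumed by read_content_chunked()
// above, decoded from a complete in-memory string for clarity (the real code
// streams, and also parses trailer fields, which this sketch skips). dechunk
// is a hypothetical name.
#include <cstdlib>
#include <string>

static bool dechunk(const std::string &in, std::string &out) {
  size_t pos = 0;
  for (;;) {
    char *end = nullptr;
    auto len = std::strtoul(in.c_str() + pos, &end, 16); // "<hex-size>\r\n"
    if (end == in.c_str() + pos) { return false; }       // no digits found
    pos = static_cast<size_t>(end - in.c_str()) + 2;     // step past CRLF
    if (len == 0) { return true; }                       // "0\r\n" terminator
    if (pos + len + 2 > in.size()) { return false; }     // truncated input
    out.append(in, pos, len);
    pos += len + 2;                                      // data + CRLF
  }
}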
(encoding.find("br") != std::string::npos) { -#ifdef CPPHTTPLIB_BROTLI_SUPPORT - decompressor = detail::make_unique(); -#else - status = 415; - return false; -#endif - } - - if (decompressor) { - if (decompressor->is_valid()) { - ContentReceiverWithProgress out = [&](const char *buf, size_t n, - uint64_t off, uint64_t len) { - return decompressor->decompress(buf, n, - [&](const char *buf2, size_t n2) { - return receiver(buf2, n2, off, len); - }); - }; - return callback(std::move(out)); - } else { - status = 500; - return false; - } - } - } - - ContentReceiverWithProgress out = [&](const char *buf, size_t n, uint64_t off, - uint64_t len) { - return receiver(buf, n, off, len); - }; - return callback(std::move(out)); -} - -template -bool read_content(Stream &strm, T &x, size_t payload_max_length, int &status, - Progress progress, ContentReceiverWithProgress receiver, - bool decompress) { - return prepare_content_receiver( - x, status, std::move(receiver), decompress, - [&](const ContentReceiverWithProgress &out) { - auto ret = true; - auto exceed_payload_max_length = false; - - if (is_chunked_transfer_encoding(x.headers)) { - ret = read_content_chunked(strm, x, out); - } else if (!has_header(x.headers, "Content-Length")) { - ret = read_content_without_length(strm, out); - } else { - auto len = get_header_value(x.headers, "Content-Length"); - if (len > payload_max_length) { - exceed_payload_max_length = true; - skip_content_with_length(strm, len); - ret = false; - } else if (len > 0) { - ret = read_content_with_length(strm, len, std::move(progress), out); - } - } - - if (!ret) { status = exceed_payload_max_length ? 413 : 400; } - return ret; - }); -} // namespace detail - -inline ssize_t write_headers(Stream &strm, const Headers &headers) { - ssize_t write_len = 0; - for (const auto &x : headers) { - auto len = - strm.write_format("%s: %s\r\n", x.first.c_str(), x.second.c_str()); - if (len < 0) { return len; } - write_len += len; - } - auto len = strm.write("\r\n"); - if (len < 0) { return len; } - write_len += len; - return write_len; -} - -inline bool write_data(Stream &strm, const char *d, size_t l) { - size_t offset = 0; - while (offset < l) { - auto length = strm.write(d + offset, l - offset); - if (length < 0) { return false; } - offset += static_cast(length); - } - return true; -} - -template -inline bool write_content(Stream &strm, const ContentProvider &content_provider, - size_t offset, size_t length, T is_shutting_down, - Error &error) { - size_t end_offset = offset + length; - auto ok = true; - DataSink data_sink; - - data_sink.write = [&](const char *d, size_t l) -> bool { - if (ok) { - if (strm.is_writable() && write_data(strm, d, l)) { - offset += l; - } else { - ok = false; - } - } - return ok; - }; - - while (offset < end_offset && !is_shutting_down()) { - if (!strm.is_writable()) { - error = Error::Write; - return false; - } else if (!content_provider(offset, end_offset - offset, data_sink)) { - error = Error::Canceled; - return false; - } else if (!ok) { - error = Error::Write; - return false; - } - } - - error = Error::Success; - return true; -} - -template -inline bool write_content(Stream &strm, const ContentProvider &content_provider, - size_t offset, size_t length, - const T &is_shutting_down) { - auto error = Error::Success; - return write_content(strm, content_provider, offset, length, is_shutting_down, - error); -} - -template -inline bool -write_content_without_length(Stream &strm, - const ContentProvider &content_provider, - const T &is_shutting_down) { - size_t 
offset = 0; - auto data_available = true; - auto ok = true; - DataSink data_sink; - - data_sink.write = [&](const char *d, size_t l) -> bool { - if (ok) { - offset += l; - if (!strm.is_writable() || !write_data(strm, d, l)) { ok = false; } - } - return ok; - }; - - data_sink.done = [&](void) { data_available = false; }; - - while (data_available && !is_shutting_down()) { - if (!strm.is_writable()) { - return false; - } else if (!content_provider(offset, 0, data_sink)) { - return false; - } else if (!ok) { - return false; - } - } - return true; -} - -template -inline bool -write_content_chunked(Stream &strm, const ContentProvider &content_provider, - const T &is_shutting_down, U &compressor, Error &error) { - size_t offset = 0; - auto data_available = true; - auto ok = true; - DataSink data_sink; - - data_sink.write = [&](const char *d, size_t l) -> bool { - if (ok) { - data_available = l > 0; - offset += l; - - std::string payload; - if (compressor.compress(d, l, false, - [&](const char *data, size_t data_len) { - payload.append(data, data_len); - return true; - })) { - if (!payload.empty()) { - // Emit chunked response header and footer for each chunk - auto chunk = - from_i_to_hex(payload.size()) + "\r\n" + payload + "\r\n"; - if (!strm.is_writable() || - !write_data(strm, chunk.data(), chunk.size())) { - ok = false; - } - } - } else { - ok = false; - } - } - return ok; - }; - - auto done_with_trailer = [&](const Headers *trailer) { - if (!ok) { return; } - - data_available = false; - - std::string payload; - if (!compressor.compress(nullptr, 0, true, - [&](const char *data, size_t data_len) { - payload.append(data, data_len); - return true; - })) { - ok = false; - return; - } - - if (!payload.empty()) { - // Emit chunked response header and footer for each chunk - auto chunk = from_i_to_hex(payload.size()) + "\r\n" + payload + "\r\n"; - if (!strm.is_writable() || - !write_data(strm, chunk.data(), chunk.size())) { - ok = false; - return; - } - } - - static const std::string done_marker("0\r\n"); - if (!write_data(strm, done_marker.data(), done_marker.size())) { - ok = false; - } - - // Trailer - if (trailer) { - for (const auto &kv : *trailer) { - std::string field_line = kv.first + ": " + kv.second + "\r\n"; - if (!write_data(strm, field_line.data(), field_line.size())) { - ok = false; - } - } - } - - static const std::string crlf("\r\n"); - if (!write_data(strm, crlf.data(), crlf.size())) { ok = false; } - }; - - data_sink.done = [&](void) { done_with_trailer(nullptr); }; - - data_sink.done_with_trailer = [&](const Headers &trailer) { - done_with_trailer(&trailer); - }; - - while (data_available && !is_shutting_down()) { - if (!strm.is_writable()) { - error = Error::Write; - return false; - } else if (!content_provider(offset, 0, data_sink)) { - error = Error::Canceled; - return false; - } else if (!ok) { - error = Error::Write; - return false; - } - } - - error = Error::Success; - return true; -} - -template -inline bool write_content_chunked(Stream &strm, - const ContentProvider &content_provider, - const T &is_shutting_down, U &compressor) { - auto error = Error::Success; - return write_content_chunked(strm, content_provider, is_shutting_down, - compressor, error); -} - -template -inline bool redirect(T &cli, Request &req, Response &res, - const std::string &path, const std::string &location, - Error &error) { - Request new_req = req; - new_req.path = path; - new_req.redirect_count_ -= 1; - - if (res.status == 303 && (req.method != "GET" && req.method != "HEAD")) { - new_req.method 
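// --- [editor's sketch] The frame emitted per chunk by write_content_chunked()
// above: hex size, CRLF, payload, CRLF; the body then ends with "0\r\n",
// optional trailer fields, and a final CRLF. make_chunk is a hypothetical
// name (the library uses its own from_i_to_hex helper).
#include <cstdio>
#include <string>

static std::string make_chunk(const std::string &payload) {
  char size_hex[32];
  std::snprintf(size_hex, sizeof(size_hex), "%zx", payload.size());
  return std::string(size_hex) + "\r\n" + payload + "\r\n";
}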
= "GET"; - new_req.body.clear(); - new_req.headers.clear(); - } - - Response new_res; - - auto ret = cli.send(new_req, new_res, error); - if (ret) { - req = new_req; - res = new_res; - res.location = location; - } - return ret; -} - -inline std::string params_to_query_str(const Params ¶ms) { - std::string query; - - for (auto it = params.begin(); it != params.end(); ++it) { - if (it != params.begin()) { query += "&"; } - query += it->first; - query += "="; - query += encode_query_param(it->second); - } - return query; -} - -inline void parse_query_text(const std::string &s, Params ¶ms) { - std::set cache; - split(s.data(), s.data() + s.size(), '&', [&](const char *b, const char *e) { - std::string kv(b, e); - if (cache.find(kv) != cache.end()) { return; } - cache.insert(kv); - - std::string key; - std::string val; - split(b, e, '=', [&](const char *b2, const char *e2) { - if (key.empty()) { - key.assign(b2, e2); - } else { - val.assign(b2, e2); - } - }); - - if (!key.empty()) { - params.emplace(decode_url(key, true), decode_url(val, true)); - } - }); -} - -inline bool parse_multipart_boundary(const std::string &content_type, - std::string &boundary) { - auto boundary_keyword = "boundary="; - auto pos = content_type.find(boundary_keyword); - if (pos == std::string::npos) { return false; } - auto end = content_type.find(';', pos); - auto beg = pos + strlen(boundary_keyword); - boundary = content_type.substr(beg, end - beg); - if (boundary.length() >= 2 && boundary.front() == '"' && - boundary.back() == '"') { - boundary = boundary.substr(1, boundary.size() - 2); - } - return !boundary.empty(); -} - -#ifdef CPPHTTPLIB_NO_EXCEPTIONS -inline bool parse_range_header(const std::string &s, Ranges &ranges) { -#else -inline bool parse_range_header(const std::string &s, Ranges &ranges) try { -#endif - static auto re_first_range = std::regex(R"(bytes=(\d*-\d*(?:,\s*\d*-\d*)*))"); - std::smatch m; - if (std::regex_match(s, m, re_first_range)) { - auto pos = static_cast(m.position(1)); - auto len = static_cast(m.length(1)); - bool all_valid_ranges = true; - split(&s[pos], &s[pos + len], ',', [&](const char *b, const char *e) { - if (!all_valid_ranges) return; - static auto re_another_range = std::regex(R"(\s*(\d*)-(\d*))"); - std::cmatch cm; - if (std::regex_match(b, e, cm, re_another_range)) { - ssize_t first = -1; - if (!cm.str(1).empty()) { - first = static_cast(std::stoll(cm.str(1))); - } - - ssize_t last = -1; - if (!cm.str(2).empty()) { - last = static_cast(std::stoll(cm.str(2))); - } - - if (first != -1 && last != -1 && first > last) { - all_valid_ranges = false; - return; - } - ranges.emplace_back(std::make_pair(first, last)); - } - }); - return all_valid_ranges; - } - return false; -#ifdef CPPHTTPLIB_NO_EXCEPTIONS -} -#else -} catch (...) 
{ return false; } -#endif - -class MultipartFormDataParser { -public: - MultipartFormDataParser() = default; - - void set_boundary(std::string &&boundary) { - boundary_ = boundary; - dash_boundary_crlf_ = dash_ + boundary_ + crlf_; - crlf_dash_boundary_ = crlf_ + dash_ + boundary_; - } - - bool is_valid() const { return is_valid_; } - - bool parse(const char *buf, size_t n, const ContentReceiver &content_callback, - const MultipartContentHeader &header_callback) { - - // TODO: support 'filename*' - static const std::regex re_content_disposition( - R"~(^Content-Disposition:\s*form-data;\s*name="(.*?)"(?:;\s*filename="(.*?)")?(?:;\s*filename\*=\S+)?\s*$)~", - std::regex_constants::icase); - - buf_append(buf, n); - - while (buf_size() > 0) { - switch (state_) { - case 0: { // Initial boundary - buf_erase(buf_find(dash_boundary_crlf_)); - if (dash_boundary_crlf_.size() > buf_size()) { return true; } - if (!buf_start_with(dash_boundary_crlf_)) { return false; } - buf_erase(dash_boundary_crlf_.size()); - state_ = 1; - break; - } - case 1: { // New entry - clear_file_info(); - state_ = 2; - break; - } - case 2: { // Headers - auto pos = buf_find(crlf_); - if (pos > CPPHTTPLIB_HEADER_MAX_LENGTH) { return false; } - while (pos < buf_size()) { - // Empty line - if (pos == 0) { - if (!header_callback(file_)) { - is_valid_ = false; - return false; - } - buf_erase(crlf_.size()); - state_ = 3; - break; - } - - static const std::string header_name = "content-type:"; - const auto header = buf_head(pos); - if (start_with_case_ignore(header, header_name)) { - file_.content_type = trim_copy(header.substr(header_name.size())); - } else { - std::smatch m; - if (std::regex_match(header, m, re_content_disposition)) { - file_.name = m[1]; - file_.filename = m[2]; - } else { - is_valid_ = false; - return false; - } - } - buf_erase(pos + crlf_.size()); - pos = buf_find(crlf_); - } - if (state_ != 3) { return true; } - break; - } - case 3: { // Body - if (crlf_dash_boundary_.size() > buf_size()) { return true; } - auto pos = buf_find(crlf_dash_boundary_); - if (pos < buf_size()) { - if (!content_callback(buf_data(), pos)) { - is_valid_ = false; - return false; - } - buf_erase(pos + crlf_dash_boundary_.size()); - state_ = 4; - } else { - auto len = buf_size() - crlf_dash_boundary_.size(); - if (len > 0) { - if (!content_callback(buf_data(), len)) { - is_valid_ = false; - return false; - } - buf_erase(len); - } - return true; - } - break; - } - case 4: { // Boundary - if (crlf_.size() > buf_size()) { return true; } - if (buf_start_with(crlf_)) { - buf_erase(crlf_.size()); - state_ = 1; - } else { - if (dash_crlf_.size() > buf_size()) { return true; } - if (buf_start_with(dash_crlf_)) { - buf_erase(dash_crlf_.size()); - is_valid_ = true; - buf_erase(buf_size()); // Remove epilogue - } else { - return true; - } - } - break; - } - } - } - - return true; - } - -private: - void clear_file_info() { - file_.name.clear(); - file_.filename.clear(); - file_.content_type.clear(); - } - - bool start_with_case_ignore(const std::string &a, - const std::string &b) const { - if (a.size() < b.size()) { return false; } - for (size_t i = 0; i < b.size(); i++) { - if (::tolower(a[i]) != ::tolower(b[i])) { return false; } - } - return true; - } - - const std::string dash_ = "--"; - const std::string crlf_ = "\r\n"; - const std::string dash_crlf_ = "--\r\n"; - std::string boundary_; - std::string dash_boundary_crlf_; - std::string crlf_dash_boundary_; - - size_t state_ = 0; - bool is_valid_ = false; - MultipartFormData file_; - - // Buffer 
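// --- [editor's sketch] Driving the MultipartFormDataParser above
// incrementally, as the server does: call set_boundary() once, then parse()
// per network read; the content callback may fire many times per part.
// feed_parser is a hypothetical name.
static bool feed_parser(MultipartFormDataParser &p, const char *data,
                        size_t n) {
  return p.parse(
      data, n,
      [](const char *, size_t) { return true; },       // slice of a part body
      [](const MultipartFormData &) { return true; }); // part headers done
}
// After the last read, p.is_valid() reports whether the closing
// "--boundary--" marker was seen.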
- bool start_with(const std::string &a, size_t spos, size_t epos, - const std::string &b) const { - if (epos - spos < b.size()) { return false; } - for (size_t i = 0; i < b.size(); i++) { - if (a[i + spos] != b[i]) { return false; } - } - return true; - } - - size_t buf_size() const { return buf_epos_ - buf_spos_; } - - const char *buf_data() const { return &buf_[buf_spos_]; } - - std::string buf_head(size_t l) const { return buf_.substr(buf_spos_, l); } - - bool buf_start_with(const std::string &s) const { - return start_with(buf_, buf_spos_, buf_epos_, s); - } - - size_t buf_find(const std::string &s) const { - auto c = s.front(); - - size_t off = buf_spos_; - while (off < buf_epos_) { - auto pos = off; - while (true) { - if (pos == buf_epos_) { return buf_size(); } - if (buf_[pos] == c) { break; } - pos++; - } - - auto remaining_size = buf_epos_ - pos; - if (s.size() > remaining_size) { return buf_size(); } - - if (start_with(buf_, pos, buf_epos_, s)) { return pos - buf_spos_; } - - off = pos + 1; - } - - return buf_size(); - } - - void buf_append(const char *data, size_t n) { - auto remaining_size = buf_size(); - if (remaining_size > 0 && buf_spos_ > 0) { - for (size_t i = 0; i < remaining_size; i++) { - buf_[i] = buf_[buf_spos_ + i]; - } - } - buf_spos_ = 0; - buf_epos_ = remaining_size; - - if (remaining_size + n > buf_.size()) { buf_.resize(remaining_size + n); } - - for (size_t i = 0; i < n; i++) { - buf_[buf_epos_ + i] = data[i]; - } - buf_epos_ += n; - } - - void buf_erase(size_t size) { buf_spos_ += size; } - - std::string buf_; - size_t buf_spos_ = 0; - size_t buf_epos_ = 0; -}; - -inline std::string to_lower(const char *beg, const char *end) { - std::string out; - auto it = beg; - while (it != end) { - out += static_cast(::tolower(*it)); - it++; - } - return out; -} - -inline std::string make_multipart_data_boundary() { - static const char data[] = - "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"; - - // std::random_device might actually be deterministic on some - // platforms, but due to lack of support in the c++ standard library, - // doing better requires either some ugly hacks or breaking portability. 
- std::random_device seed_gen; - - // Request 128 bits of entropy for initialization - std::seed_seq seed_sequence{seed_gen(), seed_gen(), seed_gen(), seed_gen()}; - std::mt19937 engine(seed_sequence); - - std::string result = "--cpp-httplib-multipart-data-"; - - for (auto i = 0; i < 16; i++) { - result += data[engine() % (sizeof(data) - 1)]; - } - - return result; -} - -inline bool is_multipart_boundary_chars_valid(const std::string &boundary) { - auto valid = true; - for (size_t i = 0; i < boundary.size(); i++) { - auto c = boundary[i]; - if (!std::isalnum(c) && c != '-' && c != '_') { - valid = false; - break; - } - } - return valid; -} - -template -inline std::string -serialize_multipart_formdata_item_begin(const T &item, - const std::string &boundary) { - std::string body = "--" + boundary + "\r\n"; - body += "Content-Disposition: form-data; name=\"" + item.name + "\""; - if (!item.filename.empty()) { - body += "; filename=\"" + item.filename + "\""; - } - body += "\r\n"; - if (!item.content_type.empty()) { - body += "Content-Type: " + item.content_type + "\r\n"; - } - body += "\r\n"; - - return body; -} - -inline std::string serialize_multipart_formdata_item_end() { return "\r\n"; } - -inline std::string -serialize_multipart_formdata_finish(const std::string &boundary) { - return "--" + boundary + "--\r\n"; -} - -inline std::string -serialize_multipart_formdata_get_content_type(const std::string &boundary) { - return "multipart/form-data; boundary=" + boundary; -} - -inline std::string -serialize_multipart_formdata(const MultipartFormDataItems &items, - const std::string &boundary, bool finish = true) { - std::string body; - - for (const auto &item : items) { - body += serialize_multipart_formdata_item_begin(item, boundary); - body += item.content + serialize_multipart_formdata_item_end(); - } - - if (finish) body += serialize_multipart_formdata_finish(boundary); - - return body; -} - -inline std::pair -get_range_offset_and_length(const Request &req, size_t content_length, - size_t index) { - auto r = req.ranges[index]; - - if (r.first == -1 && r.second == -1) { - return std::make_pair(0, content_length); - } - - auto slen = static_cast(content_length); - - if (r.first == -1) { - r.first = (std::max)(static_cast(0), slen - r.second); - r.second = slen - 1; - } - - if (r.second == -1) { r.second = slen - 1; } - return std::make_pair(r.first, static_cast(r.second - r.first) + 1); -} - -inline std::string make_content_range_header_field(size_t offset, size_t length, - size_t content_length) { - std::string field = "bytes "; - field += std::to_string(offset); - field += "-"; - field += std::to_string(offset + length - 1); - field += "/"; - field += std::to_string(content_length); - return field; -} - -template -bool process_multipart_ranges_data(const Request &req, Response &res, - const std::string &boundary, - const std::string &content_type, - SToken stoken, CToken ctoken, - Content content) { - for (size_t i = 0; i < req.ranges.size(); i++) { - ctoken("--"); - stoken(boundary); - ctoken("\r\n"); - if (!content_type.empty()) { - ctoken("Content-Type: "); - stoken(content_type); - ctoken("\r\n"); - } - - auto offsets = get_range_offset_and_length(req, res.body.size(), i); - auto offset = offsets.first; - auto length = offsets.second; - - ctoken("Content-Range: "); - stoken(make_content_range_header_field(offset, length, res.body.size())); - ctoken("\r\n"); - ctoken("\r\n"); - if (!content(offset, length)) { return false; } - ctoken("\r\n"); - } - - ctoken("--"); - stoken(boundary); - 
ctoken("--\r\n"); - - return true; -} - -inline bool make_multipart_ranges_data(const Request &req, Response &res, - const std::string &boundary, - const std::string &content_type, - std::string &data) { - return process_multipart_ranges_data( - req, res, boundary, content_type, - [&](const std::string &token) { data += token; }, - [&](const std::string &token) { data += token; }, - [&](size_t offset, size_t length) { - if (offset < res.body.size()) { - data += res.body.substr(offset, length); - return true; - } - return false; - }); -} - -inline size_t -get_multipart_ranges_data_length(const Request &req, Response &res, - const std::string &boundary, - const std::string &content_type) { - size_t data_length = 0; - - process_multipart_ranges_data( - req, res, boundary, content_type, - [&](const std::string &token) { data_length += token.size(); }, - [&](const std::string &token) { data_length += token.size(); }, - [&](size_t /*offset*/, size_t length) { - data_length += length; - return true; - }); - - return data_length; -} - -template -inline bool write_multipart_ranges_data(Stream &strm, const Request &req, - Response &res, - const std::string &boundary, - const std::string &content_type, - const T &is_shutting_down) { - return process_multipart_ranges_data( - req, res, boundary, content_type, - [&](const std::string &token) { strm.write(token); }, - [&](const std::string &token) { strm.write(token); }, - [&](size_t offset, size_t length) { - return write_content(strm, res.content_provider_, offset, length, - is_shutting_down); - }); -} - -inline std::pair -get_range_offset_and_length(const Request &req, const Response &res, - size_t index) { - auto r = req.ranges[index]; - - if (r.second == -1) { - r.second = static_cast(res.content_length_) - 1; - } - - return std::make_pair(r.first, r.second - r.first + 1); -} - -inline bool expect_content(const Request &req) { - if (req.method == "POST" || req.method == "PUT" || req.method == "PATCH" || - req.method == "PRI" || req.method == "DELETE") { - return true; - } - // TODO: check if Content-Length is set - return false; -} - -inline bool has_crlf(const std::string &s) { - auto p = s.c_str(); - while (*p) { - if (*p == '\r' || *p == '\n') { return true; } - p++; - } - return false; -} - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT -inline std::string message_digest(const std::string &s, const EVP_MD *algo) { - auto context = std::unique_ptr( - EVP_MD_CTX_new(), EVP_MD_CTX_free); - - unsigned int hash_length = 0; - unsigned char hash[EVP_MAX_MD_SIZE]; - - EVP_DigestInit_ex(context.get(), algo, nullptr); - EVP_DigestUpdate(context.get(), s.c_str(), s.size()); - EVP_DigestFinal_ex(context.get(), hash, &hash_length); - - std::stringstream ss; - for (auto i = 0u; i < hash_length; ++i) { - ss << std::hex << std::setw(2) << std::setfill('0') - << (unsigned int)hash[i]; - } - - return ss.str(); -} - -inline std::string MD5(const std::string &s) { - return message_digest(s, EVP_md5()); -} - -inline std::string SHA_256(const std::string &s) { - return message_digest(s, EVP_sha256()); -} - -inline std::string SHA_512(const std::string &s) { - return message_digest(s, EVP_sha512()); -} -#endif - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT -#ifdef _WIN32 -// NOTE: This code came up with the following stackoverflow post: -// https://stackoverflow.com/questions/9507184/can-openssl-on-windows-use-the-system-certificate-store -inline bool load_system_certs_on_windows(X509_STORE *store) { - auto hStore = CertOpenSystemStoreW((HCRYPTPROV_LEGACY)NULL, L"ROOT"); - if (!hStore) { 
return false; } - - auto result = false; - PCCERT_CONTEXT pContext = NULL; - while ((pContext = CertEnumCertificatesInStore(hStore, pContext)) != - nullptr) { - auto encoded_cert = - static_cast(pContext->pbCertEncoded); - - auto x509 = d2i_X509(NULL, &encoded_cert, pContext->cbCertEncoded); - if (x509) { - X509_STORE_add_cert(store, x509); - X509_free(x509); - result = true; - } - } - - CertFreeCertificateContext(pContext); - CertCloseStore(hStore, 0); - - return result; -} -#elif defined(CPPHTTPLIB_USE_CERTS_FROM_MACOSX_KEYCHAIN) && defined(__APPLE__) -#if TARGET_OS_OSX -template -using CFObjectPtr = - std::unique_ptr::type, void (*)(CFTypeRef)>; - -inline void cf_object_ptr_deleter(CFTypeRef obj) { - if (obj) { CFRelease(obj); } -} - -inline bool retrieve_certs_from_keychain(CFObjectPtr &certs) { - CFStringRef keys[] = {kSecClass, kSecMatchLimit, kSecReturnRef}; - CFTypeRef values[] = {kSecClassCertificate, kSecMatchLimitAll, - kCFBooleanTrue}; - - CFObjectPtr query( - CFDictionaryCreate(nullptr, reinterpret_cast(keys), values, - sizeof(keys) / sizeof(keys[0]), - &kCFTypeDictionaryKeyCallBacks, - &kCFTypeDictionaryValueCallBacks), - cf_object_ptr_deleter); - - if (!query) { return false; } - - CFTypeRef security_items = nullptr; - if (SecItemCopyMatching(query.get(), &security_items) != errSecSuccess || - CFArrayGetTypeID() != CFGetTypeID(security_items)) { - return false; - } - - certs.reset(reinterpret_cast(security_items)); - return true; -} - -inline bool retrieve_root_certs_from_keychain(CFObjectPtr &certs) { - CFArrayRef root_security_items = nullptr; - if (SecTrustCopyAnchorCertificates(&root_security_items) != errSecSuccess) { - return false; - } - - certs.reset(root_security_items); - return true; -} - -inline bool add_certs_to_x509_store(CFArrayRef certs, X509_STORE *store) { - auto result = false; - for (int i = 0; i < CFArrayGetCount(certs); ++i) { - const auto cert = reinterpret_cast( - CFArrayGetValueAtIndex(certs, i)); - - if (SecCertificateGetTypeID() != CFGetTypeID(cert)) { continue; } - - CFDataRef cert_data = nullptr; - if (SecItemExport(cert, kSecFormatX509Cert, 0, nullptr, &cert_data) != - errSecSuccess) { - continue; - } - - CFObjectPtr cert_data_ptr(cert_data, cf_object_ptr_deleter); - - auto encoded_cert = static_cast( - CFDataGetBytePtr(cert_data_ptr.get())); - - auto x509 = - d2i_X509(NULL, &encoded_cert, CFDataGetLength(cert_data_ptr.get())); - - if (x509) { - X509_STORE_add_cert(store, x509); - X509_free(x509); - result = true; - } - } - - return result; -} - -inline bool load_system_certs_on_macos(X509_STORE *store) { - auto result = false; - CFObjectPtr certs(nullptr, cf_object_ptr_deleter); - if (retrieve_certs_from_keychain(certs) && certs) { - result = add_certs_to_x509_store(certs.get(), store); - } - - if (retrieve_root_certs_from_keychain(certs) && certs) { - result = add_certs_to_x509_store(certs.get(), store) || result; - } - - return result; -} -#endif // TARGET_OS_OSX -#endif // _WIN32 -#endif // CPPHTTPLIB_OPENSSL_SUPPORT - -#ifdef _WIN32 -class WSInit { -public: - WSInit() { - WSADATA wsaData; - if (WSAStartup(0x0002, &wsaData) == 0) is_valid_ = true; - } - - ~WSInit() { - if (is_valid_) WSACleanup(); - } - - bool is_valid_ = false; -}; - -static WSInit wsinit_; -#endif - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT -inline std::pair make_digest_authentication_header( - const Request &req, const std::map &auth, - size_t cnonce_count, const std::string &cnonce, const std::string &username, - const std::string &password, bool is_proxy = false) { - 
std::string nc; - { - std::stringstream ss; - ss << std::setfill('0') << std::setw(8) << std::hex << cnonce_count; - nc = ss.str(); - } - - std::string qop; - if (auth.find("qop") != auth.end()) { - qop = auth.at("qop"); - if (qop.find("auth-int") != std::string::npos) { - qop = "auth-int"; - } else if (qop.find("auth") != std::string::npos) { - qop = "auth"; - } else { - qop.clear(); - } - } - - std::string algo = "MD5"; - if (auth.find("algorithm") != auth.end()) { algo = auth.at("algorithm"); } - - std::string response; - { - auto H = algo == "SHA-256" ? detail::SHA_256 - : algo == "SHA-512" ? detail::SHA_512 - : detail::MD5; - - auto A1 = username + ":" + auth.at("realm") + ":" + password; - - auto A2 = req.method + ":" + req.path; - if (qop == "auth-int") { A2 += ":" + H(req.body); } - - if (qop.empty()) { - response = H(H(A1) + ":" + auth.at("nonce") + ":" + H(A2)); - } else { - response = H(H(A1) + ":" + auth.at("nonce") + ":" + nc + ":" + cnonce + - ":" + qop + ":" + H(A2)); - } - } - - auto opaque = (auth.find("opaque") != auth.end()) ? auth.at("opaque") : ""; - - auto field = "Digest username=\"" + username + "\", realm=\"" + - auth.at("realm") + "\", nonce=\"" + auth.at("nonce") + - "\", uri=\"" + req.path + "\", algorithm=" + algo + - (qop.empty() ? ", response=\"" - : ", qop=" + qop + ", nc=" + nc + ", cnonce=\"" + - cnonce + "\", response=\"") + - response + "\"" + - (opaque.empty() ? "" : ", opaque=\"" + opaque + "\""); - - auto key = is_proxy ? "Proxy-Authorization" : "Authorization"; - return std::make_pair(key, field); -} -#endif - -inline bool parse_www_authenticate(const Response &res, - std::map &auth, - bool is_proxy) { - auto auth_key = is_proxy ? "Proxy-Authenticate" : "WWW-Authenticate"; - if (res.has_header(auth_key)) { - static auto re = std::regex(R"~((?:(?:,\s*)?(.+?)=(?:"(.*?)"|([^,]*))))~"); - auto s = res.get_header_value(auth_key); - auto pos = s.find(' '); - if (pos != std::string::npos) { - auto type = s.substr(0, pos); - if (type == "Basic") { - return false; - } else if (type == "Digest") { - s = s.substr(pos + 1); - auto beg = std::sregex_iterator(s.begin(), s.end(), re); - for (auto i = beg; i != std::sregex_iterator(); ++i) { - auto m = *i; - auto key = s.substr(static_cast(m.position(1)), - static_cast(m.length(1))); - auto val = m.length(2) > 0 - ? 
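// --- [editor's note] The computation above is RFC 7616 Digest access
// authentication. With qop="auth" and H = MD5, using illustrative values:
//   A1 = "user:realm:pass"     ->  HA1 = H(A1)
//   A2 = "GET:/index.html"     ->  HA2 = H(A2)
//   response = H( HA1 ":" nonce ":" nc ":" cnonce ":auth:" HA2 )
// where nc is the zero-padded 8-digit hex request counter ("00000001").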
s.substr(static_cast(m.position(2)), - static_cast(m.length(2))) - : s.substr(static_cast(m.position(3)), - static_cast(m.length(3))); - auth[key] = val; - } - return true; - } - } - } - return false; -} - -// https://stackoverflow.com/questions/440133/how-do-i-create-a-random-alpha-numeric-string-in-c/440240#answer-440240 -inline std::string random_string(size_t length) { - auto randchar = []() -> char { - const char charset[] = "0123456789" - "ABCDEFGHIJKLMNOPQRSTUVWXYZ" - "abcdefghijklmnopqrstuvwxyz"; - const size_t max_index = (sizeof(charset) - 1); - return charset[static_cast(std::rand()) % max_index]; - }; - std::string str(length, 0); - std::generate_n(str.begin(), length, randchar); - return str; -} - -class ContentProviderAdapter { -public: - explicit ContentProviderAdapter( - ContentProviderWithoutLength &&content_provider) - : content_provider_(content_provider) {} - - bool operator()(size_t offset, size_t, DataSink &sink) { - return content_provider_(offset, sink); - } - -private: - ContentProviderWithoutLength content_provider_; -}; - -} // namespace detail - -inline std::string hosted_at(const std::string &hostname) { - std::vector addrs; - hosted_at(hostname, addrs); - if (addrs.empty()) { return std::string(); } - return addrs[0]; -} - -inline void hosted_at(const std::string &hostname, - std::vector &addrs) { - struct addrinfo hints; - struct addrinfo *result; - - memset(&hints, 0, sizeof(struct addrinfo)); - hints.ai_family = AF_UNSPEC; - hints.ai_socktype = SOCK_STREAM; - hints.ai_protocol = 0; - - if (getaddrinfo(hostname.c_str(), nullptr, &hints, &result)) { -#if defined __linux__ && !defined __ANDROID__ - res_init(); -#endif - return; - } - - for (auto rp = result; rp; rp = rp->ai_next) { - const auto &addr = - *reinterpret_cast(rp->ai_addr); - std::string ip; - int dummy = -1; - if (detail::get_ip_and_port(addr, sizeof(struct sockaddr_storage), ip, - dummy)) { - addrs.push_back(ip); - } - } - - freeaddrinfo(result); -} - -inline std::string append_query_params(const std::string &path, - const Params ¶ms) { - std::string path_with_query = path; - const static std::regex re("[^?]+\\?.*"); - auto delm = std::regex_match(path, re) ? '&' : '?'; - path_with_query += delm + detail::params_to_query_str(params); - return path_with_query; -} - -// Header utilities -inline std::pair make_range_header(Ranges ranges) { - std::string field = "bytes="; - auto i = 0; - for (auto r : ranges) { - if (i != 0) { field += ", "; } - if (r.first != -1) { field += std::to_string(r.first); } - field += '-'; - if (r.second != -1) { field += std::to_string(r.second); } - i++; - } - return std::make_pair("Range", std::move(field)); -} - -inline std::pair -make_basic_authentication_header(const std::string &username, - const std::string &password, bool is_proxy) { - auto field = "Basic " + detail::base64_encode(username + ":" + password); - auto key = is_proxy ? "Proxy-Authorization" : "Authorization"; - return std::make_pair(key, std::move(field)); -} - -inline std::pair -make_bearer_token_authentication_header(const std::string &token, - bool is_proxy = false) { - auto field = "Bearer " + token; - auto key = is_proxy ? 
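// --- [editor's sketch] random_string() above draws with std::rand() %
// max_index, which is slightly biased toward low characters and not
// thread-safe. An equivalent under the same contract, using <random>;
// random_string_unbiased is a hypothetical name.
#include <algorithm>
#include <random>
#include <string>

static std::string random_string_unbiased(size_t length) {
  static const char charset[] = "0123456789"
                                "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                                "abcdefghijklmnopqrstuvwxyz";
  thread_local std::mt19937 rng{std::random_device{}()};
  std::uniform_int_distribution<size_t> pick(0, sizeof(charset) - 2); // skip NUL
  std::string str(length, 0);
  std::generate_n(str.begin(), length, [&] { return charset[pick(rng)]; });
  return str;
}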
"Proxy-Authorization" : "Authorization"; - return std::make_pair(key, std::move(field)); -} - -// Request implementation -inline bool Request::has_header(const std::string &key) const { - return detail::has_header(headers, key); -} - -inline std::string Request::get_header_value(const std::string &key, - size_t id) const { - return detail::get_header_value(headers, key, id, ""); -} - -inline size_t Request::get_header_value_count(const std::string &key) const { - auto r = headers.equal_range(key); - return static_cast(std::distance(r.first, r.second)); -} - -inline void Request::set_header(const std::string &key, - const std::string &val) { - if (!detail::has_crlf(key) && !detail::has_crlf(val)) { - headers.emplace(key, val); - } -} - -inline bool Request::has_param(const std::string &key) const { - return params.find(key) != params.end(); -} - -inline std::string Request::get_param_value(const std::string &key, - size_t id) const { - auto rng = params.equal_range(key); - auto it = rng.first; - std::advance(it, static_cast(id)); - if (it != rng.second) { return it->second; } - return std::string(); -} - -inline size_t Request::get_param_value_count(const std::string &key) const { - auto r = params.equal_range(key); - return static_cast(std::distance(r.first, r.second)); -} - -inline bool Request::is_multipart_form_data() const { - const auto &content_type = get_header_value("Content-Type"); - return !content_type.rfind("multipart/form-data", 0); -} - -inline bool Request::has_file(const std::string &key) const { - return files.find(key) != files.end(); -} - -inline MultipartFormData Request::get_file_value(const std::string &key) const { - auto it = files.find(key); - if (it != files.end()) { return it->second; } - return MultipartFormData(); -} - -inline std::vector -Request::get_file_values(const std::string &key) const { - std::vector values; - auto rng = files.equal_range(key); - for (auto it = rng.first; it != rng.second; it++) { - values.push_back(it->second); - } - return values; -} - -// Response implementation -inline bool Response::has_header(const std::string &key) const { - return headers.find(key) != headers.end(); -} - -inline std::string Response::get_header_value(const std::string &key, - size_t id) const { - return detail::get_header_value(headers, key, id, ""); -} - -inline size_t Response::get_header_value_count(const std::string &key) const { - auto r = headers.equal_range(key); - return static_cast(std::distance(r.first, r.second)); -} - -inline void Response::set_header(const std::string &key, - const std::string &val) { - if (!detail::has_crlf(key) && !detail::has_crlf(val)) { - headers.emplace(key, val); - } -} - -inline void Response::set_redirect(const std::string &url, int stat) { - if (!detail::has_crlf(url)) { - set_header("Location", url); - if (300 <= stat && stat < 400) { - this->status = stat; - } else { - this->status = 302; - } - } -} - -inline void Response::set_content(const char *s, size_t n, - const std::string &content_type) { - body.assign(s, n); - - auto rng = headers.equal_range("Content-Type"); - headers.erase(rng.first, rng.second); - set_header("Content-Type", content_type); -} - -inline void Response::set_content(const std::string &s, - const std::string &content_type) { - set_content(s.data(), s.size(), content_type); -} - -inline void Response::set_content_provider( - size_t in_length, const std::string &content_type, ContentProvider provider, - ContentProviderResourceReleaser resource_releaser) { - set_header("Content-Type", content_type); 
- content_length_ = in_length; - if (in_length > 0) { content_provider_ = std::move(provider); } - content_provider_resource_releaser_ = resource_releaser; - is_chunked_content_provider_ = false; -} - -inline void Response::set_content_provider( - const std::string &content_type, ContentProviderWithoutLength provider, - ContentProviderResourceReleaser resource_releaser) { - set_header("Content-Type", content_type); - content_length_ = 0; - content_provider_ = detail::ContentProviderAdapter(std::move(provider)); - content_provider_resource_releaser_ = resource_releaser; - is_chunked_content_provider_ = false; -} - -inline void Response::set_chunked_content_provider( - const std::string &content_type, ContentProviderWithoutLength provider, - ContentProviderResourceReleaser resource_releaser) { - set_header("Content-Type", content_type); - content_length_ = 0; - content_provider_ = detail::ContentProviderAdapter(std::move(provider)); - content_provider_resource_releaser_ = resource_releaser; - is_chunked_content_provider_ = true; -} - -// Result implementation -inline bool Result::has_request_header(const std::string &key) const { - return request_headers_.find(key) != request_headers_.end(); -} - -inline std::string Result::get_request_header_value(const std::string &key, - size_t id) const { - return detail::get_header_value(request_headers_, key, id, ""); -} - -inline size_t -Result::get_request_header_value_count(const std::string &key) const { - auto r = request_headers_.equal_range(key); - return static_cast(std::distance(r.first, r.second)); -} - -// Stream implementation -inline ssize_t Stream::write(const char *ptr) { - return write(ptr, strlen(ptr)); -} - -inline ssize_t Stream::write(const std::string &s) { - return write(s.data(), s.size()); -} - -namespace detail { - -// Socket stream implementation -inline SocketStream::SocketStream(socket_t sock, time_t read_timeout_sec, - time_t read_timeout_usec, - time_t write_timeout_sec, - time_t write_timeout_usec) - : sock_(sock), read_timeout_sec_(read_timeout_sec), - read_timeout_usec_(read_timeout_usec), - write_timeout_sec_(write_timeout_sec), - write_timeout_usec_(write_timeout_usec), read_buff_(read_buff_size_, 0) {} - -inline SocketStream::~SocketStream() {} - -inline bool SocketStream::is_readable() const { - return select_read(sock_, read_timeout_sec_, read_timeout_usec_) > 0; -} - -inline bool SocketStream::is_writable() const { - return select_write(sock_, write_timeout_sec_, write_timeout_usec_) > 0 && - is_socket_alive(sock_); -} - -inline ssize_t SocketStream::read(char *ptr, size_t size) { -#ifdef _WIN32 - size = - (std::min)(size, static_cast((std::numeric_limits::max)())); -#else - size = (std::min)(size, - static_cast((std::numeric_limits::max)())); -#endif - - if (read_buff_off_ < read_buff_content_size_) { - auto remaining_size = read_buff_content_size_ - read_buff_off_; - if (size <= remaining_size) { - memcpy(ptr, read_buff_.data() + read_buff_off_, size); - read_buff_off_ += size; - return static_cast(size); - } else { - memcpy(ptr, read_buff_.data() + read_buff_off_, remaining_size); - read_buff_off_ += remaining_size; - return static_cast(remaining_size); - } - } - - if (!is_readable()) { return -1; } - - read_buff_off_ = 0; - read_buff_content_size_ = 0; - - if (size < read_buff_size_) { - auto n = read_socket(sock_, read_buff_.data(), read_buff_size_, - CPPHTTPLIB_RECV_FLAGS); - if (n <= 0) { - return n; - } else if (n <= static_cast(size)) { - memcpy(ptr, read_buff_.data(), static_cast(n)); - return n; - } 
else { - memcpy(ptr, read_buff_.data(), size); - read_buff_off_ = size; - read_buff_content_size_ = static_cast(n); - return static_cast(size); - } - } else { - return read_socket(sock_, ptr, size, CPPHTTPLIB_RECV_FLAGS); - } -} - -inline ssize_t SocketStream::write(const char *ptr, size_t size) { - if (!is_writable()) { return -1; } - -#if defined(_WIN32) && !defined(_WIN64) - size = - (std::min)(size, static_cast((std::numeric_limits::max)())); -#endif - - return send_socket(sock_, ptr, size, CPPHTTPLIB_SEND_FLAGS); -} - -inline void SocketStream::get_remote_ip_and_port(std::string &ip, - int &port) const { - return detail::get_remote_ip_and_port(sock_, ip, port); -} - -inline void SocketStream::get_local_ip_and_port(std::string &ip, - int &port) const { - return detail::get_local_ip_and_port(sock_, ip, port); -} - -inline socket_t SocketStream::socket() const { return sock_; } - -// Buffer stream implementation -inline bool BufferStream::is_readable() const { return true; } - -inline bool BufferStream::is_writable() const { return true; } - -inline ssize_t BufferStream::read(char *ptr, size_t size) { -#if defined(_MSC_VER) && _MSC_VER < 1910 - auto len_read = buffer._Copy_s(ptr, size, size, position); -#else - auto len_read = buffer.copy(ptr, size, position); -#endif - position += static_cast(len_read); - return static_cast(len_read); -} - -inline ssize_t BufferStream::write(const char *ptr, size_t size) { - buffer.append(ptr, size); - return static_cast(size); -} - -inline void BufferStream::get_remote_ip_and_port(std::string & /*ip*/, - int & /*port*/) const {} - -inline void BufferStream::get_local_ip_and_port(std::string & /*ip*/, - int & /*port*/) const {} - -inline socket_t BufferStream::socket() const { return 0; } - -inline const std::string &BufferStream::get_buffer() const { return buffer; } - -} // namespace detail - -// HTTP server implementation -inline Server::Server() - : new_task_queue( - [] { return new ThreadPool(CPPHTTPLIB_THREAD_POOL_COUNT); }) { -#ifndef _WIN32 - signal(SIGPIPE, SIG_IGN); -#endif -} - -inline Server::~Server() {} - -inline Server &Server::Get(const std::string &pattern, Handler handler) { - get_handlers_.push_back( - std::make_pair(std::regex(pattern), std::move(handler))); - return *this; -} - -inline Server &Server::Post(const std::string &pattern, Handler handler) { - post_handlers_.push_back( - std::make_pair(std::regex(pattern), std::move(handler))); - return *this; -} - -inline Server &Server::Post(const std::string &pattern, - HandlerWithContentReader handler) { - post_handlers_for_content_reader_.push_back( - std::make_pair(std::regex(pattern), std::move(handler))); - return *this; -} - -inline Server &Server::Put(const std::string &pattern, Handler handler) { - put_handlers_.push_back( - std::make_pair(std::regex(pattern), std::move(handler))); - return *this; -} - -inline Server &Server::Put(const std::string &pattern, - HandlerWithContentReader handler) { - put_handlers_for_content_reader_.push_back( - std::make_pair(std::regex(pattern), std::move(handler))); - return *this; -} - -inline Server &Server::Patch(const std::string &pattern, Handler handler) { - patch_handlers_.push_back( - std::make_pair(std::regex(pattern), std::move(handler))); - return *this; -} - -inline Server &Server::Patch(const std::string &pattern, - HandlerWithContentReader handler) { - patch_handlers_for_content_reader_.push_back( - std::make_pair(std::regex(pattern), std::move(handler))); - return *this; -} - -inline Server &Server::Delete(const std::string 
&pattern, Handler handler) { - delete_handlers_.push_back( - std::make_pair(std::regex(pattern), std::move(handler))); - return *this; -} - -inline Server &Server::Delete(const std::string &pattern, - HandlerWithContentReader handler) { - delete_handlers_for_content_reader_.push_back( - std::make_pair(std::regex(pattern), std::move(handler))); - return *this; -} - -inline Server &Server::Options(const std::string &pattern, Handler handler) { - options_handlers_.push_back( - std::make_pair(std::regex(pattern), std::move(handler))); - return *this; -} - -inline bool Server::set_base_dir(const std::string &dir, - const std::string &mount_point) { - return set_mount_point(mount_point, dir); -} - -inline bool Server::set_mount_point(const std::string &mount_point, - const std::string &dir, Headers headers) { - if (detail::is_dir(dir)) { - std::string mnt = !mount_point.empty() ? mount_point : "/"; - if (!mnt.empty() && mnt[0] == '/') { - base_dirs_.push_back({mnt, dir, std::move(headers)}); - return true; - } - } - return false; -} - -inline bool Server::remove_mount_point(const std::string &mount_point) { - for (auto it = base_dirs_.begin(); it != base_dirs_.end(); ++it) { - if (it->mount_point == mount_point) { - base_dirs_.erase(it); - return true; - } - } - return false; -} - -inline Server & -Server::set_file_extension_and_mimetype_mapping(const std::string &ext, - const std::string &mime) { - file_extension_and_mimetype_map_[ext] = mime; - return *this; -} - -inline Server &Server::set_file_request_handler(Handler handler) { - file_request_handler_ = std::move(handler); - return *this; -} - -inline Server &Server::set_error_handler(HandlerWithResponse handler) { - error_handler_ = std::move(handler); - return *this; -} - -inline Server &Server::set_error_handler(Handler handler) { - error_handler_ = [handler](const Request &req, Response &res) { - handler(req, res); - return HandlerResponse::Handled; - }; - return *this; -} - -inline Server &Server::set_exception_handler(ExceptionHandler handler) { - exception_handler_ = std::move(handler); - return *this; -} - -inline Server &Server::set_pre_routing_handler(HandlerWithResponse handler) { - pre_routing_handler_ = std::move(handler); - return *this; -} - -inline Server &Server::set_post_routing_handler(Handler handler) { - post_routing_handler_ = std::move(handler); - return *this; -} - -inline Server &Server::set_logger(Logger logger) { - logger_ = std::move(logger); - return *this; -} - -inline Server & -Server::set_expect_100_continue_handler(Expect100ContinueHandler handler) { - expect_100_continue_handler_ = std::move(handler); - - return *this; -} - -inline Server &Server::set_address_family(int family) { - address_family_ = family; - return *this; -} - -inline Server &Server::set_tcp_nodelay(bool on) { - tcp_nodelay_ = on; - return *this; -} - -inline Server &Server::set_socket_options(SocketOptions socket_options) { - socket_options_ = std::move(socket_options); - return *this; -} - -inline Server &Server::set_default_headers(Headers headers) { - default_headers_ = std::move(headers); - return *this; -} - -inline Server &Server::set_keep_alive_max_count(size_t count) { - keep_alive_max_count_ = count; - return *this; -} - -inline Server &Server::set_keep_alive_timeout(time_t sec) { - keep_alive_timeout_sec_ = sec; - return *this; -} - -inline Server &Server::set_read_timeout(time_t sec, time_t usec) { - read_timeout_sec_ = sec; - read_timeout_usec_ = usec; - return *this; -} - -inline Server &Server::set_write_timeout(time_t sec, 
time_t usec) { - write_timeout_sec_ = sec; - write_timeout_usec_ = usec; - return *this; -} - -inline Server &Server::set_idle_interval(time_t sec, time_t usec) { - idle_interval_sec_ = sec; - idle_interval_usec_ = usec; - return *this; -} - -inline Server &Server::set_payload_max_length(size_t length) { - payload_max_length_ = length; - return *this; -} - -inline bool Server::bind_to_port(const std::string &host, int port, - int socket_flags) { - if (bind_internal(host, port, socket_flags) < 0) return false; - return true; -} -inline int Server::bind_to_any_port(const std::string &host, int socket_flags) { - return bind_internal(host, 0, socket_flags); -} - -inline bool Server::listen_after_bind() { - auto se = detail::scope_exit([&]() { done_ = true; }); - return listen_internal(); -} - -inline bool Server::listen(const std::string &host, int port, - int socket_flags) { - auto se = detail::scope_exit([&]() { done_ = true; }); - return bind_to_port(host, port, socket_flags) && listen_internal(); -} - -inline bool Server::is_running() const { return is_running_; } - -inline void Server::wait_until_ready() const { - while (!is_running() && !done_) { - std::this_thread::sleep_for(std::chrono::milliseconds{1}); - } -} - -inline void Server::stop() { - if (is_running_) { - assert(svr_sock_ != INVALID_SOCKET); - std::atomic sock(svr_sock_.exchange(INVALID_SOCKET)); - detail::shutdown_socket(sock); - detail::close_socket(sock); - } -} - -inline bool Server::parse_request_line(const char *s, Request &req) { - auto len = strlen(s); - if (len < 2 || s[len - 2] != '\r' || s[len - 1] != '\n') { return false; } - len -= 2; - - { - size_t count = 0; - - detail::split(s, s + len, ' ', [&](const char *b, const char *e) { - switch (count) { - case 0: req.method = std::string(b, e); break; - case 1: req.target = std::string(b, e); break; - case 2: req.version = std::string(b, e); break; - default: break; - } - count++; - }); - - if (count != 3) { return false; } - } - - static const std::set methods{ - "GET", "HEAD", "POST", "PUT", "DELETE", - "CONNECT", "OPTIONS", "TRACE", "PATCH", "PRI"}; - - if (methods.find(req.method) == methods.end()) { return false; } - - if (req.version != "HTTP/1.1" && req.version != "HTTP/1.0") { return false; } - - { - // Skip URL fragment - for (size_t i = 0; i < req.target.size(); i++) { - if (req.target[i] == '#') { - req.target.erase(i); - break; - } - } - - size_t count = 0; - - detail::split(req.target.data(), req.target.data() + req.target.size(), '?', - [&](const char *b, const char *e) { - switch (count) { - case 0: - req.path = detail::decode_url(std::string(b, e), false); - break; - case 1: { - if (e - b > 0) { - detail::parse_query_text(std::string(b, e), req.params); - } - break; - } - default: break; - } - count++; - }); - - if (count > 2) { return false; } - } - - return true; -} - -inline bool Server::write_response(Stream &strm, bool close_connection, - const Request &req, Response &res) { - return write_response_core(strm, close_connection, req, res, false); -} - -inline bool Server::write_response_with_content(Stream &strm, - bool close_connection, - const Request &req, - Response &res) { - return write_response_core(strm, close_connection, req, res, true); -} - -inline bool Server::write_response_core(Stream &strm, bool close_connection, - const Request &req, Response &res, - bool need_apply_ranges) { - assert(res.status != -1); - - if (400 <= res.status && error_handler_ && - error_handler_(req, res) == HandlerResponse::Handled) { - need_apply_ranges = true; 
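  // A user-supplied error handler (registered with Server::set_error_handler)
  // may replace the body of any response with status >= 400; when it reports
  // HandlerResponse::Handled, ranges and Content-Type are recomputed below.
  // A minimal registration sketch; "svr" is an illustrative assumption:
  //
  //   svr.set_error_handler([](const Request &, Response &res) {
  //     res.set_content("error " + std::to_string(res.status), "text/plain");
  //   });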
- } - - std::string content_type; - std::string boundary; - if (need_apply_ranges) { apply_ranges(req, res, content_type, boundary); } - - // Prepare additional headers - if (close_connection || req.get_header_value("Connection") == "close") { - res.set_header("Connection", "close"); - } else { - std::stringstream ss; - ss << "timeout=" << keep_alive_timeout_sec_ - << ", max=" << keep_alive_max_count_; - res.set_header("Keep-Alive", ss.str()); - } - - if (!res.has_header("Content-Type") && - (!res.body.empty() || res.content_length_ > 0 || res.content_provider_)) { - res.set_header("Content-Type", "text/plain"); - } - - if (!res.has_header("Content-Length") && res.body.empty() && - !res.content_length_ && !res.content_provider_) { - res.set_header("Content-Length", "0"); - } - - if (!res.has_header("Accept-Ranges") && req.method == "HEAD") { - res.set_header("Accept-Ranges", "bytes"); - } - - if (post_routing_handler_) { post_routing_handler_(req, res); } - - // Response line and headers - { - detail::BufferStream bstrm; - - if (!bstrm.write_format("HTTP/1.1 %d %s\r\n", res.status, - detail::status_message(res.status))) { - return false; - } - - if (!detail::write_headers(bstrm, res.headers)) { return false; } - - // Flush buffer - auto &data = bstrm.get_buffer(); - detail::write_data(strm, data.data(), data.size()); - } - - // Body - auto ret = true; - if (req.method != "HEAD") { - if (!res.body.empty()) { - if (!detail::write_data(strm, res.body.data(), res.body.size())) { - ret = false; - } - } else if (res.content_provider_) { - if (write_content_with_provider(strm, req, res, boundary, content_type)) { - res.content_provider_success_ = true; - } else { - res.content_provider_success_ = false; - ret = false; - } - } - } - - // Log - if (logger_) { logger_(req, res); } - - return ret; -} - -inline bool -Server::write_content_with_provider(Stream &strm, const Request &req, - Response &res, const std::string &boundary, - const std::string &content_type) { - auto is_shutting_down = [this]() { - return this->svr_sock_ == INVALID_SOCKET; - }; - - if (res.content_length_ > 0) { - if (req.ranges.empty()) { - return detail::write_content(strm, res.content_provider_, 0, - res.content_length_, is_shutting_down); - } else if (req.ranges.size() == 1) { - auto offsets = - detail::get_range_offset_and_length(req, res.content_length_, 0); - auto offset = offsets.first; - auto length = offsets.second; - return detail::write_content(strm, res.content_provider_, offset, length, - is_shutting_down); - } else { - return detail::write_multipart_ranges_data( - strm, req, res, boundary, content_type, is_shutting_down); - } - } else { - if (res.is_chunked_content_provider_) { - auto type = detail::encoding_type(req, res); - - std::unique_ptr compressor; - if (type == detail::EncodingType::Gzip) { -#ifdef CPPHTTPLIB_ZLIB_SUPPORT - compressor = detail::make_unique(); -#endif - } else if (type == detail::EncodingType::Brotli) { -#ifdef CPPHTTPLIB_BROTLI_SUPPORT - compressor = detail::make_unique(); -#endif - } else { - compressor = detail::make_unique(); - } - assert(compressor != nullptr); - - return detail::write_content_chunked(strm, res.content_provider_, - is_shutting_down, *compressor); - } else { - return detail::write_content_without_length(strm, res.content_provider_, - is_shutting_down); - } - } -} - -inline bool Server::read_content(Stream &strm, Request &req, Response &res) { - MultipartFormDataMap::iterator cur; - auto file_count = 0; - if (read_content_core( - strm, req, res, - // Regular - [&](const 
char *buf, size_t n) { - if (req.body.size() + n > req.body.max_size()) { return false; } - req.body.append(buf, n); - return true; - }, - // Multipart - [&](const MultipartFormData &file) { - if (file_count++ == CPPHTTPLIB_MULTIPART_FORM_DATA_FILE_MAX_COUNT) { - return false; - } - cur = req.files.emplace(file.name, file); - return true; - }, - [&](const char *buf, size_t n) { - auto &content = cur->second.content; - if (content.size() + n > content.max_size()) { return false; } - content.append(buf, n); - return true; - })) { - const auto &content_type = req.get_header_value("Content-Type"); - if (!content_type.find("application/x-www-form-urlencoded")) { - if (req.body.size() > CPPHTTPLIB_FORM_URL_ENCODED_PAYLOAD_MAX_LENGTH) { - res.status = 413; // NOTE: should be 414? - return false; - } - detail::parse_query_text(req.body, req.params); - } - return true; - } - return false; -} - -inline bool Server::read_content_with_content_receiver( - Stream &strm, Request &req, Response &res, ContentReceiver receiver, - MultipartContentHeader multipart_header, - ContentReceiver multipart_receiver) { - return read_content_core(strm, req, res, std::move(receiver), - std::move(multipart_header), - std::move(multipart_receiver)); -} - -inline bool Server::read_content_core(Stream &strm, Request &req, Response &res, - ContentReceiver receiver, - MultipartContentHeader multipart_header, - ContentReceiver multipart_receiver) { - detail::MultipartFormDataParser multipart_form_data_parser; - ContentReceiverWithProgress out; - - if (req.is_multipart_form_data()) { - const auto &content_type = req.get_header_value("Content-Type"); - std::string boundary; - if (!detail::parse_multipart_boundary(content_type, boundary)) { - res.status = 400; - return false; - } - - multipart_form_data_parser.set_boundary(std::move(boundary)); - out = [&](const char *buf, size_t n, uint64_t /*off*/, uint64_t /*len*/) { - /* For debug - size_t pos = 0; - while (pos < n) { - auto read_size = (std::min)(1, n - pos); - auto ret = multipart_form_data_parser.parse( - buf + pos, read_size, multipart_receiver, multipart_header); - if (!ret) { return false; } - pos += read_size; - } - return true; - */ - return multipart_form_data_parser.parse(buf, n, multipart_receiver, - multipart_header); - }; - } else { - out = [receiver](const char *buf, size_t n, uint64_t /*off*/, - uint64_t /*len*/) { return receiver(buf, n); }; - } - - if (req.method == "DELETE" && !req.has_header("Content-Length")) { - return true; - } - - if (!detail::read_content(strm, req, payload_max_length_, res.status, nullptr, - out, true)) { - return false; - } - - if (req.is_multipart_form_data()) { - if (!multipart_form_data_parser.is_valid()) { - res.status = 400; - return false; - } - } - - return true; -} - -inline bool Server::handle_file_request(const Request &req, Response &res, - bool head) { - for (const auto &entry : base_dirs_) { - // Prefix match - if (!req.path.compare(0, entry.mount_point.size(), entry.mount_point)) { - std::string sub_path = "/" + req.path.substr(entry.mount_point.size()); - if (detail::is_valid_path(sub_path)) { - auto path = entry.base_dir + sub_path; - if (path.back() == '/') { path += "index.html"; } - - if (detail::is_file(path)) { - detail::read_file(path, res.body); - auto type = - detail::find_content_type(path, file_extension_and_mimetype_map_); - if (type) { res.set_header("Content-Type", type); } - for (const auto &kv : entry.headers) { - res.set_header(kv.first.c_str(), kv.second); - } - res.status = req.has_header("Range") ? 
206 : 200; - if (!head && file_request_handler_) { - file_request_handler_(req, res); - } - return true; - } - } - } - } - return false; -} - -inline socket_t -Server::create_server_socket(const std::string &host, int port, - int socket_flags, - SocketOptions socket_options) const { - return detail::create_socket( - host, std::string(), port, address_family_, socket_flags, tcp_nodelay_, - std::move(socket_options), - [](socket_t sock, struct addrinfo &ai) -> bool { - if (::bind(sock, ai.ai_addr, static_cast(ai.ai_addrlen))) { - return false; - } - if (::listen(sock, CPPHTTPLIB_LISTEN_BACKLOG)) { return false; } - return true; - }); -} - -inline int Server::bind_internal(const std::string &host, int port, - int socket_flags) { - if (!is_valid()) { return -1; } - - svr_sock_ = create_server_socket(host, port, socket_flags, socket_options_); - if (svr_sock_ == INVALID_SOCKET) { return -1; } - - if (port == 0) { - struct sockaddr_storage addr; - socklen_t addr_len = sizeof(addr); - if (getsockname(svr_sock_, reinterpret_cast(&addr), - &addr_len) == -1) { - return -1; - } - if (addr.ss_family == AF_INET) { - return ntohs(reinterpret_cast(&addr)->sin_port); - } else if (addr.ss_family == AF_INET6) { - return ntohs(reinterpret_cast(&addr)->sin6_port); - } else { - return -1; - } - } else { - return port; - } -} - -inline bool Server::listen_internal() { - auto ret = true; - is_running_ = true; - auto se = detail::scope_exit([&]() { is_running_ = false; }); - - { - std::unique_ptr task_queue(new_task_queue()); - - while (svr_sock_ != INVALID_SOCKET) { -#ifndef _WIN32 - if (idle_interval_sec_ > 0 || idle_interval_usec_ > 0) { -#endif - auto val = detail::select_read(svr_sock_, idle_interval_sec_, - idle_interval_usec_); - if (val == 0) { // Timeout - task_queue->on_idle(); - continue; - } -#ifndef _WIN32 - } -#endif - socket_t sock = accept(svr_sock_, nullptr, nullptr); - - if (sock == INVALID_SOCKET) { - if (errno == EMFILE) { - // The per-process limit of open file descriptors has been reached. - // Try to accept new connections after a short sleep. - std::this_thread::sleep_for(std::chrono::milliseconds(1)); - continue; - } else if (errno == EINTR || errno == EAGAIN) { - continue; - } - if (svr_sock_ != INVALID_SOCKET) { - detail::close_socket(svr_sock_); - ret = false; - } else { - ; // The server socket was closed by user. 
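          // EMFILE and EINTR/EAGAIN above are transient, so the accept loop
          // retries; any other accept() failure while svr_sock_ is still
          // valid closes the listener, and listen_internal() then returns
          // false. Callers typically observe this as Server::listen()
          // returning false; an illustrative sketch:
          //
          //   httplib::Server svr;
          //   svr.Get("/hi", [](const httplib::Request &, httplib::Response &res) {
          //     res.set_content("Hello", "text/plain");
          //   });
          //   if (!svr.listen("0.0.0.0", 8080)) { /* bind or accept failed */ }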
- } - break; - } - - { -#ifdef _WIN32 - auto timeout = static_cast(read_timeout_sec_ * 1000 + - read_timeout_usec_ / 1000); - setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, (char *)&timeout, - sizeof(timeout)); -#else - timeval tv; - tv.tv_sec = static_cast(read_timeout_sec_); - tv.tv_usec = static_cast(read_timeout_usec_); - setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(tv)); -#endif - } - { - -#ifdef _WIN32 - auto timeout = static_cast(write_timeout_sec_ * 1000 + - write_timeout_usec_ / 1000); - setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO, (char *)&timeout, - sizeof(timeout)); -#else - timeval tv; - tv.tv_sec = static_cast(write_timeout_sec_); - tv.tv_usec = static_cast(write_timeout_usec_); - setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO, (char *)&tv, sizeof(tv)); -#endif - } - - task_queue->enqueue([this, sock]() { process_and_close_socket(sock); }); - } - - task_queue->shutdown(); - } - - return ret; -} - -inline bool Server::routing(Request &req, Response &res, Stream &strm) { - if (pre_routing_handler_ && - pre_routing_handler_(req, res) == HandlerResponse::Handled) { - return true; - } - - // File handler - bool is_head_request = req.method == "HEAD"; - if ((req.method == "GET" || is_head_request) && - handle_file_request(req, res, is_head_request)) { - return true; - } - - if (detail::expect_content(req)) { - // Content reader handler - { - ContentReader reader( - [&](ContentReceiver receiver) { - return read_content_with_content_receiver( - strm, req, res, std::move(receiver), nullptr, nullptr); - }, - [&](MultipartContentHeader header, ContentReceiver receiver) { - return read_content_with_content_receiver(strm, req, res, nullptr, - std::move(header), - std::move(receiver)); - }); - - if (req.method == "POST") { - if (dispatch_request_for_content_reader( - req, res, std::move(reader), - post_handlers_for_content_reader_)) { - return true; - } - } else if (req.method == "PUT") { - if (dispatch_request_for_content_reader( - req, res, std::move(reader), - put_handlers_for_content_reader_)) { - return true; - } - } else if (req.method == "PATCH") { - if (dispatch_request_for_content_reader( - req, res, std::move(reader), - patch_handlers_for_content_reader_)) { - return true; - } - } else if (req.method == "DELETE") { - if (dispatch_request_for_content_reader( - req, res, std::move(reader), - delete_handlers_for_content_reader_)) { - return true; - } - } - } - - // Read content into `req.body` - if (!read_content(strm, req, res)) { return false; } - } - - // Regular handler - if (req.method == "GET" || req.method == "HEAD") { - return dispatch_request(req, res, get_handlers_); - } else if (req.method == "POST") { - return dispatch_request(req, res, post_handlers_); - } else if (req.method == "PUT") { - return dispatch_request(req, res, put_handlers_); - } else if (req.method == "DELETE") { - return dispatch_request(req, res, delete_handlers_); - } else if (req.method == "OPTIONS") { - return dispatch_request(req, res, options_handlers_); - } else if (req.method == "PATCH") { - return dispatch_request(req, res, patch_handlers_); - } - - res.status = 400; - return false; -} - -inline bool Server::dispatch_request(Request &req, Response &res, - const Handlers &handlers) { - for (const auto &x : handlers) { - const auto &pattern = x.first; - const auto &handler = x.second; - - if (std::regex_match(req.path, req.matches, pattern)) { - handler(req, res); - return true; - } - } - return false; -} - -inline void Server::apply_ranges(const Request &req, Response &res, - std::string 
&content_type, - std::string &boundary) { - if (req.ranges.size() > 1) { - boundary = detail::make_multipart_data_boundary(); - - auto it = res.headers.find("Content-Type"); - if (it != res.headers.end()) { - content_type = it->second; - res.headers.erase(it); - } - - res.headers.emplace("Content-Type", - "multipart/byteranges; boundary=" + boundary); - } - - auto type = detail::encoding_type(req, res); - - if (res.body.empty()) { - if (res.content_length_ > 0) { - size_t length = 0; - if (req.ranges.empty()) { - length = res.content_length_; - } else if (req.ranges.size() == 1) { - auto offsets = - detail::get_range_offset_and_length(req, res.content_length_, 0); - auto offset = offsets.first; - length = offsets.second; - auto content_range = detail::make_content_range_header_field( - offset, length, res.content_length_); - res.set_header("Content-Range", content_range); - } else { - length = detail::get_multipart_ranges_data_length(req, res, boundary, - content_type); - } - res.set_header("Content-Length", std::to_string(length)); - } else { - if (res.content_provider_) { - if (res.is_chunked_content_provider_) { - res.set_header("Transfer-Encoding", "chunked"); - if (type == detail::EncodingType::Gzip) { - res.set_header("Content-Encoding", "gzip"); - } else if (type == detail::EncodingType::Brotli) { - res.set_header("Content-Encoding", "br"); - } - } - } - } - } else { - if (req.ranges.empty()) { - ; - } else if (req.ranges.size() == 1) { - auto offsets = - detail::get_range_offset_and_length(req, res.body.size(), 0); - auto offset = offsets.first; - auto length = offsets.second; - auto content_range = detail::make_content_range_header_field( - offset, length, res.body.size()); - res.set_header("Content-Range", content_range); - if (offset < res.body.size()) { - res.body = res.body.substr(offset, length); - } else { - res.body.clear(); - res.status = 416; - } - } else { - std::string data; - if (detail::make_multipart_ranges_data(req, res, boundary, content_type, - data)) { - res.body.swap(data); - } else { - res.body.clear(); - res.status = 416; - } - } - - if (type != detail::EncodingType::None) { - std::unique_ptr compressor; - std::string content_encoding; - - if (type == detail::EncodingType::Gzip) { -#ifdef CPPHTTPLIB_ZLIB_SUPPORT - compressor = detail::make_unique(); - content_encoding = "gzip"; -#endif - } else if (type == detail::EncodingType::Brotli) { -#ifdef CPPHTTPLIB_BROTLI_SUPPORT - compressor = detail::make_unique(); - content_encoding = "br"; -#endif - } - - if (compressor) { - std::string compressed; - if (compressor->compress(res.body.data(), res.body.size(), true, - [&](const char *data, size_t data_len) { - compressed.append(data, data_len); - return true; - })) { - res.body.swap(compressed); - res.set_header("Content-Encoding", content_encoding); - } - } - } - - auto length = std::to_string(res.body.size()); - res.set_header("Content-Length", length); - } -} - -inline bool Server::dispatch_request_for_content_reader( - Request &req, Response &res, ContentReader content_reader, - const HandlersForContentReader &handlers) { - for (const auto &x : handlers) { - const auto &pattern = x.first; - const auto &handler = x.second; - - if (std::regex_match(req.path, req.matches, pattern)) { - handler(req, res, content_reader); - return true; - } - } - return false; -} - -inline bool -Server::process_request(Stream &strm, bool close_connection, - bool &connection_closed, - const std::function &setup_request) { - std::array buf{}; - - detail::stream_line_reader 
line_reader(strm, buf.data(), buf.size());
-
-  // Connection has been closed on client
-  if (!line_reader.getline()) { return false; }
-
-  Request req;
-  Response res;
-
-  res.version = "HTTP/1.1";
-
-  for (const auto &header : default_headers_) {
-    if (res.headers.find(header.first) == res.headers.end()) {
-      res.headers.insert(header);
-    }
-  }
-
-#ifdef _WIN32
-  // TODO: Increase FD_SETSIZE statically (libzmq), dynamically (MySQL).
-#else
-#ifndef CPPHTTPLIB_USE_POLL
-  // Socket file descriptor exceeded FD_SETSIZE...
-  if (strm.socket() >= FD_SETSIZE) {
-    Headers dummy;
-    detail::read_headers(strm, dummy);
-    res.status = 500;
-    return write_response(strm, close_connection, req, res);
-  }
-#endif
-#endif
-
-  // Check if the request URI doesn't exceed the limit
-  if (line_reader.size() > CPPHTTPLIB_REQUEST_URI_MAX_LENGTH) {
-    Headers dummy;
-    detail::read_headers(strm, dummy);
-    res.status = 414;
-    return write_response(strm, close_connection, req, res);
-  }
-
-  // Request line and headers
-  if (!parse_request_line(line_reader.ptr(), req) ||
-      !detail::read_headers(strm, req.headers)) {
-    res.status = 400;
-    return write_response(strm, close_connection, req, res);
-  }
-
-  if (req.get_header_value("Connection") == "close") {
-    connection_closed = true;
-  }
-
-  if (req.version == "HTTP/1.0" &&
-      req.get_header_value("Connection") != "Keep-Alive") {
-    connection_closed = true;
-  }
-
-  strm.get_remote_ip_and_port(req.remote_addr, req.remote_port);
-  req.set_header("REMOTE_ADDR", req.remote_addr);
-  req.set_header("REMOTE_PORT", std::to_string(req.remote_port));
-
-  strm.get_local_ip_and_port(req.local_addr, req.local_port);
-  req.set_header("LOCAL_ADDR", req.local_addr);
-  req.set_header("LOCAL_PORT", std::to_string(req.local_port));
-
-  if (req.has_header("Range")) {
-    const auto &range_header_value = req.get_header_value("Range");
-    if (!detail::parse_range_header(range_header_value, req.ranges)) {
-      res.status = 416;
-      return write_response(strm, close_connection, req, res);
-    }
-  }
-
-  if (setup_request) { setup_request(req); }
-
-  if (req.get_header_value("Expect") == "100-continue") {
-    auto status = 100;
-    if (expect_100_continue_handler_) {
-      status = expect_100_continue_handler_(req, res);
-    }
-    switch (status) {
-    case 100:
-    case 417:
-      strm.write_format("HTTP/1.1 %d %s\r\n\r\n", status,
-                        detail::status_message(status));
-      break;
-    default: return write_response(strm, close_connection, req, res);
-    }
-  }
-
-  // Routing
-  bool routed = false;
-#ifdef CPPHTTPLIB_NO_EXCEPTIONS
-  routed = routing(req, res, strm);
-#else
-  try {
-    routed = routing(req, res, strm);
-  } catch (std::exception &e) {
-    if (exception_handler_) {
-      auto ep = std::current_exception();
-      exception_handler_(req, res, ep);
-      routed = true;
-    } else {
-      res.status = 500;
-      std::string val;
-      auto s = e.what();
-      for (size_t i = 0; s[i]; i++) {
-        switch (s[i]) {
-        case '\r': val += "\\r"; break;
-        case '\n': val += "\\n"; break;
-        default: val += s[i]; break;
-        }
-      }
-      res.set_header("EXCEPTION_WHAT", val);
-    }
-  } catch (...) {
-    if (exception_handler_) {
-      auto ep = std::current_exception();
-      exception_handler_(req, res, ep);
-      routed = true;
-    } else {
-      res.status = 500;
-      res.set_header("EXCEPTION_WHAT", "UNKNOWN");
-    }
-  }
-#endif
-
-  if (routed) {
-    if (res.status == -1) { res.status = req.ranges.empty() ?
200 : 206; } - return write_response_with_content(strm, close_connection, req, res); - } else { - if (res.status == -1) { res.status = 404; } - return write_response(strm, close_connection, req, res); - } -} - -inline bool Server::is_valid() const { return true; } - -inline bool Server::process_and_close_socket(socket_t sock) { - auto ret = detail::process_server_socket( - svr_sock_, sock, keep_alive_max_count_, keep_alive_timeout_sec_, - read_timeout_sec_, read_timeout_usec_, write_timeout_sec_, - write_timeout_usec_, - [this](Stream &strm, bool close_connection, bool &connection_closed) { - return process_request(strm, close_connection, connection_closed, - nullptr); - }); - - detail::shutdown_socket(sock); - detail::close_socket(sock); - return ret; -} - -// HTTP client implementation -inline ClientImpl::ClientImpl(const std::string &host) - : ClientImpl(host, 80, std::string(), std::string()) {} - -inline ClientImpl::ClientImpl(const std::string &host, int port) - : ClientImpl(host, port, std::string(), std::string()) {} - -inline ClientImpl::ClientImpl(const std::string &host, int port, - const std::string &client_cert_path, - const std::string &client_key_path) - : host_(host), port_(port), - host_and_port_(adjust_host_string(host) + ":" + std::to_string(port)), - client_cert_path_(client_cert_path), client_key_path_(client_key_path) {} - -inline ClientImpl::~ClientImpl() { - std::lock_guard guard(socket_mutex_); - shutdown_socket(socket_); - close_socket(socket_); -} - -inline bool ClientImpl::is_valid() const { return true; } - -inline void ClientImpl::copy_settings(const ClientImpl &rhs) { - client_cert_path_ = rhs.client_cert_path_; - client_key_path_ = rhs.client_key_path_; - connection_timeout_sec_ = rhs.connection_timeout_sec_; - read_timeout_sec_ = rhs.read_timeout_sec_; - read_timeout_usec_ = rhs.read_timeout_usec_; - write_timeout_sec_ = rhs.write_timeout_sec_; - write_timeout_usec_ = rhs.write_timeout_usec_; - basic_auth_username_ = rhs.basic_auth_username_; - basic_auth_password_ = rhs.basic_auth_password_; - bearer_token_auth_token_ = rhs.bearer_token_auth_token_; -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - digest_auth_username_ = rhs.digest_auth_username_; - digest_auth_password_ = rhs.digest_auth_password_; -#endif - keep_alive_ = rhs.keep_alive_; - follow_location_ = rhs.follow_location_; - url_encode_ = rhs.url_encode_; - address_family_ = rhs.address_family_; - tcp_nodelay_ = rhs.tcp_nodelay_; - socket_options_ = rhs.socket_options_; - compress_ = rhs.compress_; - decompress_ = rhs.decompress_; - interface_ = rhs.interface_; - proxy_host_ = rhs.proxy_host_; - proxy_port_ = rhs.proxy_port_; - proxy_basic_auth_username_ = rhs.proxy_basic_auth_username_; - proxy_basic_auth_password_ = rhs.proxy_basic_auth_password_; - proxy_bearer_token_auth_token_ = rhs.proxy_bearer_token_auth_token_; -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - proxy_digest_auth_username_ = rhs.proxy_digest_auth_username_; - proxy_digest_auth_password_ = rhs.proxy_digest_auth_password_; -#endif -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - ca_cert_file_path_ = rhs.ca_cert_file_path_; - ca_cert_dir_path_ = rhs.ca_cert_dir_path_; - ca_cert_store_ = rhs.ca_cert_store_; -#endif -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - server_certificate_verification_ = rhs.server_certificate_verification_; -#endif - logger_ = rhs.logger_; -} - -inline socket_t ClientImpl::create_client_socket(Error &error) const { - if (!proxy_host_.empty() && proxy_port_ != -1) { - return detail::create_client_socket( - proxy_host_, std::string(), proxy_port_, 
address_family_, tcp_nodelay_,
-        socket_options_, connection_timeout_sec_, connection_timeout_usec_,
-        read_timeout_sec_, read_timeout_usec_, write_timeout_sec_,
-        write_timeout_usec_, interface_, error);
-  }
-
-  // Check if a custom IP address is specified for host_
-  std::string ip;
-  auto it = addr_map_.find(host_);
-  if (it != addr_map_.end()) ip = it->second;
-
-  return detail::create_client_socket(
-      host_, ip, port_, address_family_, tcp_nodelay_, socket_options_,
-      connection_timeout_sec_, connection_timeout_usec_, read_timeout_sec_,
-      read_timeout_usec_, write_timeout_sec_, write_timeout_usec_, interface_,
-      error);
-}
-
-inline bool ClientImpl::create_and_connect_socket(Socket &socket,
-                                                  Error &error) {
-  auto sock = create_client_socket(error);
-  if (sock == INVALID_SOCKET) { return false; }
-  socket.sock = sock;
-  return true;
-}
-
-inline void ClientImpl::shutdown_ssl(Socket & /*socket*/,
-                                     bool /*shutdown_gracefully*/) {
-  // If there are any requests in flight from threads other than us, then it's
-  // a thread-unsafe race because individual ssl* objects are not thread-safe.
-  assert(socket_requests_in_flight_ == 0 ||
-         socket_requests_are_from_thread_ == std::this_thread::get_id());
-}
-
-inline void ClientImpl::shutdown_socket(Socket &socket) {
-  if (socket.sock == INVALID_SOCKET) { return; }
-  detail::shutdown_socket(socket.sock);
-}
-
-inline void ClientImpl::close_socket(Socket &socket) {
-  // If there are requests in flight in another thread, closing the socket
-  // will usually be fine and they will simply receive an error when using
-  // the closed socket, but it is still a bug, since rarely the OS may
-  // reassign the socket id to a new socket, and then they would suddenly
-  // be operating on a live socket that is different from the one they
-  // intended!
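  // The bookkeeping behind these assertions: send_() increments
  // socket_requests_in_flight_ under socket_mutex_ before releasing the lock
  // for the duration of a request, and a scope_exit in send_() decrements it
  // again, so a nonzero count here from another thread means that thread
  // still owns the socket (see ClientImpl::send_ below).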
assert(socket_requests_in_flight_ == 0 ||
-         socket_requests_are_from_thread_ == std::this_thread::get_id());
-
-  // It is also a bug if this happens while SSL is still active
-#ifdef CPPHTTPLIB_OPENSSL_SUPPORT
-  assert(socket.ssl == nullptr);
-#endif
-  if (socket.sock == INVALID_SOCKET) { return; }
-  detail::close_socket(socket.sock);
-  socket.sock = INVALID_SOCKET;
-}
-
-inline bool ClientImpl::read_response_line(Stream &strm, const Request &req,
-                                           Response &res) {
-  std::array<char, 2048> buf{};
-
-  detail::stream_line_reader line_reader(strm, buf.data(), buf.size());
-
-  if (!line_reader.getline()) { return false; }
-
-#ifdef CPPHTTPLIB_ALLOW_LF_AS_LINE_TERMINATOR
-  const static std::regex re("(HTTP/1\\.[01]) (\\d{3})(?: (.*?))?\r?\n");
-#else
-  const static std::regex re("(HTTP/1\\.[01]) (\\d{3})(?: (.*?))?\r\n");
-#endif
-
-  std::cmatch m;
-  if (!std::regex_match(line_reader.ptr(), m, re)) {
-    return req.method == "CONNECT";
-  }
-  res.version = std::string(m[1]);
-  res.status = std::stoi(std::string(m[2]));
-  res.reason = std::string(m[3]);
-
-  // Ignore '100 Continue'
-  while (res.status == 100) {
-    if (!line_reader.getline()) { return false; } // CRLF
-    if (!line_reader.getline()) { return false; } // next response line
-
-    if (!std::regex_match(line_reader.ptr(), m, re)) { return false; }
-    res.version = std::string(m[1]);
-    res.status = std::stoi(std::string(m[2]));
-    res.reason = std::string(m[3]);
-  }
-
-  return true;
-}
-
-inline bool ClientImpl::send(Request &req, Response &res, Error &error) {
-  std::lock_guard<std::recursive_mutex> request_mutex_guard(request_mutex_);
-  auto ret = send_(req, res, error);
-  if (error == Error::SSLPeerCouldBeClosed_) {
-    assert(!ret);
-    ret = send_(req, res, error);
-  }
-  return ret;
-}
-
-inline bool ClientImpl::send_(Request &req, Response &res, Error &error) {
-  {
-    std::lock_guard<std::mutex> guard(socket_mutex_);
-
-    // Set this to false immediately - if it ever gets set to true by the end
-    // of the request, we know another thread instructed us to close the
-    // socket.
-    socket_should_be_closed_when_request_is_done_ = false;
-
-    auto is_alive = false;
-    if (socket_.is_open()) {
-      is_alive = detail::is_socket_alive(socket_.sock);
-      if (!is_alive) {
-        // Attempt to avoid sigpipe by shutting down non-gracefully if it
-        // seems like the other side has already closed the connection. Also,
-        // there cannot be any requests in flight from other threads, since
-        // we locked request_mutex_, so it is safe to close everything
-        // immediately.
-        const bool shutdown_gracefully = false;
-        shutdown_ssl(socket_, shutdown_gracefully);
-        shutdown_socket(socket_);
-        close_socket(socket_);
-      }
-    }
-
-    if (!is_alive) {
-      if (!create_and_connect_socket(socket_, error)) { return false; }
-
-#ifdef CPPHTTPLIB_OPENSSL_SUPPORT
-      // TODO: refactoring
-      if (is_ssl()) {
-        auto &scli = static_cast<SSLClient &>(*this);
-        if (!proxy_host_.empty() && proxy_port_ != -1) {
-          auto success = false;
-          if (!scli.connect_with_proxy(socket_, res, success, error)) {
-            return success;
-          }
-        }
-
-        if (!scli.initialize_ssl(socket_, error)) { return false; }
-      }
-#endif
-    }
-
-    // Mark the current socket as being in use so that it cannot be closed by
-    // anyone else while this request is ongoing, even though we will be
-    // releasing the mutex.
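  // Because socket_mutex_ is released while the request runs, another thread
  // may still ask for the connection to be torn down; that intent is recorded
  // in socket_should_be_closed_when_request_is_done_ (set above, honored in
  // the scope_exit below) instead of closing a socket that is in use.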
- if (socket_requests_in_flight_ > 1) { - assert(socket_requests_are_from_thread_ == std::this_thread::get_id()); - } - socket_requests_in_flight_ += 1; - socket_requests_are_from_thread_ = std::this_thread::get_id(); - } - - for (const auto &header : default_headers_) { - if (req.headers.find(header.first) == req.headers.end()) { - req.headers.insert(header); - } - } - - auto ret = false; - auto close_connection = !keep_alive_; - - auto se = detail::scope_exit([&]() { - // Briefly lock mutex in order to mark that a request is no longer ongoing - std::lock_guard guard(socket_mutex_); - socket_requests_in_flight_ -= 1; - if (socket_requests_in_flight_ <= 0) { - assert(socket_requests_in_flight_ == 0); - socket_requests_are_from_thread_ = std::thread::id(); - } - - if (socket_should_be_closed_when_request_is_done_ || close_connection || - !ret) { - shutdown_ssl(socket_, true); - shutdown_socket(socket_); - close_socket(socket_); - } - }); - - ret = process_socket(socket_, [&](Stream &strm) { - return handle_request(strm, req, res, close_connection, error); - }); - - if (!ret) { - if (error == Error::Success) { error = Error::Unknown; } - } - - return ret; -} - -inline Result ClientImpl::send(const Request &req) { - auto req2 = req; - return send_(std::move(req2)); -} - -inline Result ClientImpl::send_(Request &&req) { - auto res = detail::make_unique(); - auto error = Error::Success; - auto ret = send(req, *res, error); - return Result{ret ? std::move(res) : nullptr, error, std::move(req.headers)}; -} - -inline bool ClientImpl::handle_request(Stream &strm, Request &req, - Response &res, bool close_connection, - Error &error) { - if (req.path.empty()) { - error = Error::Connection; - return false; - } - - auto req_save = req; - - bool ret; - - if (!is_ssl() && !proxy_host_.empty() && proxy_port_ != -1) { - auto req2 = req; - req2.path = "http://" + host_and_port_ + req.path; - ret = process_request(strm, req2, res, close_connection, error); - req = req2; - req.path = req_save.path; - } else { - ret = process_request(strm, req, res, close_connection, error); - } - - if (!ret) { return false; } - - if (300 < res.status && res.status < 400 && follow_location_) { - req = req_save; - ret = redirect(req, res, error); - } - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - if ((res.status == 401 || res.status == 407) && - req.authorization_count_ < 5) { - auto is_proxy = res.status == 407; - const auto &username = - is_proxy ? proxy_digest_auth_username_ : digest_auth_username_; - const auto &password = - is_proxy ? proxy_digest_auth_password_ : digest_auth_password_; - - if (!username.empty() && !password.empty()) { - std::map auth; - if (detail::parse_www_authenticate(res, auth, is_proxy)) { - Request new_req = req; - new_req.authorization_count_ += 1; - new_req.headers.erase(is_proxy ? 
"Proxy-Authorization" - : "Authorization"); - new_req.headers.insert(detail::make_digest_authentication_header( - req, auth, new_req.authorization_count_, detail::random_string(10), - username, password, is_proxy)); - - Response new_res; - - ret = send(new_req, new_res, error); - if (ret) { res = new_res; } - } - } - } -#endif - - return ret; -} - -inline bool ClientImpl::redirect(Request &req, Response &res, Error &error) { - if (req.redirect_count_ == 0) { - error = Error::ExceedRedirectCount; - return false; - } - - auto location = res.get_header_value("location"); - if (location.empty()) { return false; } - - const static std::regex re( - R"((?:(https?):)?(?://(?:\[([\d:]+)\]|([^:/?#]+))(?::(\d+))?)?([^?#]*)(\?[^#]*)?(?:#.*)?)"); - - std::smatch m; - if (!std::regex_match(location, m, re)) { return false; } - - auto scheme = is_ssl() ? "https" : "http"; - - auto next_scheme = m[1].str(); - auto next_host = m[2].str(); - if (next_host.empty()) { next_host = m[3].str(); } - auto port_str = m[4].str(); - auto next_path = m[5].str(); - auto next_query = m[6].str(); - - auto next_port = port_; - if (!port_str.empty()) { - next_port = std::stoi(port_str); - } else if (!next_scheme.empty()) { - next_port = next_scheme == "https" ? 443 : 80; - } - - if (next_scheme.empty()) { next_scheme = scheme; } - if (next_host.empty()) { next_host = host_; } - if (next_path.empty()) { next_path = "/"; } - - auto path = detail::decode_url(next_path, true) + next_query; - - if (next_scheme == scheme && next_host == host_ && next_port == port_) { - return detail::redirect(*this, req, res, path, location, error); - } else { - if (next_scheme == "https") { -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT - SSLClient cli(next_host.c_str(), next_port); - cli.copy_settings(*this); - if (ca_cert_store_) { cli.set_ca_cert_store(ca_cert_store_); } - return detail::redirect(cli, req, res, path, location, error); -#else - return false; -#endif - } else { - ClientImpl cli(next_host.c_str(), next_port); - cli.copy_settings(*this); - return detail::redirect(cli, req, res, path, location, error); - } - } -} - -inline bool ClientImpl::write_content_with_provider(Stream &strm, - const Request &req, - Error &error) { - auto is_shutting_down = []() { return false; }; - - if (req.is_chunked_content_provider_) { - // TODO: Brotli support - std::unique_ptr compressor; -#ifdef CPPHTTPLIB_ZLIB_SUPPORT - if (compress_) { - compressor = detail::make_unique(); - } else -#endif - { - compressor = detail::make_unique(); - } - - return detail::write_content_chunked(strm, req.content_provider_, - is_shutting_down, *compressor, error); - } else { - return detail::write_content(strm, req.content_provider_, 0, - req.content_length_, is_shutting_down, error); - } -} - -inline bool ClientImpl::write_request(Stream &strm, Request &req, - bool close_connection, Error &error) { - // Prepare additional headers - if (close_connection) { - if (!req.has_header("Connection")) { - req.headers.emplace("Connection", "close"); - } - } - - if (!req.has_header("Host")) { - if (is_ssl()) { - if (port_ == 443) { - req.headers.emplace("Host", host_); - } else { - req.headers.emplace("Host", host_and_port_); - } - } else { - if (port_ == 80) { - req.headers.emplace("Host", host_); - } else { - req.headers.emplace("Host", host_and_port_); - } - } - } - - if (!req.has_header("Accept")) { req.headers.emplace("Accept", "*/*"); } - -#ifndef CPPHTTPLIB_NO_DEFAULT_USER_AGENT - if (!req.has_header("User-Agent")) { - auto agent = std::string("cpp-httplib/") + CPPHTTPLIB_VERSION; - 
req.headers.emplace("User-Agent", agent);
-  }
-#endif
-
-  if (req.body.empty()) {
-    if (req.content_provider_) {
-      if (!req.is_chunked_content_provider_) {
-        if (!req.has_header("Content-Length")) {
-          auto length = std::to_string(req.content_length_);
-          req.headers.emplace("Content-Length", length);
-        }
-      }
-    } else {
-      if (req.method == "POST" || req.method == "PUT" ||
-          req.method == "PATCH") {
-        req.headers.emplace("Content-Length", "0");
-      }
-    }
-  } else {
-    if (!req.has_header("Content-Type")) {
-      req.headers.emplace("Content-Type", "text/plain");
-    }
-
-    if (!req.has_header("Content-Length")) {
-      auto length = std::to_string(req.body.size());
-      req.headers.emplace("Content-Length", length);
-    }
-  }
-
-  if (!basic_auth_password_.empty() || !basic_auth_username_.empty()) {
-    if (!req.has_header("Authorization")) {
-      req.headers.insert(make_basic_authentication_header(
-          basic_auth_username_, basic_auth_password_, false));
-    }
-  }
-
-  if (!proxy_basic_auth_username_.empty() &&
-      !proxy_basic_auth_password_.empty()) {
-    if (!req.has_header("Proxy-Authorization")) {
-      req.headers.insert(make_basic_authentication_header(
-          proxy_basic_auth_username_, proxy_basic_auth_password_, true));
-    }
-  }
-
-  if (!bearer_token_auth_token_.empty()) {
-    if (!req.has_header("Authorization")) {
-      req.headers.insert(make_bearer_token_authentication_header(
-          bearer_token_auth_token_, false));
-    }
-  }
-
-  if (!proxy_bearer_token_auth_token_.empty()) {
-    if (!req.has_header("Proxy-Authorization")) {
-      req.headers.insert(make_bearer_token_authentication_header(
-          proxy_bearer_token_auth_token_, true));
-    }
-  }
-
-  // Request line and headers
-  {
-    detail::BufferStream bstrm;
-
-    const auto &path = url_encode_ ? detail::encode_url(req.path) : req.path;
-    bstrm.write_format("%s %s HTTP/1.1\r\n", req.method.c_str(), path.c_str());
-
-    detail::write_headers(bstrm, req.headers);
-
-    // Flush buffer
-    auto &data = bstrm.get_buffer();
-    if (!detail::write_data(strm, data.data(), data.size())) {
-      error = Error::Write;
-      return false;
-    }
-  }
-
-  // Body
-  if (req.body.empty()) {
-    return write_content_with_provider(strm, req, error);
-  }
-
-  if (!detail::write_data(strm, req.body.data(), req.body.size())) {
-    error = Error::Write;
-    return false;
-  }
-
-  return true;
-}
-
-inline std::unique_ptr<Response> ClientImpl::send_with_content_provider(
-    Request &req, const char *body, size_t content_length,
-    ContentProvider content_provider,
-    ContentProviderWithoutLength content_provider_without_length,
-    const std::string &content_type, Error &error) {
-  if (!content_type.empty()) {
-    req.headers.emplace("Content-Type", content_type);
-  }
-
-#ifdef CPPHTTPLIB_ZLIB_SUPPORT
-  if (compress_) { req.headers.emplace("Content-Encoding", "gzip"); }
-#endif
-
-#ifdef CPPHTTPLIB_ZLIB_SUPPORT
-  if (compress_ && !content_provider_without_length) {
-    // TODO: Brotli support
-    detail::gzip_compressor compressor;
-
-    if (content_provider) {
-      auto ok = true;
-      size_t offset = 0;
-      DataSink data_sink;
-
-      data_sink.write = [&](const char *data, size_t data_len) -> bool {
-        if (ok) {
-          auto last = offset + data_len == content_length;
-
-          auto ret = compressor.compress(
-              data, data_len, last,
-              [&](const char *compressed_data, size_t compressed_data_len) {
-                req.body.append(compressed_data, compressed_data_len);
-                return true;
-              });
-
-          if (ret) {
-            offset += data_len;
-          } else {
-            ok = false;
-          }
-        }
-        return ok;
-      };
-
-      while (ok && offset < content_length) {
-        if (!content_provider(offset, content_length - offset, data_sink)) {
-          error = Error::Canceled;
-          return nullptr;
-        }
-      }
-    } else {
-      if (!compressor.compress(body, content_length, true,
-                               [&](const char *data, size_t data_len) {
-                                 req.body.append(data, data_len);
-                                 return true;
-                               })) {
-        error = Error::Compression;
-        return nullptr;
-      }
-    }
-  } else
-#endif
-  {
-    if (content_provider) {
-      req.content_length_ = content_length;
-      req.content_provider_ = std::move(content_provider);
-      req.is_chunked_content_provider_ = false;
-    } else if (content_provider_without_length) {
-      req.content_length_ = 0;
-      req.content_provider_ = detail::ContentProviderAdapter(
-          std::move(content_provider_without_length));
-      req.is_chunked_content_provider_ = true;
-      req.headers.emplace("Transfer-Encoding", "chunked");
-    } else {
-      req.body.assign(body, content_length);
-      ;
-    }
-  }
-
-  auto res = detail::make_unique<Response>();
-  return send(req, *res, error) ? std::move(res) : nullptr;
-}
-
-inline Result ClientImpl::send_with_content_provider(
-    const std::string &method, const std::string &path, const Headers &headers,
-    const char *body, size_t content_length, ContentProvider content_provider,
-    ContentProviderWithoutLength content_provider_without_length,
-    const std::string &content_type) {
-  Request req;
-  req.method = method;
-  req.headers = headers;
-  req.path = path;
-
-  auto error = Error::Success;
-
-  auto res = send_with_content_provider(
-      req, body, content_length, std::move(content_provider),
-      std::move(content_provider_without_length), content_type, error);
-
-  return Result{std::move(res), error, std::move(req.headers)};
-}
-
-inline std::string
-ClientImpl::adjust_host_string(const std::string &host) const {
-  if (host.find(':') != std::string::npos) { return "[" + host + "]"; }
-  return host;
-}
-
-inline bool ClientImpl::process_request(Stream &strm, Request &req,
-                                        Response &res, bool close_connection,
-                                        Error &error) {
-  // Send request
-  if (!write_request(strm, req, close_connection, error)) { return false; }
-
-#ifdef CPPHTTPLIB_OPENSSL_SUPPORT
-  if (is_ssl()) {
-    auto is_proxy_enabled = !proxy_host_.empty() && proxy_port_ != -1;
-    if (!is_proxy_enabled) {
-      char buf[1];
-      if (SSL_peek(socket_.ssl, buf, 1) == 0 &&
-          SSL_get_error(socket_.ssl, 0) == SSL_ERROR_ZERO_RETURN) {
-        error = Error::SSLPeerCouldBeClosed_;
-        return false;
-      }
-    }
-  }
-#endif
-
-  // Receive response and headers
-  if (!read_response_line(strm, req, res) ||
-      !detail::read_headers(strm, res.headers)) {
-    error = Error::Read;
-    return false;
-  }
-
-  // Body
-  if ((res.status != 204) && req.method != "HEAD" && req.method != "CONNECT") {
-    auto redirect = 300 < res.status && res.status < 400 && follow_location_;
-
-    if (req.response_handler && !redirect) {
-      if (!req.response_handler(res)) {
-        error = Error::Canceled;
-        return false;
-      }
-    }
-
-    auto out =
-        req.content_receiver
-            ? static_cast<ContentReceiverWithProgress>(
-                  [&](const char *buf, size_t n, uint64_t off, uint64_t len) {
-                    if (redirect) { return true; }
-                    auto ret = req.content_receiver(buf, n, off, len);
-                    if (!ret) { error = Error::Canceled; }
-                    return ret;
-                  })
-            : static_cast<ContentReceiverWithProgress>(
-                  [&](const char *buf, size_t n, uint64_t /*off*/,
-                      uint64_t /*len*/) {
-                    if (res.body.size() + n > res.body.max_size()) {
-                      return false;
-                    }
-                    res.body.append(buf, n);
-                    return true;
-                  });
-
-    auto progress = [&](uint64_t current, uint64_t total) {
-      if (!req.progress || redirect) { return true; }
-      auto ret = req.progress(current, total);
-      if (!ret) { error = Error::Canceled; }
-      return ret;
-    };
-
-    int dummy_status;
-    if (!detail::read_content(strm, res, (std::numeric_limits<size_t>::max)(),
-                              dummy_status, std::move(progress), std::move(out),
-                              decompress_)) {
-      if (error != Error::Canceled) { error = Error::Read; }
-      return false;
-    }
-  }
-
-  if (res.get_header_value("Connection") == "close" ||
-      (res.version == "HTTP/1.0" && res.reason != "Connection established")) {
-    // TODO this requires a not-entirely-obvious chain of calls to be correct
-    // for this to be safe. Maybe a code refactor (such as moving this out to
-    // the send function and getting rid of the recursiveness of the mutex)
-    // could make this more obvious.
-
-    // This is safe to call because process_request is only called by
-    // handle_request which is only called by send, which locks the request
-    // mutex during the process. It would be a bug to call it from a different
-    // thread since it's a thread-safety issue to do these things to the socket
-    // if another thread is using the socket.
-    std::lock_guard<std::mutex> guard(socket_mutex_);
-    shutdown_ssl(socket_, true);
-    shutdown_socket(socket_);
-    close_socket(socket_);
-  }
-
-  // Log
-  if (logger_) { logger_(req, res); }
-
-  return true;
-}
-
-inline ContentProviderWithoutLength ClientImpl::get_multipart_content_provider(
-    const std::string &boundary, const MultipartFormDataItems &items,
-    const MultipartFormDataProviderItems &provider_items) {
-  size_t cur_item = 0, cur_start = 0;
-  // cur_item and cur_start are copied to within the std::function and maintain
-  // state between successive calls
-  return [&, cur_item, cur_start](size_t offset,
-                                  DataSink &sink) mutable -> bool {
-    if (!offset && items.size()) {
-      sink.os << detail::serialize_multipart_formdata(items, boundary, false);
-      return true;
-    } else if (cur_item < provider_items.size()) {
-      if (!cur_start) {
-        const auto &begin = detail::serialize_multipart_formdata_item_begin(
-            provider_items[cur_item], boundary);
-        offset += begin.size();
-        cur_start = offset;
-        sink.os << begin;
-      }
-
-      DataSink cur_sink;
-      bool has_data = true;
-      cur_sink.write = sink.write;
-      cur_sink.done = [&]() { has_data = false; };
-
-      if (!provider_items[cur_item].provider(offset - cur_start, cur_sink))
-        return false;
-
-      if (!has_data) {
-        sink.os << detail::serialize_multipart_formdata_item_end();
-        cur_item++;
-        cur_start = 0;
-      }
-      return true;
-    } else {
-      sink.os << detail::serialize_multipart_formdata_finish(boundary);
-      sink.done();
-      return true;
-    }
-  };
-}
-
-inline bool
-ClientImpl::process_socket(const Socket &socket,
-                           std::function<bool(Stream &strm)> callback) {
-  return detail::process_client_socket(
-      socket.sock, read_timeout_sec_, read_timeout_usec_, write_timeout_sec_,
-      write_timeout_usec_, std::move(callback));
-}
-
-inline bool ClientImpl::is_ssl() const { return false; }
-
-inline Result ClientImpl::Get(const std::string &path) {
-  return Get(path, Headers(), Progress());
-}
-
-inline Result ClientImpl::Get(const std::string &path, Progress progress) {
-  return Get(path, Headers(), std::move(progress));
-}
-
-inline Result ClientImpl::Get(const std::string &path, const Headers &headers) {
-  return Get(path, headers, Progress());
-}
-
-inline Result ClientImpl::Get(const std::string &path, const Headers &headers,
-                              Progress progress) {
-  Request req;
-  req.method = "GET";
-  req.path = path;
-  req.headers = headers;
-  req.progress = std::move(progress);
-
-  return send_(std::move(req));
-}
-
-inline Result ClientImpl::Get(const std::string &path,
-                              ContentReceiver content_receiver) {
-  return Get(path, Headers(), nullptr, std::move(content_receiver), nullptr);
-}
-
-inline Result ClientImpl::Get(const std::string &path,
-                              ContentReceiver content_receiver,
-                              Progress progress) {
-  return Get(path, Headers(), nullptr, std::move(content_receiver),
-             std::move(progress));
-}
-
-inline Result ClientImpl::Get(const std::string &path, const Headers &headers,
-                              ContentReceiver content_receiver) {
-  return Get(path, headers, nullptr, std::move(content_receiver), nullptr);
-}
-
-inline Result ClientImpl::Get(const std::string &path, const Headers &headers,
-                              ContentReceiver content_receiver,
-                              Progress progress) {
-  return Get(path, headers, nullptr, std::move(content_receiver),
-             std::move(progress));
-}
-
-inline Result ClientImpl::Get(const std::string &path,
-                              ResponseHandler response_handler,
-                              ContentReceiver content_receiver) {
-  return Get(path, Headers(), std::move(response_handler),
-             std::move(content_receiver), nullptr);
-}
-
-inline Result ClientImpl::Get(const std::string &path, const Headers &headers,
-                              ResponseHandler response_handler,
-                              ContentReceiver content_receiver) {
-  return Get(path, headers, std::move(response_handler),
-             std::move(content_receiver), nullptr);
-}
-
-inline Result ClientImpl::Get(const std::string &path,
-                              ResponseHandler response_handler,
-                              ContentReceiver content_receiver,
-                              Progress progress) {
-  return Get(path, Headers(), std::move(response_handler),
-             std::move(content_receiver), std::move(progress));
-}
-
-inline Result ClientImpl::Get(const std::string &path, const Headers &headers,
-                              ResponseHandler response_handler,
-                              ContentReceiver content_receiver,
-                              Progress progress) {
-  Request req;
-  req.method = "GET";
-  req.path = path;
-  req.headers = headers;
-  req.response_handler = std::move(response_handler);
-  req.content_receiver =
-      [content_receiver](const char *data, size_t data_length,
-                         uint64_t /*offset*/, uint64_t /*total_length*/) {
-        return content_receiver(data, data_length);
-      };
-  req.progress = std::move(progress);
-
-  return send_(std::move(req));
-}
-
-inline Result ClientImpl::Get(const std::string &path, const Params &params,
-                              const Headers &headers, Progress progress) {
-  if (params.empty()) { return Get(path, headers); }
-
-  std::string path_with_query = append_query_params(path, params);
-  return Get(path_with_query.c_str(), headers, progress);
-}
-
-inline Result ClientImpl::Get(const std::string &path, const Params &params,
-                              const Headers &headers,
-                              ContentReceiver content_receiver,
-                              Progress progress) {
-  return Get(path, params, headers, nullptr, content_receiver, progress);
-}
-
-inline Result ClientImpl::Get(const std::string &path, const Params &params,
-                              const Headers &headers,
-                              ResponseHandler response_handler,
-                              ContentReceiver content_receiver,
-                              Progress progress) {
-  if (params.empty()) {
-    return Get(path, headers, response_handler, content_receiver, progress);
-  }
-
-  std::string path_with_query = append_query_params(path, params);
-  return Get(path_with_query.c_str(), headers, response_handler,
-             content_receiver, progress);
-}
-
-inline Result ClientImpl::Head(const std::string &path) {
-  return Head(path, Headers());
-}
-
-inline Result ClientImpl::Head(const std::string &path,
-                               const Headers &headers) {
-  Request req;
-  req.method = "HEAD";
-  req.headers = headers;
-  req.path = path;
-
-  return send_(std::move(req));
-}
-
-inline Result ClientImpl::Post(const std::string &path) {
-  return Post(path, std::string(), std::string());
-}
-
-inline Result ClientImpl::Post(const std::string &path,
-                               const Headers &headers) {
-  return Post(path, headers, nullptr, 0, std::string());
-}
-
-inline Result ClientImpl::Post(const std::string &path, const char *body,
-                               size_t content_length,
-                               const std::string &content_type) {
-  return Post(path, Headers(), body, content_length, content_type);
-}
-
-inline Result ClientImpl::Post(const std::string &path, const Headers &headers,
-                               const char *body, size_t content_length,
-                               const std::string &content_type) {
-  return send_with_content_provider("POST", path, headers, body, content_length,
-                                    nullptr, nullptr, content_type);
-}
-
-inline Result ClientImpl::Post(const std::string &path, const std::string &body,
-                               const std::string &content_type) {
-  return Post(path, Headers(), body, content_type);
-}
-
-inline Result ClientImpl::Post(const std::string &path, const Headers &headers,
-                               const std::string &body,
-                               const std::string &content_type) {
-  return send_with_content_provider("POST", path, headers, body.data(),
-                                    body.size(), nullptr, nullptr,
-                                    content_type);
-}
-
-inline Result ClientImpl::Post(const std::string &path, const Params &params) {
-  return Post(path, Headers(), params);
-}
-
-inline Result ClientImpl::Post(const std::string &path, size_t content_length,
-                               ContentProvider content_provider,
-                               const std::string &content_type) {
-  return Post(path, Headers(), content_length, std::move(content_provider),
-              content_type);
-}
-
-inline Result ClientImpl::Post(const std::string &path,
-                               ContentProviderWithoutLength content_provider,
-                               const std::string &content_type) {
-  return Post(path, Headers(), std::move(content_provider), content_type);
-}
-
-inline Result ClientImpl::Post(const std::string &path, const Headers &headers,
-                               size_t content_length,
-                               ContentProvider content_provider,
-                               const std::string &content_type) {
-  return send_with_content_provider("POST", path, headers, nullptr,
-                                    content_length, std::move(content_provider),
-                                    nullptr, content_type);
-}
-
-inline Result ClientImpl::Post(const std::string &path, const Headers &headers,
-                               ContentProviderWithoutLength content_provider,
-                               const std::string &content_type) {
-  return send_with_content_provider("POST", path, headers, nullptr, 0, nullptr,
-                                    std::move(content_provider), content_type);
-}
-
-inline Result ClientImpl::Post(const std::string &path, const Headers &headers,
-                               const Params &params) {
-  auto query = detail::params_to_query_str(params);
-  return Post(path, headers, query, "application/x-www-form-urlencoded");
-}
-
-inline Result ClientImpl::Post(const std::string &path,
-                               const MultipartFormDataItems &items) {
-  return Post(path, Headers(), items);
-}
-
-inline Result ClientImpl::Post(const std::string &path, const Headers &headers,
-                               const MultipartFormDataItems &items) {
-  const auto &boundary = detail::make_multipart_data_boundary();
-  const auto &content_type =
-      detail::serialize_multipart_formdata_get_content_type(boundary);
-  const auto &body = detail::serialize_multipart_formdata(items, boundary);
-  return Post(path, headers, body, content_type.c_str());
-}
-
-inline Result ClientImpl::Post(const std::string &path, const Headers &headers,
-                               const MultipartFormDataItems &items,
-                               const std::string &boundary) {
-  if (!detail::is_multipart_boundary_chars_valid(boundary)) {
-    return Result{nullptr, Error::UnsupportedMultipartBoundaryChars};
-  }
-
-  const auto &content_type =
-      detail::serialize_multipart_formdata_get_content_type(boundary);
-  const auto &body = detail::serialize_multipart_formdata(items, boundary);
-  return Post(path, headers, body, content_type.c_str());
-}
-
-inline Result
-ClientImpl::Post(const std::string &path, const Headers &headers,
-                 const MultipartFormDataItems &items,
-                 const MultipartFormDataProviderItems &provider_items) {
-  const auto &boundary = detail::make_multipart_data_boundary();
-  const auto &content_type =
-      detail::serialize_multipart_formdata_get_content_type(boundary);
-  return send_with_content_provider(
-      "POST", path, headers, nullptr, 0, nullptr,
-      get_multipart_content_provider(boundary, items, provider_items),
-      content_type);
-}
-
-inline Result ClientImpl::Put(const std::string &path) {
-  return Put(path, std::string(), std::string());
-}
-
-inline Result ClientImpl::Put(const std::string &path, const char *body,
-                              size_t content_length,
-                              const std::string &content_type) {
-  return Put(path, Headers(), body, content_length, content_type);
-}
-
-inline Result ClientImpl::Put(const std::string &path, const Headers &headers,
-                              const char *body, size_t content_length,
-                              const std::string &content_type) {
-  return send_with_content_provider("PUT", path, headers, body, content_length,
-                                    nullptr, nullptr, content_type);
-}
-
-inline Result ClientImpl::Put(const std::string &path, const std::string &body,
-                              const std::string &content_type) {
-  return Put(path, Headers(), body, content_type);
-}
-
-inline Result ClientImpl::Put(const std::string &path, const Headers &headers,
-                              const std::string &body,
-                              const std::string &content_type) {
-  return send_with_content_provider("PUT", path, headers, body.data(),
-                                    body.size(), nullptr, nullptr,
-                                    content_type);
-}
-
-inline Result ClientImpl::Put(const std::string &path, size_t content_length,
-                              ContentProvider content_provider,
-                              const std::string &content_type) {
-  return Put(path, Headers(), content_length, std::move(content_provider),
-             content_type);
-}
-
-inline Result ClientImpl::Put(const std::string &path,
-                              ContentProviderWithoutLength content_provider,
-                              const std::string &content_type) {
-  return Put(path, Headers(), std::move(content_provider), content_type);
-}
-
-inline Result ClientImpl::Put(const std::string &path, const Headers &headers,
-                              size_t content_length,
-                              ContentProvider content_provider,
-                              const std::string &content_type) {
-  return send_with_content_provider("PUT", path, headers, nullptr,
-                                    content_length, std::move(content_provider),
-                                    nullptr, content_type);
-}
-
-inline Result ClientImpl::Put(const std::string &path, const Headers &headers,
-                              ContentProviderWithoutLength content_provider,
-                              const std::string &content_type) {
-  return send_with_content_provider("PUT", path, headers, nullptr, 0, nullptr,
-                                    std::move(content_provider), content_type);
-}
-
-inline Result ClientImpl::Put(const std::string &path, const Params &params) {
-  return Put(path, Headers(), params);
-}
-
-inline Result ClientImpl::Put(const std::string &path, const Headers &headers,
-                              const Params &params) {
-  auto query = detail::params_to_query_str(params);
-  return Put(path, headers, query, "application/x-www-form-urlencoded");
-}
-
-inline Result ClientImpl::Put(const std::string &path,
-                              const MultipartFormDataItems &items) {
-  return Put(path, Headers(), items);
-}
-
-inline Result ClientImpl::Put(const std::string &path, const Headers &headers,
-                              const MultipartFormDataItems &items) {
-  const auto &boundary = detail::make_multipart_data_boundary();
-  const auto &content_type =
-      detail::serialize_multipart_formdata_get_content_type(boundary);
-  const auto &body = detail::serialize_multipart_formdata(items, boundary);
-  return Put(path, headers, body, content_type);
-}
-
-inline Result ClientImpl::Put(const std::string &path, const Headers &headers,
-                              const MultipartFormDataItems &items,
-                              const std::string &boundary) {
-  if (!detail::is_multipart_boundary_chars_valid(boundary)) {
-    return Result{nullptr, Error::UnsupportedMultipartBoundaryChars};
-  }
-
-  const auto &content_type =
-      detail::serialize_multipart_formdata_get_content_type(boundary);
-  const auto &body = detail::serialize_multipart_formdata(items, boundary);
-  return Put(path, headers, body, content_type);
-}
-
-inline Result
-ClientImpl::Put(const std::string &path, const Headers &headers,
-                const MultipartFormDataItems &items,
-                const MultipartFormDataProviderItems &provider_items) {
-  const auto &boundary = detail::make_multipart_data_boundary();
-  const auto &content_type =
-      detail::serialize_multipart_formdata_get_content_type(boundary);
-  return send_with_content_provider(
-      "PUT", path, headers, nullptr, 0, nullptr,
-      get_multipart_content_provider(boundary, items, provider_items),
-      content_type);
-}
-inline Result ClientImpl::Patch(const std::string &path) {
-  return Patch(path, std::string(), std::string());
-}
-
-inline Result ClientImpl::Patch(const std::string &path, const char *body,
-                                size_t content_length,
-                                const std::string &content_type) {
-  return Patch(path, Headers(), body, content_length, content_type);
-}
-
-inline Result ClientImpl::Patch(const std::string &path, const Headers &headers,
-                                const char *body, size_t content_length,
-                                const std::string &content_type) {
-  return send_with_content_provider("PATCH", path, headers, body,
-                                    content_length, nullptr, nullptr,
-                                    content_type);
-}
-
-inline Result ClientImpl::Patch(const std::string &path,
-                                const std::string &body,
-                                const std::string &content_type) {
-  return Patch(path, Headers(), body, content_type);
-}
-
-inline Result ClientImpl::Patch(const std::string &path, const Headers &headers,
-                                const std::string &body,
-                                const std::string &content_type) {
-  return send_with_content_provider("PATCH", path, headers, body.data(),
-                                    body.size(), nullptr, nullptr,
-                                    content_type);
-}
-
-inline Result ClientImpl::Patch(const std::string &path, size_t content_length,
-                                ContentProvider content_provider,
-                                const std::string &content_type) {
-  return Patch(path, Headers(), content_length, std::move(content_provider),
-               content_type);
-}
-
-inline Result ClientImpl::Patch(const std::string &path,
-                                ContentProviderWithoutLength content_provider,
-                                const std::string &content_type) {
-  return Patch(path, Headers(), std::move(content_provider), content_type);
-}
-
-inline Result ClientImpl::Patch(const std::string &path, const Headers &headers,
-                                size_t content_length,
-                                ContentProvider content_provider,
-                                const std::string &content_type) {
-  return send_with_content_provider("PATCH", path, headers, nullptr,
-                                    content_length, std::move(content_provider),
-                                    nullptr, content_type);
-}
-
-inline Result ClientImpl::Patch(const std::string &path, const Headers &headers,
-                                ContentProviderWithoutLength content_provider,
-                                const std::string &content_type) {
-  return send_with_content_provider("PATCH", path, headers, nullptr, 0, nullptr,
-                                    std::move(content_provider), content_type);
-}
-
-inline Result ClientImpl::Delete(const std::string &path) {
-  return Delete(path, Headers(), std::string(), std::string());
-}
-
-inline Result ClientImpl::Delete(const std::string &path,
-                                 const Headers &headers) {
-  return Delete(path, headers, std::string(), std::string());
-}
-
-inline Result ClientImpl::Delete(const std::string &path, const char *body,
-                                 size_t content_length,
-                                 const std::string &content_type) {
-  return Delete(path, Headers(), body, content_length, content_type);
-}
-
-inline Result ClientImpl::Delete(const std::string &path,
-                                 const Headers &headers, const char *body,
-                                 size_t content_length,
-                                 const std::string &content_type) {
-  Request req;
-  req.method = "DELETE";
-  req.headers = headers;
-  req.path = path;
-
-  if (!content_type.empty()) {
-    req.headers.emplace("Content-Type", content_type);
-  }
-  req.body.assign(body, content_length);
-
-  return send_(std::move(req));
-}
-
-inline Result ClientImpl::Delete(const std::string &path,
-                                 const std::string &body,
-                                 const std::string &content_type) {
-  return Delete(path, Headers(), body.data(), body.size(), content_type);
-}
-
-inline Result ClientImpl::Delete(const std::string &path,
-                                 const Headers &headers,
-                                 const std::string &body,
-                                 const std::string &content_type) {
-  return Delete(path, headers, body.data(), body.size(), content_type);
-}
-
-inline Result ClientImpl::Options(const std::string &path) {
-  return Options(path, Headers());
-}
-
-inline Result ClientImpl::Options(const std::string &path,
-                                  const Headers &headers) {
-  Request req;
-  req.method = "OPTIONS";
-  req.headers = headers;
-  req.path = path;
-
-  return send_(std::move(req));
-}
-
-inline size_t ClientImpl::is_socket_open() const {
-  std::lock_guard<std::mutex> guard(socket_mutex_);
-  return socket_.is_open();
-}
-
-inline socket_t ClientImpl::socket() const { return socket_.sock; }
-
-inline void ClientImpl::stop() {
-  std::lock_guard<std::mutex> guard(socket_mutex_);
-
-  // If there is anything ongoing right now, the ONLY thread-safe thing we can
-  // do is to shutdown_socket, so that threads using this socket suddenly
-  // discover they can't read/write any more and error out. Everything else
-  // (closing the socket, shutting ssl down) is unsafe because these actions are
-  // not thread-safe.
-  if (socket_requests_in_flight_ > 0) {
-    shutdown_socket(socket_);
-
-    // Aside from that, we set a flag for the socket to be closed when we're
-    // done.
-    socket_should_be_closed_when_request_is_done_ = true;
-    return;
-  }
-
-  // Otherwise, still holding the mutex, we can shut everything down ourselves
-  shutdown_ssl(socket_, true);
-  shutdown_socket(socket_);
-  close_socket(socket_);
-}
-
-inline void ClientImpl::set_connection_timeout(time_t sec, time_t usec) {
-  connection_timeout_sec_ = sec;
-  connection_timeout_usec_ = usec;
-}
-
-inline void ClientImpl::set_read_timeout(time_t sec, time_t usec) {
-  read_timeout_sec_ = sec;
-  read_timeout_usec_ = usec;
-}
-
-inline void ClientImpl::set_write_timeout(time_t sec, time_t usec) {
-  write_timeout_sec_ = sec;
-  write_timeout_usec_ = usec;
-}
-
-inline void ClientImpl::set_basic_auth(const std::string &username,
-                                       const std::string &password) {
-  basic_auth_username_ = username;
-  basic_auth_password_ = password;
-}
-
-inline void ClientImpl::set_bearer_token_auth(const std::string &token) {
-  bearer_token_auth_token_ = token;
-}
-
-#ifdef CPPHTTPLIB_OPENSSL_SUPPORT
-inline void ClientImpl::set_digest_auth(const std::string &username,
-                                        const std::string &password) {
-  digest_auth_username_ = username;
-  digest_auth_password_ = password;
-}
-#endif
-
-inline void ClientImpl::set_keep_alive(bool on) { keep_alive_ = on; }
-
-inline void ClientImpl::set_follow_location(bool on) { follow_location_ = on; }
-
-inline void ClientImpl::set_url_encode(bool on) { url_encode_ = on; }
-
-inline void
-ClientImpl::set_hostname_addr_map(std::map<std::string, std::string> addr_map) {
-  addr_map_ = std::move(addr_map);
-}
-
-inline void ClientImpl::set_default_headers(Headers headers) {
-  default_headers_ = std::move(headers);
-}
-
-inline void ClientImpl::set_address_family(int family) {
-  address_family_ = family;
-}
-
-inline void ClientImpl::set_tcp_nodelay(bool on) { tcp_nodelay_ = on; }
-
-inline void ClientImpl::set_socket_options(SocketOptions socket_options) {
-  socket_options_ = std::move(socket_options);
-}
-
-inline void ClientImpl::set_compress(bool on) { compress_ = on; }
-
-inline void ClientImpl::set_decompress(bool on) { decompress_ = on; }
-
-inline void ClientImpl::set_interface(const std::string &intf) {
-  interface_ = intf;
-}
-
-inline void ClientImpl::set_proxy(const std::string &host, int port) {
-  proxy_host_ = host;
-  proxy_port_ = port;
-}
-
-inline void ClientImpl::set_proxy_basic_auth(const std::string &username,
-                                             const std::string &password) {
-  proxy_basic_auth_username_ = username;
-  proxy_basic_auth_password_ = password;
-}
-
-inline void ClientImpl::set_proxy_bearer_token_auth(const std::string &token) {
-  proxy_bearer_token_auth_token_ = token;
-}
-
-#ifdef CPPHTTPLIB_OPENSSL_SUPPORT
-inline void ClientImpl::set_proxy_digest_auth(const std::string &username,
-                                              const std::string &password) {
-  proxy_digest_auth_username_ = username;
-  proxy_digest_auth_password_ = password;
-}
-#endif
-
-#ifdef CPPHTTPLIB_OPENSSL_SUPPORT
-inline void ClientImpl::set_ca_cert_path(const std::string &ca_cert_file_path,
-                                         const std::string &ca_cert_dir_path) {
-  ca_cert_file_path_ = ca_cert_file_path;
-  ca_cert_dir_path_ = ca_cert_dir_path;
-}
-
-inline void ClientImpl::set_ca_cert_store(X509_STORE *ca_cert_store) {
-  if (ca_cert_store && ca_cert_store != ca_cert_store_) {
-    ca_cert_store_ = ca_cert_store;
-  }
-}
-#endif
-
-#ifdef CPPHTTPLIB_OPENSSL_SUPPORT
-inline void ClientImpl::enable_server_certificate_verification(bool enabled) {
-  server_certificate_verification_ = enabled;
-}
-#endif
-
-inline void ClientImpl::set_logger(Logger logger) {
-  logger_ = std::move(logger);
-}
-
-/*
- * SSL Implementation
- */
-#ifdef CPPHTTPLIB_OPENSSL_SUPPORT
-namespace detail {
-
-template <typename U, typename V>
-inline SSL *ssl_new(socket_t sock, SSL_CTX *ctx, std::mutex &ctx_mutex,
-                    U SSL_connect_or_accept, V setup) {
-  SSL *ssl = nullptr;
-  {
-    std::lock_guard<std::mutex> guard(ctx_mutex);
-    ssl = SSL_new(ctx);
-  }
-
-  if (ssl) {
-    set_nonblocking(sock, true);
-    auto bio = BIO_new_socket(static_cast<int>(sock), BIO_NOCLOSE);
-    BIO_set_nbio(bio, 1);
-    SSL_set_bio(ssl, bio, bio);
-
-    if (!setup(ssl) || SSL_connect_or_accept(ssl) != 1) {
-      SSL_shutdown(ssl);
-      {
-        std::lock_guard<std::mutex> guard(ctx_mutex);
-        SSL_free(ssl);
-      }
-      set_nonblocking(sock, false);
-      return nullptr;
-    }
-    BIO_set_nbio(bio, 0);
-    set_nonblocking(sock, false);
-  }
-
-  return ssl;
-}
-
-inline void ssl_delete(std::mutex &ctx_mutex, SSL *ssl,
-                       bool shutdown_gracefully) {
-  // sometimes we may want to skip this to try to avoid SIGPIPE if we know
-  // the remote has closed the network connection
-  // Note that it is not always possible to avoid SIGPIPE, this is merely a
-  // best-efforts.
-  if (shutdown_gracefully) { SSL_shutdown(ssl); }
-
-  std::lock_guard<std::mutex> guard(ctx_mutex);
-  SSL_free(ssl);
-}
-
-template <typename U>
-bool ssl_connect_or_accept_nonblocking(socket_t sock, SSL *ssl,
-                                       U ssl_connect_or_accept,
-                                       time_t timeout_sec,
-                                       time_t timeout_usec) {
-  int res = 0;
-  while ((res = ssl_connect_or_accept(ssl)) != 1) {
-    auto err = SSL_get_error(ssl, res);
-    switch (err) {
-    case SSL_ERROR_WANT_READ:
-      if (select_read(sock, timeout_sec, timeout_usec) > 0) { continue; }
-      break;
-    case SSL_ERROR_WANT_WRITE:
-      if (select_write(sock, timeout_sec, timeout_usec) > 0) { continue; }
-      break;
-    default: break;
-    }
-    return false;
-  }
-  return true;
-}
-
-template <typename T>
-inline bool process_server_socket_ssl(
-    const std::atomic<socket_t> &svr_sock, SSL *ssl, socket_t sock,
-    size_t keep_alive_max_count, time_t keep_alive_timeout_sec,
-    time_t read_timeout_sec, time_t read_timeout_usec, time_t write_timeout_sec,
-    time_t write_timeout_usec, T callback) {
-  return process_server_socket_core(
-      svr_sock, sock, keep_alive_max_count, keep_alive_timeout_sec,
-      [&](bool close_connection, bool &connection_closed) {
-        SSLSocketStream strm(sock, ssl, read_timeout_sec, read_timeout_usec,
-                             write_timeout_sec, write_timeout_usec);
-        return callback(strm, close_connection, connection_closed);
-      });
-}
-
-template <typename T>
-inline bool
-process_client_socket_ssl(SSL *ssl, socket_t sock, time_t read_timeout_sec,
-                          time_t read_timeout_usec, time_t write_timeout_sec,
-                          time_t write_timeout_usec, T callback) {
-  SSLSocketStream strm(sock, ssl, read_timeout_sec, read_timeout_usec,
-                       write_timeout_sec, write_timeout_usec);
-  return callback(strm);
-}
-
-class SSLInit {
-public:
-  SSLInit() {
-    OPENSSL_init_ssl(
-        OPENSSL_INIT_LOAD_SSL_STRINGS | OPENSSL_INIT_LOAD_CRYPTO_STRINGS, NULL);
-  }
-};
-
-// SSL socket stream implementation
-inline SSLSocketStream::SSLSocketStream(socket_t sock, SSL *ssl,
-                                        time_t read_timeout_sec,
-                                        time_t read_timeout_usec,
-                                        time_t write_timeout_sec,
-                                        time_t write_timeout_usec)
-    : sock_(sock), ssl_(ssl), read_timeout_sec_(read_timeout_sec),
-      read_timeout_usec_(read_timeout_usec),
-      write_timeout_sec_(write_timeout_sec),
-      write_timeout_usec_(write_timeout_usec) {
-  SSL_clear_mode(ssl, SSL_MODE_AUTO_RETRY);
-}
-
-inline SSLSocketStream::~SSLSocketStream() {}
-
-inline bool SSLSocketStream::is_readable() const {
-  return detail::select_read(sock_, read_timeout_sec_, read_timeout_usec_) > 0;
-}
-
-inline bool SSLSocketStream::is_writable() const {
-  return select_write(sock_, write_timeout_sec_, write_timeout_usec_) > 0 &&
-         is_socket_alive(sock_);
-}
-
-inline ssize_t SSLSocketStream::read(char *ptr, size_t size) {
-  if (SSL_pending(ssl_) > 0) {
-    return SSL_read(ssl_, ptr, static_cast<int>(size));
-  } else if (is_readable()) {
-    auto ret = SSL_read(ssl_, ptr, static_cast<int>(size));
-    if (ret < 0) {
-      auto err = SSL_get_error(ssl_, ret);
-      int n = 1000;
-#ifdef _WIN32
-      while (--n >= 0 && (err == SSL_ERROR_WANT_READ ||
-                          (err == SSL_ERROR_SYSCALL &&
-                           WSAGetLastError() == WSAETIMEDOUT))) {
-#else
-      while (--n >= 0 && err == SSL_ERROR_WANT_READ) {
-#endif
-        if (SSL_pending(ssl_) > 0) {
-          return SSL_read(ssl_, ptr, static_cast<int>(size));
-        } else if (is_readable()) {
-          std::this_thread::sleep_for(std::chrono::milliseconds(1));
-          ret = SSL_read(ssl_, ptr, static_cast<int>(size));
-          if (ret >= 0) { return ret; }
-          err = SSL_get_error(ssl_, ret);
-        } else {
-          return -1;
-        }
-      }
-    }
-    return ret;
-  }
-  return -1;
-}
-
-inline ssize_t SSLSocketStream::write(const char *ptr, size_t size) {
-  if (is_writable()) {
-    auto handle_size = static_cast<int>(
-        std::min<size_t>(size, (std::numeric_limits<int>::max)()));
-
-    auto ret = SSL_write(ssl_, ptr, static_cast<int>(handle_size));
-    if (ret < 0) {
-      auto err = SSL_get_error(ssl_, ret);
-      int n = 1000;
-#ifdef _WIN32
-      while (--n >= 0 && (err == SSL_ERROR_WANT_WRITE ||
-                          (err == SSL_ERROR_SYSCALL &&
-                           WSAGetLastError() == WSAETIMEDOUT))) {
-#else
-      while (--n >= 0 && err == SSL_ERROR_WANT_WRITE) {
-#endif
-        if (is_writable()) {
-          std::this_thread::sleep_for(std::chrono::milliseconds(1));
-          ret = SSL_write(ssl_, ptr, static_cast<int>(handle_size));
-          if (ret >= 0) { return ret; }
-          err = SSL_get_error(ssl_, ret);
-        } else {
-          return -1;
-        }
-      }
-    }
-    return ret;
-  }
-  return -1;
-}
-
-inline void SSLSocketStream::get_remote_ip_and_port(std::string &ip,
-                                                    int &port) const {
-  detail::get_remote_ip_and_port(sock_, ip, port);
-}
-
-inline void SSLSocketStream::get_local_ip_and_port(std::string &ip,
-                                                   int &port) const {
-  detail::get_local_ip_and_port(sock_, ip, port);
-}
-
-inline socket_t SSLSocketStream::socket() const { return sock_; }
-
-static SSLInit sslinit_;
-
-} // namespace detail
-
-// SSL HTTP server implementation
-inline SSLServer::SSLServer(const char *cert_path, const char *private_key_path,
-                            const char *client_ca_cert_file_path,
-                            const char *client_ca_cert_dir_path,
-                            const char *private_key_password) {
-  ctx_ = SSL_CTX_new(TLS_server_method());
-
-  if (ctx_) {
-    SSL_CTX_set_options(ctx_,
-                        SSL_OP_NO_COMPRESSION |
-                            SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION);
-
-    SSL_CTX_set_min_proto_version(ctx_, TLS1_1_VERSION);
-
-    // add default password callback before opening encrypted private key
-    if (private_key_password != nullptr && (private_key_password[0] != '\0')) {
-      SSL_CTX_set_default_passwd_cb_userdata(ctx_,
-                                             (char *)private_key_password);
-    }
-
-    if (SSL_CTX_use_certificate_chain_file(ctx_, cert_path) != 1 ||
-        SSL_CTX_use_PrivateKey_file(ctx_, private_key_path, SSL_FILETYPE_PEM) !=
-            1) {
-      SSL_CTX_free(ctx_);
-      ctx_ = nullptr;
-    } else if (client_ca_cert_file_path || client_ca_cert_dir_path) {
-      SSL_CTX_load_verify_locations(ctx_, client_ca_cert_file_path,
-                                    client_ca_cert_dir_path);
-
-      SSL_CTX_set_verify(
-          ctx_, SSL_VERIFY_PEER | SSL_VERIFY_FAIL_IF_NO_PEER_CERT, nullptr);
-    }
-  }
-}
-
-inline SSLServer::SSLServer(X509 *cert, EVP_PKEY *private_key,
-                            X509_STORE *client_ca_cert_store) {
-  ctx_ = SSL_CTX_new(TLS_server_method());
-
-  if (ctx_) {
-    SSL_CTX_set_options(ctx_,
-                        SSL_OP_NO_COMPRESSION |
-                            SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION);
-
-    SSL_CTX_set_min_proto_version(ctx_, TLS1_1_VERSION);
-
-    if (SSL_CTX_use_certificate(ctx_, cert) != 1 ||
-        SSL_CTX_use_PrivateKey(ctx_, private_key) != 1) {
-      SSL_CTX_free(ctx_);
-      ctx_ = nullptr;
-    } else if (client_ca_cert_store) {
-      SSL_CTX_set_cert_store(ctx_, client_ca_cert_store);
-
-      SSL_CTX_set_verify(
-          ctx_, SSL_VERIFY_PEER | SSL_VERIFY_FAIL_IF_NO_PEER_CERT, nullptr);
-    }
-  }
-}
-
-inline SSLServer::SSLServer(
-    const std::function<bool(SSL_CTX &ssl_ctx)> &setup_ssl_ctx_callback) {
-  ctx_ = SSL_CTX_new(TLS_method());
-  if (ctx_) {
-    if (!setup_ssl_ctx_callback(*ctx_)) {
-      SSL_CTX_free(ctx_);
-      ctx_ = nullptr;
-    }
-  }
-}
-
-inline SSLServer::~SSLServer() {
-  if (ctx_) { SSL_CTX_free(ctx_); }
-}
-
-inline bool SSLServer::is_valid() const { return ctx_; }
-
-inline SSL_CTX *SSLServer::ssl_context() const { return ctx_; }
-
-inline bool SSLServer::process_and_close_socket(socket_t sock) {
-  auto ssl = detail::ssl_new(
-      sock, ctx_, ctx_mutex_,
-      [&](SSL *ssl2) {
-        return detail::ssl_connect_or_accept_nonblocking(
-            sock, ssl2, SSL_accept, read_timeout_sec_, read_timeout_usec_);
-      },
-      [](SSL * /*ssl2*/) { return true; });
-
-  auto ret = false;
-  if (ssl) {
-    ret = detail::process_server_socket_ssl(
-        svr_sock_, ssl, sock, keep_alive_max_count_, keep_alive_timeout_sec_,
-        read_timeout_sec_, read_timeout_usec_, write_timeout_sec_,
-        write_timeout_usec_,
-        [this, ssl](Stream &strm, bool close_connection,
-                    bool &connection_closed) {
-          return process_request(strm, close_connection, connection_closed,
-                                 [&](Request &req) { req.ssl = ssl; });
-        });
-
-    // Shutdown gracefully if the result seemed successful, non-gracefully if
-    // the connection appeared to be closed.
-    const bool shutdown_gracefully = ret;
-    detail::ssl_delete(ctx_mutex_, ssl, shutdown_gracefully);
-  }
-
-  detail::shutdown_socket(sock);
-  detail::close_socket(sock);
-  return ret;
-}
-
-// SSL HTTP client implementation
-inline SSLClient::SSLClient(const std::string &host)
-    : SSLClient(host, 443, std::string(), std::string()) {}
-
-inline SSLClient::SSLClient(const std::string &host, int port)
-    : SSLClient(host, port, std::string(), std::string()) {}
-
-inline SSLClient::SSLClient(const std::string &host, int port,
-                            const std::string &client_cert_path,
-                            const std::string &client_key_path)
-    : ClientImpl(host, port, client_cert_path, client_key_path) {
-  ctx_ = SSL_CTX_new(TLS_client_method());
-
-  detail::split(&host_[0], &host_[host_.size()], '.',
-                [&](const char *b, const char *e) {
-                  host_components_.emplace_back(std::string(b, e));
-                });
-
-  if (!client_cert_path.empty() && !client_key_path.empty()) {
-    if (SSL_CTX_use_certificate_file(ctx_, client_cert_path.c_str(),
-                                     SSL_FILETYPE_PEM) != 1 ||
-        SSL_CTX_use_PrivateKey_file(ctx_, client_key_path.c_str(),
-                                    SSL_FILETYPE_PEM) != 1) {
-      SSL_CTX_free(ctx_);
-      ctx_ = nullptr;
-    }
-  }
-}
-
-inline SSLClient::SSLClient(const std::string &host, int port,
-                            X509 *client_cert, EVP_PKEY *client_key)
-    : ClientImpl(host, port) {
-  ctx_ = SSL_CTX_new(TLS_client_method());
-
-  detail::split(&host_[0], &host_[host_.size()], '.',
-                [&](const char *b, const char *e) {
-                  host_components_.emplace_back(std::string(b, e));
-                });
-
-  if (client_cert != nullptr && client_key != nullptr) {
-    if (SSL_CTX_use_certificate(ctx_, client_cert) != 1 ||
-        SSL_CTX_use_PrivateKey(ctx_, client_key) != 1) {
-      SSL_CTX_free(ctx_);
-      ctx_ = nullptr;
-    }
-  }
-}
-
-inline SSLClient::~SSLClient() {
-  if (ctx_) { SSL_CTX_free(ctx_); }
-  // Make sure to shut down SSL since shutdown_ssl will resolve to the
-  // base function rather than the derived function once we get to the
-  // base class destructor, and won't free the SSL (causing a leak).
-  shutdown_ssl_impl(socket_, true);
-}
-
-inline bool SSLClient::is_valid() const { return ctx_; }
-
-inline void SSLClient::set_ca_cert_store(X509_STORE *ca_cert_store) {
-  if (ca_cert_store) {
-    if (ctx_) {
-      if (SSL_CTX_get_cert_store(ctx_) != ca_cert_store) {
-        // Free memory allocated for old cert and use new store `ca_cert_store`
-        SSL_CTX_set_cert_store(ctx_, ca_cert_store);
-      }
-    } else {
-      X509_STORE_free(ca_cert_store);
-    }
-  }
-}
-
-inline long SSLClient::get_openssl_verify_result() const {
-  return verify_result_;
-}
-
-inline SSL_CTX *SSLClient::ssl_context() const { return ctx_; }
-
-inline bool SSLClient::create_and_connect_socket(Socket &socket, Error &error) {
-  return is_valid() && ClientImpl::create_and_connect_socket(socket, error);
-}
-
-// Assumes that socket_mutex_ is locked and that there are no requests in flight
-inline bool SSLClient::connect_with_proxy(Socket &socket, Response &res,
-                                          bool &success, Error &error) {
-  success = true;
-  Response res2;
-  if (!detail::process_client_socket(
-          socket.sock, read_timeout_sec_, read_timeout_usec_,
-          write_timeout_sec_, write_timeout_usec_, [&](Stream &strm) {
-            Request req2;
-            req2.method = "CONNECT";
-            req2.path = host_and_port_;
-            return process_request(strm, req2, res2, false, error);
-          })) {
-    // Thread-safe to close everything because we are assuming there are no
-    // requests in flight
-    shutdown_ssl(socket, true);
-    shutdown_socket(socket);
-    close_socket(socket);
-    success = false;
-    return false;
-  }
-
-  if (res2.status == 407) {
-    if (!proxy_digest_auth_username_.empty() &&
-        !proxy_digest_auth_password_.empty()) {
-      std::map<std::string, std::string> auth;
-      if (detail::parse_www_authenticate(res2, auth, true)) {
-        Response res3;
-        if (!detail::process_client_socket(
-                socket.sock, read_timeout_sec_, read_timeout_usec_,
-                write_timeout_sec_, write_timeout_usec_, [&](Stream &strm) {
-                  Request req3;
-                  req3.method = "CONNECT";
-                  req3.path = host_and_port_;
-                  req3.headers.insert(detail::make_digest_authentication_header(
-                      req3, auth, 1, detail::random_string(10),
-                      proxy_digest_auth_username_, proxy_digest_auth_password_,
-                      true));
-                  return process_request(strm, req3, res3, false, error);
-                })) {
-          // Thread-safe to close everything because we are assuming there are
-          // no requests in flight
-          shutdown_ssl(socket, true);
-          shutdown_socket(socket);
-          close_socket(socket);
-          success = false;
-          return false;
-        }
-      }
-    } else {
-      res = res2;
-      return false;
-    }
-  }
-
-  return true;
-}
-
-inline bool SSLClient::load_certs() {
-  bool ret = true;
-
-  std::call_once(initialize_cert_, [&]() {
-    std::lock_guard<std::mutex> guard(ctx_mutex_);
-    if (!ca_cert_file_path_.empty()) {
-      if (!SSL_CTX_load_verify_locations(ctx_, ca_cert_file_path_.c_str(),
-                                         nullptr)) {
-        ret = false;
-      }
-    } else if (!ca_cert_dir_path_.empty()) {
-      if (!SSL_CTX_load_verify_locations(ctx_, nullptr,
-                                         ca_cert_dir_path_.c_str())) {
-        ret = false;
-      }
-    } else {
-      auto loaded = false;
-#ifdef _WIN32
-      loaded =
-          detail::load_system_certs_on_windows(SSL_CTX_get_cert_store(ctx_));
-#elif defined(CPPHTTPLIB_USE_CERTS_FROM_MACOSX_KEYCHAIN) && defined(__APPLE__)
-#if TARGET_OS_OSX
-      loaded = detail::load_system_certs_on_macos(SSL_CTX_get_cert_store(ctx_));
-#endif // TARGET_OS_OSX
-#endif // _WIN32
-      if (!loaded) { SSL_CTX_set_default_verify_paths(ctx_); }
-    }
-  });
-
-  return ret;
-}
-
-inline bool SSLClient::initialize_ssl(Socket &socket, Error &error) {
-  auto ssl = detail::ssl_new(
-      socket.sock, ctx_, ctx_mutex_,
-      [&](SSL *ssl2) {
-        if (server_certificate_verification_) {
-          if (!load_certs()) {
-            error = Error::SSLLoadingCerts;
-            return false;
-          }
-          SSL_set_verify(ssl2, SSL_VERIFY_NONE, nullptr);
-        }
-
-        if (!detail::ssl_connect_or_accept_nonblocking(
-                socket.sock, ssl2, SSL_connect, connection_timeout_sec_,
-                connection_timeout_usec_)) {
-          error = Error::SSLConnection;
-          return false;
-        }
-
-        if (server_certificate_verification_) {
-          verify_result_ = SSL_get_verify_result(ssl2);
-
-          if (verify_result_ != X509_V_OK) {
-            error = Error::SSLServerVerification;
-            return false;
-          }
-
-          auto server_cert = SSL_get1_peer_certificate(ssl2);
-
-          if (server_cert == nullptr) {
-            error = Error::SSLServerVerification;
-            return false;
-          }
-
-          if (!verify_host(server_cert)) {
-            X509_free(server_cert);
-            error = Error::SSLServerVerification;
-            return false;
-          }
-          X509_free(server_cert);
-        }
-
-        return true;
-      },
-      [&](SSL *ssl2) {
-        SSL_set_tlsext_host_name(ssl2, host_.c_str());
-        return true;
-      });
-
-  if (ssl) {
-    socket.ssl = ssl;
-    return true;
-  }
-
-  shutdown_socket(socket);
-  close_socket(socket);
-  return false;
-}
-
-inline void SSLClient::shutdown_ssl(Socket &socket, bool shutdown_gracefully) {
-  shutdown_ssl_impl(socket, shutdown_gracefully);
-}
-
-inline void SSLClient::shutdown_ssl_impl(Socket &socket,
-                                         bool shutdown_gracefully) {
-  if (socket.sock == INVALID_SOCKET) {
-    assert(socket.ssl == nullptr);
-    return;
-  }
-  if (socket.ssl) {
-    detail::ssl_delete(ctx_mutex_, socket.ssl, shutdown_gracefully);
-    socket.ssl = nullptr;
-  }
-  assert(socket.ssl == nullptr);
-}
-
-inline bool
-SSLClient::process_socket(const Socket &socket,
-                          std::function<bool(Stream &strm)> callback) {
-  assert(socket.ssl);
-  return detail::process_client_socket_ssl(
-      socket.ssl, socket.sock, read_timeout_sec_, read_timeout_usec_,
-      write_timeout_sec_, write_timeout_usec_, std::move(callback));
-}
-
-inline bool SSLClient::is_ssl() const { return true; }
-
-inline bool SSLClient::verify_host(X509 *server_cert) const {
-  /* Quote from RFC2818 section 3.1 "Server Identity"
-
-     If a subjectAltName extension of type dNSName is present, that MUST
-     be used as the identity. Otherwise, the (most specific) Common Name
-     field in the Subject field of the certificate MUST be used. Although
-     the use of the Common Name is existing practice, it is deprecated and
-     Certification Authorities are encouraged to use the dNSName instead.
-
-     Matching is performed using the matching rules specified by
-     [RFC2459]. If more than one identity of a given type is present in
-     the certificate (e.g., more than one dNSName name, a match in any one
-     of the set is considered acceptable.) Names may contain the wildcard
-     character * which is considered to match any single domain name
-     component or component fragment. E.g., *.a.com matches foo.a.com but
-     not bar.foo.a.com. f*.com matches foo.com but not bar.com.
-
-     In some cases, the URI is specified as an IP address rather than a
-     hostname. In this case, the iPAddress subjectAltName must be present
-     in the certificate and must exactly match the IP in the URI.
-
-  */
-  return verify_host_with_subject_alt_name(server_cert) ||
-         verify_host_with_common_name(server_cert);
-}
-
-inline bool
-SSLClient::verify_host_with_subject_alt_name(X509 *server_cert) const {
-  auto ret = false;
-
-  auto type = GEN_DNS;
-
-  struct in6_addr addr6;
-  struct in_addr addr;
-  size_t addr_len = 0;
-
-#ifndef __MINGW32__
-  if (inet_pton(AF_INET6, host_.c_str(), &addr6)) {
-    type = GEN_IPADD;
-    addr_len = sizeof(struct in6_addr);
-  } else if (inet_pton(AF_INET, host_.c_str(), &addr)) {
-    type = GEN_IPADD;
-    addr_len = sizeof(struct in_addr);
-  }
-#endif
-
-  auto alt_names = static_cast<const struct stack_st_GENERAL_NAME *>(
-      X509_get_ext_d2i(server_cert, NID_subject_alt_name, nullptr, nullptr));
-
-  if (alt_names) {
-    auto dsn_matched = false;
-    auto ip_matched = false;
-
-    auto count = sk_GENERAL_NAME_num(alt_names);
-
-    for (decltype(count) i = 0; i < count && !dsn_matched; i++) {
-      auto val = sk_GENERAL_NAME_value(alt_names, i);
-      if (val->type == type) {
-        auto name = (const char *)ASN1_STRING_get0_data(val->d.ia5);
-        auto name_len = (size_t)ASN1_STRING_length(val->d.ia5);
-
-        switch (type) {
-        case GEN_DNS: dsn_matched = check_host_name(name, name_len); break;
-
-        case GEN_IPADD:
-          if (!memcmp(&addr6, name, addr_len) ||
-              !memcmp(&addr, name, addr_len)) {
-            ip_matched = true;
-          }
-          break;
-        }
-      }
-    }
-
-    if (dsn_matched || ip_matched) { ret = true; }
-  }
-
-  GENERAL_NAMES_free((STACK_OF(GENERAL_NAME) *)alt_names);
-  return ret;
-}
-
-inline bool SSLClient::verify_host_with_common_name(X509 *server_cert) const {
-  const auto subject_name = X509_get_subject_name(server_cert);
-
-  if (subject_name != nullptr) {
-    char name[BUFSIZ];
-    auto name_len = X509_NAME_get_text_by_NID(subject_name, NID_commonName,
-                                              name, sizeof(name));
-
-    if (name_len != -1) {
-      return check_host_name(name, static_cast<size_t>(name_len));
-    }
-  }
-
-  return false;
-}
-
-inline bool SSLClient::check_host_name(const char *pattern,
-                                       size_t pattern_len) const {
-  if (host_.size() == pattern_len && host_ == pattern) { return true; }
-
-  // Wildcard match
-  // https://bugs.launchpad.net/ubuntu/+source/firefox-3.0/+bug/376484
-  std::vector<std::string> pattern_components;
-  detail::split(&pattern[0], &pattern[pattern_len], '.',
-                [&](const char *b, const char *e) {
-                  pattern_components.emplace_back(std::string(b, e));
-                });
-
-  if (host_components_.size() != pattern_components.size()) { return false; }
-
-  auto itr = pattern_components.begin();
-  for (const auto &h : host_components_) {
-    auto &p = *itr;
-    if (p != h && p != "*") {
-      auto partial_match = (p.size() > 0 && p[p.size() - 1] == '*' &&
-                            !p.compare(0, p.size() - 1, h));
-      if (!partial_match) { return false; }
-    }
-    ++itr;
-  }
-
-  return true;
-}
-#endif
-
-// Universal client implementation
-inline Client::Client(const std::string &scheme_host_port)
-    : Client(scheme_host_port, std::string(), std::string()) {}
-
-inline Client::Client(const std::string &scheme_host_port,
-                      const std::string &client_cert_path,
-                      const std::string &client_key_path) {
-  const static std::regex re(
-      R"((?:([a-z]+):\/\/)?(?:\[([\d:]+)\]|([^:/?#]+))(?::(\d+))?)");
-
-  std::smatch m;
-  if (std::regex_match(scheme_host_port, m, re)) {
-    auto scheme = m[1].str();
-
-#ifdef CPPHTTPLIB_OPENSSL_SUPPORT
-    if (!scheme.empty() && (scheme != "http" && scheme != "https")) {
-#else
-    if (!scheme.empty() && scheme != "http") {
-#endif
-#ifndef CPPHTTPLIB_NO_EXCEPTIONS
-      std::string msg = "'" + scheme + "' scheme is not supported.";
-      throw std::invalid_argument(msg);
-#endif
-      return;
-    }
-
-    auto is_ssl = scheme == "https";
-
-    auto host = m[2].str();
-    if (host.empty()) { host = m[3].str(); }
-
-    auto port_str = m[4].str();
-    auto port = !port_str.empty() ? std::stoi(port_str) : (is_ssl ? 443 : 80);
-
-    if (is_ssl) {
-#ifdef CPPHTTPLIB_OPENSSL_SUPPORT
-      cli_ = detail::make_unique<SSLClient>(host, port, client_cert_path,
-                                            client_key_path);
-      is_ssl_ = is_ssl;
-#endif
-    } else {
-      cli_ = detail::make_unique<ClientImpl>(host, port, client_cert_path,
-                                             client_key_path);
-    }
-  } else {
-    cli_ = detail::make_unique<ClientImpl>(scheme_host_port, 80,
-                                           client_cert_path, client_key_path);
-  }
-}
-
-inline Client::Client(const std::string &host, int port)
-    : cli_(detail::make_unique<ClientImpl>(host, port)) {}
-
-inline Client::Client(const std::string &host, int port,
-                      const std::string &client_cert_path,
-                      const std::string &client_key_path)
-    : cli_(detail::make_unique<ClientImpl>(host, port, client_cert_path,
-                                           client_key_path)) {}
-
-inline Client::~Client() {}
-
-inline bool Client::is_valid() const {
-  return cli_ != nullptr && cli_->is_valid();
-}
-
-inline Result Client::Get(const std::string &path) { return cli_->Get(path); }
-inline Result Client::Get(const std::string &path, const Headers &headers) {
-  return cli_->Get(path, headers);
-}
-inline Result Client::Get(const std::string &path, Progress progress) {
-  return cli_->Get(path, std::move(progress));
-}
-inline Result Client::Get(const std::string &path, const Headers &headers,
-                          Progress progress) {
-  return cli_->Get(path, headers, std::move(progress));
-}
-inline Result Client::Get(const std::string &path,
-                          ContentReceiver content_receiver) {
-  return cli_->Get(path, std::move(content_receiver));
-}
-inline Result Client::Get(const std::string &path, const Headers &headers,
-                          ContentReceiver content_receiver) {
-  return cli_->Get(path, headers, std::move(content_receiver));
-}
-inline Result Client::Get(const std::string &path,
-                          ContentReceiver content_receiver, Progress progress) {
-  return cli_->Get(path, std::move(content_receiver), std::move(progress));
-}
-inline Result Client::Get(const std::string &path, const Headers &headers,
-                          ContentReceiver content_receiver, Progress progress) {
-  return cli_->Get(path, headers, std::move(content_receiver),
-                   std::move(progress));
-}
-inline Result Client::Get(const std::string &path,
-                          ResponseHandler response_handler,
-                          ContentReceiver content_receiver) {
-  return cli_->Get(path, std::move(response_handler),
-                   std::move(content_receiver));
-}
-inline Result Client::Get(const std::string &path, const Headers &headers,
-                          ResponseHandler response_handler,
-                          ContentReceiver content_receiver) {
-  return cli_->Get(path, headers, std::move(response_handler),
-                   std::move(content_receiver));
-}
-inline Result Client::Get(const std::string &path,
-                          ResponseHandler response_handler,
-                          ContentReceiver content_receiver, Progress progress) {
-  return cli_->Get(path, std::move(response_handler),
-                   std::move(content_receiver), std::move(progress));
-}
-inline Result Client::Get(const std::string &path, const Headers &headers,
-                          ResponseHandler response_handler,
-                          ContentReceiver content_receiver, Progress progress) {
-  return cli_->Get(path, headers, std::move(response_handler),
-                   std::move(content_receiver), std::move(progress));
-}
-inline Result Client::Get(const std::string &path, const Params &params,
-                          const Headers &headers, Progress progress) {
-  return cli_->Get(path, params, headers, progress);
-}
-inline Result Client::Get(const std::string &path, const Params &params,
-                          const Headers &headers,
-                          ContentReceiver content_receiver, Progress progress) {
-  return cli_->Get(path, params, headers, content_receiver, progress);
-}
-inline Result Client::Get(const std::string &path, const Params &params,
-                          const Headers &headers,
-                          ResponseHandler response_handler,
-                          ContentReceiver content_receiver, Progress progress) {
-  return cli_->Get(path, params, headers, response_handler, content_receiver,
-                   progress);
-}
-
-inline Result Client::Head(const std::string &path) { return cli_->Head(path); }
-inline Result Client::Head(const std::string &path, const Headers &headers) {
-  return cli_->Head(path, headers);
-}
-
-inline Result Client::Post(const std::string &path) { return cli_->Post(path); }
-inline Result Client::Post(const std::string &path, const Headers &headers) {
-  return cli_->Post(path, headers);
-}
-inline Result Client::Post(const std::string &path, const char *body,
-                           size_t content_length,
-                           const std::string &content_type) {
-  return cli_->Post(path, body, content_length, content_type);
-}
-inline Result Client::Post(const std::string &path, const Headers &headers,
-                           const char *body, size_t content_length,
-                           const std::string &content_type) {
-  return cli_->Post(path, headers, body, content_length, content_type);
-}
-inline Result Client::Post(const std::string &path, const std::string &body,
-                           const std::string &content_type) {
-  return cli_->Post(path, body, content_type);
-}
-inline Result Client::Post(const std::string &path, const Headers &headers,
-                           const std::string &body,
-                           const std::string &content_type) {
-  return cli_->Post(path, headers, body, content_type);
-}
-inline Result Client::Post(const std::string &path, size_t content_length,
-                           ContentProvider content_provider,
-                           const std::string &content_type) {
-  return cli_->Post(path, content_length, std::move(content_provider),
-                    content_type);
-}
-inline Result Client::Post(const std::string &path,
-                           ContentProviderWithoutLength content_provider,
-                           const std::string &content_type) {
-  return cli_->Post(path, std::move(content_provider), content_type);
-}
-inline Result Client::Post(const std::string &path, const Headers &headers,
-                           size_t content_length,
-                           ContentProvider content_provider,
-                           const std::string &content_type) {
-  return cli_->Post(path, headers, content_length, std::move(content_provider),
-                    content_type);
-}
-inline Result Client::Post(const std::string &path, const Headers &headers,
-                           ContentProviderWithoutLength content_provider,
-                           const std::string &content_type) {
-  return cli_->Post(path, headers, std::move(content_provider), content_type);
-}
-inline Result Client::Post(const std::string &path, const Params &params) {
-  return cli_->Post(path, params);
-}
-inline Result Client::Post(const std::string &path, const Headers &headers,
-                           const Params &params) {
-  return cli_->Post(path, headers, params);
-}
-inline Result Client::Post(const std::string &path,
-                           const MultipartFormDataItems &items) {
-  return cli_->Post(path, items);
-}
-inline Result Client::Post(const std::string &path, const Headers &headers,
-                           const MultipartFormDataItems &items) {
-  return cli_->Post(path, headers, items);
-}
-inline Result Client::Post(const std::string &path, const Headers &headers,
-                           const MultipartFormDataItems &items,
-                           const std::string &boundary) {
-  return cli_->Post(path, headers, items, boundary);
-}
-inline Result
-Client::Post(const std::string &path, const Headers &headers,
-             const MultipartFormDataItems &items,
-             const MultipartFormDataProviderItems &provider_items) {
-  return cli_->Post(path, headers, items, provider_items);
-}
-inline Result Client::Put(const std::string &path) { return cli_->Put(path); }
-inline Result Client::Put(const std::string &path, const char *body,
-                          size_t content_length,
-                          const std::string &content_type) {
-  return cli_->Put(path, body, content_length, content_type);
-}
-inline Result Client::Put(const std::string &path, const Headers &headers,
-                          const char *body, size_t content_length,
-                          const std::string &content_type) {
-  return cli_->Put(path, headers, body, content_length, content_type);
-}
-inline Result Client::Put(const std::string &path, const std::string &body,
-                          const std::string &content_type) {
-  return cli_->Put(path, body, content_type);
-}
-inline Result Client::Put(const std::string &path, const Headers &headers,
-                          const std::string &body,
-                          const std::string &content_type) {
-  return cli_->Put(path, headers, body, content_type);
-}
-inline Result Client::Put(const std::string &path, size_t content_length,
-                          ContentProvider content_provider,
-                          const std::string &content_type) {
-  return cli_->Put(path, content_length, std::move(content_provider),
-                   content_type);
-}
-inline Result Client::Put(const std::string &path,
-                          ContentProviderWithoutLength content_provider,
-                          const std::string &content_type) {
-  return cli_->Put(path, std::move(content_provider), content_type);
-}
-inline Result Client::Put(const std::string &path, const Headers &headers,
-                          size_t content_length,
-                          ContentProvider content_provider,
-                          const std::string &content_type) {
-  return cli_->Put(path, headers, content_length, std::move(content_provider),
-                   content_type);
-}
-inline Result Client::Put(const std::string &path, const Headers &headers,
-                          ContentProviderWithoutLength content_provider,
-                          const std::string &content_type) {
-  return cli_->Put(path, headers, std::move(content_provider), content_type);
-}
-inline Result Client::Put(const std::string &path, const Params &params) {
-  return cli_->Put(path, params);
-}
-inline Result Client::Put(const std::string &path, const Headers &headers,
-                          const Params &params) {
-  return cli_->Put(path, headers, params);
-}
-inline Result Client::Put(const std::string &path,
-                          const MultipartFormDataItems &items) {
-  return cli_->Put(path, items);
-}
-inline Result Client::Put(const std::string &path, const Headers &headers,
-                          const MultipartFormDataItems &items) {
-  return cli_->Put(path, headers, items);
-}
-inline Result Client::Put(const std::string &path, const Headers &headers,
-                          const MultipartFormDataItems &items,
-                          const std::string &boundary) {
-  return cli_->Put(path, headers, items, boundary);
-}
-inline Result
-Client::Put(const std::string &path, const Headers &headers,
-            const MultipartFormDataItems &items,
-            const MultipartFormDataProviderItems &provider_items) {
-  return cli_->Put(path, headers, items, provider_items);
-}
-inline Result Client::Patch(const std::string &path) {
-  return cli_->Patch(path);
-}
-inline Result Client::Patch(const std::string &path, const char *body,
-                            size_t content_length,
-                            const std::string &content_type) {
-  return cli_->Patch(path, body, content_length, content_type);
-}
-inline Result Client::Patch(const std::string &path, const Headers &headers,
-                            const char *body, size_t content_length,
-                            const std::string &content_type) {
-  return cli_->Patch(path, headers, body, content_length, content_type);
-}
-inline Result Client::Patch(const std::string &path, const std::string &body,
-                            const std::string &content_type) {
-  return cli_->Patch(path, body, content_type);
-}
-inline Result Client::Patch(const std::string &path, const Headers &headers,
const std::string &body, - const std::string &content_type) { - return cli_->Patch(path, headers, body, content_type); -} -inline Result Client::Patch(const std::string &path, size_t content_length, - ContentProvider content_provider, - const std::string &content_type) { - return cli_->Patch(path, content_length, std::move(content_provider), - content_type); -} -inline Result Client::Patch(const std::string &path, - ContentProviderWithoutLength content_provider, - const std::string &content_type) { - return cli_->Patch(path, std::move(content_provider), content_type); -} -inline Result Client::Patch(const std::string &path, const Headers &headers, - size_t content_length, - ContentProvider content_provider, - const std::string &content_type) { - return cli_->Patch(path, headers, content_length, std::move(content_provider), - content_type); -} -inline Result Client::Patch(const std::string &path, const Headers &headers, - ContentProviderWithoutLength content_provider, - const std::string &content_type) { - return cli_->Patch(path, headers, std::move(content_provider), content_type); -} -inline Result Client::Delete(const std::string &path) { - return cli_->Delete(path); -} -inline Result Client::Delete(const std::string &path, const Headers &headers) { - return cli_->Delete(path, headers); -} -inline Result Client::Delete(const std::string &path, const char *body, - size_t content_length, - const std::string &content_type) { - return cli_->Delete(path, body, content_length, content_type); -} -inline Result Client::Delete(const std::string &path, const Headers &headers, - const char *body, size_t content_length, - const std::string &content_type) { - return cli_->Delete(path, headers, body, content_length, content_type); -} -inline Result Client::Delete(const std::string &path, const std::string &body, - const std::string &content_type) { - return cli_->Delete(path, body, content_type); -} -inline Result Client::Delete(const std::string &path, const Headers &headers, - const std::string &body, - const std::string &content_type) { - return cli_->Delete(path, headers, body, content_type); -} -inline Result Client::Options(const std::string &path) { - return cli_->Options(path); -} -inline Result Client::Options(const std::string &path, const Headers &headers) { - return cli_->Options(path, headers); -} - -inline bool Client::send(Request &req, Response &res, Error &error) { - return cli_->send(req, res, error); -} - -inline Result Client::send(const Request &req) { return cli_->send(req); } - -inline size_t Client::is_socket_open() const { return cli_->is_socket_open(); } - -inline socket_t Client::socket() const { return cli_->socket(); } - -inline void Client::stop() { cli_->stop(); } - -inline void -Client::set_hostname_addr_map(std::map addr_map) { - cli_->set_hostname_addr_map(std::move(addr_map)); -} - -inline void Client::set_default_headers(Headers headers) { - cli_->set_default_headers(std::move(headers)); -} - -inline void Client::set_address_family(int family) { - cli_->set_address_family(family); -} - -inline void Client::set_tcp_nodelay(bool on) { cli_->set_tcp_nodelay(on); } - -inline void Client::set_socket_options(SocketOptions socket_options) { - cli_->set_socket_options(std::move(socket_options)); -} - -inline void Client::set_connection_timeout(time_t sec, time_t usec) { - cli_->set_connection_timeout(sec, usec); -} - -inline void Client::set_read_timeout(time_t sec, time_t usec) { - cli_->set_read_timeout(sec, usec); -} - -inline void Client::set_write_timeout(time_t sec, 
time_t usec) { - cli_->set_write_timeout(sec, usec); -} - -inline void Client::set_basic_auth(const std::string &username, - const std::string &password) { - cli_->set_basic_auth(username, password); -} -inline void Client::set_bearer_token_auth(const std::string &token) { - cli_->set_bearer_token_auth(token); -} -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT -inline void Client::set_digest_auth(const std::string &username, - const std::string &password) { - cli_->set_digest_auth(username, password); -} -#endif - -inline void Client::set_keep_alive(bool on) { cli_->set_keep_alive(on); } -inline void Client::set_follow_location(bool on) { - cli_->set_follow_location(on); -} - -inline void Client::set_url_encode(bool on) { cli_->set_url_encode(on); } - -inline void Client::set_compress(bool on) { cli_->set_compress(on); } - -inline void Client::set_decompress(bool on) { cli_->set_decompress(on); } - -inline void Client::set_interface(const std::string &intf) { - cli_->set_interface(intf); -} - -inline void Client::set_proxy(const std::string &host, int port) { - cli_->set_proxy(host, port); -} -inline void Client::set_proxy_basic_auth(const std::string &username, - const std::string &password) { - cli_->set_proxy_basic_auth(username, password); -} -inline void Client::set_proxy_bearer_token_auth(const std::string &token) { - cli_->set_proxy_bearer_token_auth(token); -} -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT -inline void Client::set_proxy_digest_auth(const std::string &username, - const std::string &password) { - cli_->set_proxy_digest_auth(username, password); -} -#endif - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT -inline void Client::enable_server_certificate_verification(bool enabled) { - cli_->enable_server_certificate_verification(enabled); -} -#endif - -inline void Client::set_logger(Logger logger) { cli_->set_logger(logger); } - -#ifdef CPPHTTPLIB_OPENSSL_SUPPORT -inline void Client::set_ca_cert_path(const std::string &ca_cert_file_path, - const std::string &ca_cert_dir_path) { - cli_->set_ca_cert_path(ca_cert_file_path, ca_cert_dir_path); -} - -inline void Client::set_ca_cert_store(X509_STORE *ca_cert_store) { - if (is_ssl_) { - static_cast(*cli_).set_ca_cert_store(ca_cert_store); - } else { - cli_->set_ca_cert_store(ca_cert_store); - } -} - -inline long Client::get_openssl_verify_result() const { - if (is_ssl_) { - return static_cast(*cli_).get_openssl_verify_result(); - } - return -1; // NOTE: -1 doesn't match any of X509_V_ERR_??? -} - -inline SSL_CTX *Client::ssl_context() const { - if (is_ssl_) { return static_cast(*cli_).ssl_context(); } - return nullptr; -} -#endif - -// ---------------------------------------------------------------------------- - -} // namespace httplib - -#if defined(_WIN32) && defined(CPPHTTPLIB_USE_POLL) -#undef poll -#endif - -#endif // CPPHTTPLIB_HTTPLIB_H diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/Stage.tsx b/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/Stage.tsx deleted file mode 100644 index 53250487668abfd94308bf4c6152455ba46877fd..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/Stage.tsx +++ /dev/null @@ -1,49 +0,0 @@ -// Copyright (c) Meta Platforms, Inc. and affiliates. -// All rights reserved. - -// This source code is licensed under the license found in the -// LICENSE file in the root directory of this source tree. 
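-
-// For intuition on the coordinate rescaling done in handleMouseMove below
-// (illustrative numbers): if the image's natural width is 1200px but the
-// element is rendered 600px wide, imageScale = 1200 / 600 = 2, so a pointer
-// at (100, 50) inside the element maps to (200, 100) in the image's natural
-// coordinate space, the space the ONNX mask model expects.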
-
-import React, { useContext } from "react";
-import * as _ from "underscore";
-import Tool from "./Tool";
-import { modelInputProps } from "./helpers/Interfaces";
-import AppContext from "./hooks/createContext";
-
-const Stage = () => {
-  const {
-    clicks: [, setClicks],
-    image: [image],
-  } = useContext(AppContext)!;
-
-  const getClick = (x: number, y: number): modelInputProps => {
-    const clickType = 1;
-    return { x, y, clickType };
-  };
-
-  // Get mouse position and scale the (x, y) coordinates back to the natural
-  // scale of the image. Update the state of clicks with setClicks to trigger
-  // the ONNX model to run and generate a new mask via a useEffect in App.tsx
-  const handleMouseMove = _.throttle((e: any) => {
-    let el = e.nativeEvent.target;
-    const rect = el.getBoundingClientRect();
-    let x = e.clientX - rect.left;
-    let y = e.clientY - rect.top;
-    const imageScale = image ? image.width / el.offsetWidth : 1;
-    x *= imageScale;
-    y *= imageScale;
-    const click = getClick(x, y);
-    if (click) setClicks([click]);
-  }, 15);
-
-  const flexCenterClasses = "flex items-center justify-center";
-  return (
-    <div className={`${flexCenterClasses} w-full h-full`}>
-      <div className={`${flexCenterClasses} relative w-[90%] h-[90%]`}>
-        <Tool handleMouseMove={handleMouseMove} />
-      </div>
-    </div>
- ); -}; - -export default Stage; diff --git a/spaces/InstaDeepAI/nucleotide_transformer_benchmark/app.py b/spaces/InstaDeepAI/nucleotide_transformer_benchmark/app.py deleted file mode 100644 index ac2912b99f9255d97b0157d43843b79d80c9c14e..0000000000000000000000000000000000000000 --- a/spaces/InstaDeepAI/nucleotide_transformer_benchmark/app.py +++ /dev/null @@ -1,266 +0,0 @@ -from typing import List - -import gradio as gr -import numpy as np -import pandas as pd - -_ORIGINAL_DF = pd.read_csv("./data/benchmark.csv") -_METRICS = ["MCC", "F1", "ACC"] -_AGGREGATION_METHODS = ["mean", "max", "min", "median"] -_TASKS = { - "histone_marks": [ - "H4", - "H3", - "H3K14ac", - "H3K4me1", - "H3K4me3", - "H3K4me2", - "H3K36me3", - "H4ac", - "H3K79me3", - "H3K9ac", - ], - "regulatory_elements": [ - "promoter_no_tata", - "enhancers", - "enhancers_types", - "promoter_all", - "promoter_tata", - ], - "RNA_production": [ - "splice_sites_donors", - "splice_sites_all", - "splice_sites_acceptors", - ], -} - -_BIBTEX = """@article{DallaTorre2023TheNT, - title={The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics}, - author={Hugo Dalla-Torre and Liam Gonzalez and Javier Mendoza Revilla and Nicolas Lopez Carranza and Adam Henryk Grzywaczewski and Francesco Oteri and Christian Dallago and Evan Trop and Hassan Sirelkhatim and Guillaume Richard and Marcin J. Skwark and Karim Beguir and Marie Lopez and Thomas Pierrot}, - journal={bioRxiv}, - year={2023}, - url={https://api.semanticscholar.org/CorpusID:255943445} -} -""" # noqa -_LAST_UPDATED = "Sept 15, 2023" - -banner_url = "./assets/logo.png" -_BANNER = f'
<img src="{banner_url}" alt="Banner">
' # noqa
-
-_INTRODUCTION_TEXT = """The 🤗 Nucleotide Transformer Leaderboard aims to track, rank and evaluate DNA foundational models on a set of curated downstream tasks introduced in the huggingface dataset [nucleotide_transformer_downstream_tasks](https://huggingface.co/datasets/InstaDeepAI/nucleotide_transformer_downstream_tasks), with a standardized evaluation protocol presented in the "ℹ️ Methods" tab.\n\n
-
-This leaderboard has been designed to provide, to the best of our ability, fair and robust comparisons between models. If you have any questions or concerns regarding our methodology, or if you would like another model to appear in this leaderboard, please reach out to m.lopez@instadeep.com and t.pierrot@instadeep.com. While we may not be able to take all requests into consideration, the team will always do its best to ensure that the benchmark stays as fair, relevant and up-to-date as possible.\n\n
- """ # noqa
-
-_METHODS_TEXT = """
-This leaderboard uses the downstream tasks benchmark and evaluation methodology described in the Nucleotide Transformer paper. We fine-tune each model on each task using a ten-fold validation strategy. For each model and each task, we report the aggregation over the ten folds for several metrics: the Matthews Correlation Coefficient (MCC), the macro F1-score (F1) and the accuracy (ACC). The Nucleotide Transformer, DNABert and Enformer models have been fine-tuned using the same parameter-efficient fine-tuning technique (IA3) with the same set of hyper-parameters. Due to the different nature of their architectures, the HyenaDNA models have been fully fine-tuned using the original code provided by the authors.
-\n\n
-
-Please keep in mind that the Enformer was originally trained in a supervised fashion to solve gene expression tasks. For the sake of benchmarking, we re-used the provided model torso as a pre-trained model for our benchmark, which is not the use intended and recommended by the original paper. We nonetheless think this comparison is interesting, as it highlights the differences between self-supervised and supervised pre-training, and we observe that the Enformer is a very competitive baseline even for tasks that differ from gene expression.
-\n\n
-
-For the sake of clarity, the tasks shown by default in this leaderboard are the human-related tasks, while the original Nucleotide Transformer paper reports performance over both yeast- and human-related tasks. To obtain the same results as the ones shown in the paper, please check all the task boxes above.
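-
-As a concrete sketch (illustrative numbers, mirroring the aggregation code of this app), each displayed score is derived from the ten per-fold scores as follows:
-
-```python
-import numpy as np
-
-# Ten per-fold scores for one (model, task, metric) triple -- illustrative values.
-fold_scores = np.array([0.61, 0.58, 0.63, 0.60, 0.59, 0.62, 0.57, 0.64, 0.60, 0.61])
-
-# Aggregate over the ten folds (mean/max/min/median, selectable above) and
-# keep three significant digits.
-reported = float(f"{np.mean(fold_scores):.3}")  # -> 0.605
-```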
-\n\n -""" # noqa - - -def retrieve_array_from_text(text): - return np.fromstring(text.replace("[", "").replace("]", ""), dtype=float, sep=",") - - -def format_number(x): - return float(f"{x:.3}") - - -def get_dataset( - histone_tasks: List[str], - regulatory_tasks: List[str], - rna_tasks: List[str], - target_metric: str = "MCC", - aggregation_method: str = "mean", -): - tasks = histone_tasks + regulatory_tasks + rna_tasks - - aggr_fn = getattr(np, aggregation_method) - scores = _ORIGINAL_DF[target_metric].apply(retrieve_array_from_text).apply(aggr_fn) - scores = scores.apply(format_number) - df = _ORIGINAL_DF.drop(columns=_METRICS) - df["Score"] = scores - df = df.pivot(index="Model", columns="Dataset", values="Score") - df = df[tasks] - df["All Tasks"] = df.agg("mean", axis="columns").apply(format_number) - columns = list(df.columns.values) - columns.sort() - df = df[columns] - df.reset_index(inplace=True) - df = df.rename(columns={"index": "Model"}) - df = df.sort_values(by=["All Tasks"], ascending=False) - - leaderboard_table = gr.components.Dataframe.update( - value=df, - # datatype=TYPES, - max_rows=None, - interactive=False, - visible=True, - ) - return leaderboard_table - - -def get_bar_plot( - histone_tasks: List[str], - regulatory_tasks: List[str], - rna_tasks: List[str], - target_metric: str = "MCC", - aggregation_method: str = "mean", -): - tasks = histone_tasks + regulatory_tasks + rna_tasks - - aggr_fn = getattr(np, aggregation_method) - scores = _ORIGINAL_DF[target_metric].apply(retrieve_array_from_text).apply(aggr_fn) - scores = scores.apply(format_number) - df = _ORIGINAL_DF.drop(columns=_METRICS) - df["Score"] = scores / len(tasks) - df = df.query(f"Dataset == {tasks}") - - bar_plot = gr.BarPlot.update( - df, - x="Model", - y="Score", - color="Dataset", - width=500, - x_label_angle=-45, - x_title="Model", - y_title="Score", - color_legend_title="Downstream Task", - ) - return bar_plot - - -with gr.Blocks() as demo: - with gr.Row(): - gr.Image(banner_url, height=160, scale=1) - gr.Markdown(_INTRODUCTION_TEXT, elem_classes="markdown-text") - # gr.Textbox(_INTRODUCTION_TEXT, scale=5) - - with gr.Row(): - metric_choice = gr.Dropdown( - choices=_METRICS, - value="MCC", - label="Metric displayed.", - ) - aggr_choice = gr.Dropdown( - choices=_AGGREGATION_METHODS, - value="mean", - label="Aggregation used over 10-folds.", - ) - - with gr.Row(): - regulatory_tasks = gr.CheckboxGroup( - choices=_TASKS["regulatory_elements"], - value=_TASKS["regulatory_elements"], - label="Regulatory Elements Downstream Tasks.", - info="Human data.", - scale=3, - ) - rna_tasks = gr.CheckboxGroup( - choices=_TASKS["RNA_production"], - value=_TASKS["RNA_production"], - label="RNA Production Downstream Tasks.", - info="Human data.", - scale=3, - ) - histone_tasks = gr.CheckboxGroup( - choices=_TASKS["histone_marks"], - label="Histone Modification Downstream Tasks.", - info="Yeast data.", - scale=4, - ) - - with gr.Tabs(elem_classes="tab-buttons") as tabs: - with gr.TabItem("🏅 Leaderboard", elem_id="od-benchmark-tab-table", id=0): - dataframe = gr.components.Dataframe( - elem_id="leaderboard-table", - ) - - with gr.TabItem("📈 Graph", elem_id="od-benchmark-tab-table", id=2): - bar_plot = gr.BarPlot( - elem_id="leaderboard-bar-plot", - ) - - with gr.TabItem("ℹ️ Methods", elem_id="od-benchmark-tab-table", id=1): - gr.Markdown(_METHODS_TEXT, elem_classes="markdown-text") - - gr.Markdown(f"Last updated on **{_LAST_UPDATED}**", elem_classes="markdown-text") - - with gr.Row(): - with gr.Accordion("📙 Citation", 
open=False): - gr.Textbox( - value=_BIBTEX, - lines=7, - label="Copy the BibTeX snippet to cite this source", - elem_id="citation-button", - ).style(show_copy_button=True) - - histone_tasks.change( - get_dataset, - inputs=[histone_tasks, regulatory_tasks, rna_tasks, metric_choice, aggr_choice], - outputs=dataframe, - ) - regulatory_tasks.change( - get_dataset, - inputs=[histone_tasks, regulatory_tasks, rna_tasks, metric_choice, aggr_choice], - outputs=dataframe, - ) - rna_tasks.change( - get_dataset, - inputs=[histone_tasks, regulatory_tasks, rna_tasks, metric_choice, aggr_choice], - outputs=dataframe, - ) - metric_choice.change( - get_dataset, - inputs=[histone_tasks, regulatory_tasks, rna_tasks, metric_choice, aggr_choice], - outputs=dataframe, - ) - aggr_choice.change( - get_dataset, - inputs=[histone_tasks, regulatory_tasks, rna_tasks, metric_choice, aggr_choice], - outputs=dataframe, - ) - demo.load( - fn=get_dataset, - inputs=[histone_tasks, regulatory_tasks, rna_tasks, metric_choice, aggr_choice], - outputs=dataframe, - ) - - histone_tasks.change( - get_bar_plot, - inputs=[histone_tasks, regulatory_tasks, rna_tasks, metric_choice, aggr_choice], - outputs=bar_plot, - ) - regulatory_tasks.change( - get_bar_plot, - inputs=[histone_tasks, regulatory_tasks, rna_tasks, metric_choice, aggr_choice], - outputs=bar_plot, - ) - rna_tasks.change( - get_bar_plot, - inputs=[histone_tasks, regulatory_tasks, rna_tasks, metric_choice, aggr_choice], - outputs=bar_plot, - ) - metric_choice.change( - get_bar_plot, - inputs=[histone_tasks, regulatory_tasks, rna_tasks, metric_choice, aggr_choice], - outputs=bar_plot, - ) - aggr_choice.change( - get_bar_plot, - inputs=[histone_tasks, regulatory_tasks, rna_tasks, metric_choice, aggr_choice], - outputs=bar_plot, - ) - demo.load( - fn=get_bar_plot, - inputs=[histone_tasks, regulatory_tasks, rna_tasks, metric_choice, aggr_choice], - outputs=bar_plot, - ) - -demo.launch() diff --git a/spaces/Intel/NeuralChat-ICX-INT4/README.md b/spaces/Intel/NeuralChat-ICX-INT4/README.md deleted file mode 100644 index a951c5793e94638ecb7958d8f7a8729dafe5132f..0000000000000000000000000000000000000000 --- a/spaces/Intel/NeuralChat-ICX-INT4/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: NeuralChat on ICX (using 4-bit) -emoji: 💻 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Jacks2003/3D_Photo_Inpainting/MiDaS/monodepth_net.py b/spaces/Jacks2003/3D_Photo_Inpainting/MiDaS/monodepth_net.py deleted file mode 100644 index 461db0807deaa98b98e4b5447d0a24b830ab7dbf..0000000000000000000000000000000000000000 --- a/spaces/Jacks2003/3D_Photo_Inpainting/MiDaS/monodepth_net.py +++ /dev/null @@ -1,186 +0,0 @@ -"""MonoDepthNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn -from torchvision import models - - -class MonoDepthNet(nn.Module): - """Network for monocular depth estimation. - """ - - def __init__(self, path=None, features=256): - """Init. - - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. 
- """ - super().__init__() - - resnet = models.resnet50(pretrained=False) - - self.pretrained = nn.Module() - self.scratch = nn.Module() - self.pretrained.layer1 = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, - resnet.maxpool, resnet.layer1) - - self.pretrained.layer2 = resnet.layer2 - self.pretrained.layer3 = resnet.layer3 - self.pretrained.layer4 = resnet.layer4 - - # adjust channel number of feature maps - self.scratch.layer1_rn = nn.Conv2d(256, features, kernel_size=3, stride=1, padding=1, bias=False) - self.scratch.layer2_rn = nn.Conv2d(512, features, kernel_size=3, stride=1, padding=1, bias=False) - self.scratch.layer3_rn = nn.Conv2d(1024, features, kernel_size=3, stride=1, padding=1, bias=False) - self.scratch.layer4_rn = nn.Conv2d(2048, features, kernel_size=3, stride=1, padding=1, bias=False) - - self.scratch.refinenet4 = FeatureFusionBlock(features) - self.scratch.refinenet3 = FeatureFusionBlock(features) - self.scratch.refinenet2 = FeatureFusionBlock(features) - self.scratch.refinenet1 = FeatureFusionBlock(features) - - # adaptive output module: 2 convolutions and upsampling - self.scratch.output_conv = nn.Sequential(nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1), - nn.Conv2d(128, 1, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode='bilinear')) - - # load model - if path: - self.load(path) - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return out - - def load(self, path): - """Load model from file. - - Args: - path (str): file path - """ - parameters = torch.load(path) - - self.load_state_dict(parameters) - - -class Interpolate(nn.Module): - """Interpolation module. - """ - - def __init__(self, scale_factor, mode): - """Init. - - Args: - scale_factor (float): scaling - mode (str): interpolation mode - """ - super(Interpolate, self).__init__() - - self.interp = nn.functional.interpolate - self.scale_factor = scale_factor - self.mode = mode - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: interpolated data - """ - x = self.interp(x, scale_factor=self.scale_factor, mode=self.mode, align_corners=False) - - return x - - -class ResidualConvUnit(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super().__init__() - - self.conv1 = nn.Conv2d(features, features, kernel_size=3, stride=1, padding=1, bias=True) - self.conv2 = nn.Conv2d(features, features, kernel_size=3, stride=1, padding=1, bias=False) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - """Forward pass. 
- - Args: - x (tensor): input - - Returns: - tensor: output - """ - out = self.relu(x) - out = self.conv1(out) - out = self.relu(out) - out = self.conv2(out) - - return out + x - - -class FeatureFusionBlock(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super().__init__() - - self.resConfUnit = ResidualConvUnit(features) - - def forward(self, *xs): - """Forward pass. - - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - output += self.resConfUnit(xs[1]) - - output = self.resConfUnit(output) - output = nn.functional.interpolate(output, scale_factor=2, - mode='bilinear', align_corners=True) - - return output diff --git a/spaces/Jaehan/zero-shot-classification-1/app.py b/spaces/Jaehan/zero-shot-classification-1/app.py deleted file mode 100644 index 74d0a07afce113c8454ff5d17a5d09a04eeb8e17..0000000000000000000000000000000000000000 --- a/spaces/Jaehan/zero-shot-classification-1/app.py +++ /dev/null @@ -1,17 +0,0 @@ -from transformers import pipeline -import gradio as grad - -zero_shot_cls = pipeline("zero-shot-classification") - -def classify(text, labels): - cls_labels = labels.split(",") - - #["IT", "software", "marketing", "sales", "R&D", "logistics"] - response = zero_shot_cls(text, cls_labels) - return response - -in_text = grad.Textbox(lines=1, label="English", placeholder="Text to be classified") -in_labels = grad.Textbox(lines=1, label="Labels", placeholder="Comma separated labels") -out = grad.Textbox(lines=1, label="Classification") - -grad.Interface(classify, inputs=[in_text, in_labels], outputs=out).launch() \ No newline at end of file diff --git a/spaces/Jamkonams/AutoGPT/autogpt/setup.py b/spaces/Jamkonams/AutoGPT/autogpt/setup.py deleted file mode 100644 index bfa68201b62bf67230a61fb1ecb00d1ab0ef0631..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/autogpt/setup.py +++ /dev/null @@ -1,77 +0,0 @@ -"""Set up the AI and its goals""" -from colorama import Fore, Style - -from autogpt import utils -from autogpt.config.ai_config import AIConfig -from autogpt.logs import logger - - -def prompt_user() -> AIConfig: - """Prompt the user for input - - Returns: - AIConfig: The AIConfig object containing the user's input - """ - ai_name = "" - # Construct the prompt - logger.typewriter_log( - "Welcome to Auto-GPT! ", - Fore.GREEN, - "run with '--help' for more information.", - speak_text=True, - ) - - logger.typewriter_log( - "Create an AI-Assistant:", - Fore.GREEN, - "Enter the name of your AI and its role below. Entering nothing will load" - " defaults.", - speak_text=True, - ) - - # Get AI Name from User - logger.typewriter_log( - "Name your AI: ", Fore.GREEN, "For example, 'Entrepreneur-GPT'" - ) - ai_name = utils.clean_input("AI Name: ") - if ai_name == "": - ai_name = "Entrepreneur-GPT" - - logger.typewriter_log( - f"{ai_name} here!", Fore.LIGHTBLUE_EX, "I am at your service.", speak_text=True - ) - - # Get AI Role from User - logger.typewriter_log( - "Describe your AI's role: ", - Fore.GREEN, - "For example, 'an AI designed to autonomously develop and run businesses with" - " the sole goal of increasing your net worth.'", - ) - ai_role = utils.clean_input(f"{ai_name} is: ") - if ai_role == "": - ai_role = "an AI designed to autonomously develop and run businesses with the" - " sole goal of increasing your net worth." 
- - # Enter up to 5 goals for the AI - logger.typewriter_log( - "Enter up to 5 goals for your AI: ", - Fore.GREEN, - "For example: \nIncrease net worth, Grow Twitter Account, Develop and manage" - " multiple businesses autonomously'", - ) - print("Enter nothing to load defaults, enter nothing when finished.", flush=True) - ai_goals = [] - for i in range(5): - ai_goal = utils.clean_input(f"{Fore.LIGHTBLUE_EX}Goal{Style.RESET_ALL} {i+1}: ") - if ai_goal == "": - break - ai_goals.append(ai_goal) - if not ai_goals: - ai_goals = [ - "Increase net worth", - "Grow Twitter Account", - "Develop and manage multiple businesses autonomously", - ] - - return AIConfig(ai_name, ai_role, ai_goals) diff --git a/spaces/Jarvis2301/Aku/text/__init__.py b/spaces/Jarvis2301/Aku/text/__init__.py deleted file mode 100644 index 663c4b6416affb53c9dc56dddbc8b2b65d4bf518..0000000000000000000000000000000000000000 --- a/spaces/Jarvis2301/Aku/text/__init__.py +++ /dev/null @@ -1,57 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence, clean_text - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/JeffJing/ZookChatBot/tls_client/cookies.py b/spaces/JeffJing/ZookChatBot/tls_client/cookies.py deleted file mode 100644 index 70db97f29e148d8661ba696f2d33daced4b1c40d..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/tls_client/cookies.py +++ /dev/null @@ -1,463 +0,0 @@ -from .structures import CaseInsensitiveDict - -from http.cookiejar import CookieJar, Cookie -from typing import MutableMapping, Union, Any -from urllib.parse import urlparse, urlunparse -from http.client import HTTPMessage -import copy - -try: - import threading -except ImportError: - import dummy_threading as threading - - -class MockRequest: - """ - Mimic a urllib2.Request to get the correct cookie string for the request. 
- """ - - def __init__(self, request_url: str, request_headers: CaseInsensitiveDict): - self.request_url = request_url - self.request_headers = request_headers - self._new_headers = {} - self.type = urlparse(self.request_url).scheme - - def get_type(self): - return self.type - - def get_host(self): - return urlparse(self.request_url).netloc - - def get_origin_req_host(self): - return self.get_host() - - def get_full_url(self): - # Only return the response's URL if the user hadn't set the Host - # header - if not self.request_headers.get("Host"): - return self.request_url - # If they did set it, retrieve it and reconstruct the expected domain - host = self.request_headers["Host"] - parsed = urlparse(self.request_url) - # Reconstruct the URL as we expect it - return urlunparse( - [ - parsed.scheme, - host, - parsed.path, - parsed.params, - parsed.query, - parsed.fragment, - ] - ) - - def is_unverifiable(self): - return True - - def has_header(self, name): - return name in self.request_headers or name in self._new_headers - - def get_header(self, name, default=None): - return self.request_headers.get(name, self._new_headers.get(name, default)) - - def add_unredirected_header(self, name, value): - self._new_headers[name] = value - - def get_new_headers(self): - return self._new_headers - - @property - def unverifiable(self): - return self.is_unverifiable() - - @property - def origin_req_host(self): - return self.get_origin_req_host() - - @property - def host(self): - return self.get_host() - - -class MockResponse: - """ - Wraps a httplib.HTTPMessage to mimic a urllib.addinfourl. - The objective is to retrieve the response cookies correctly. - """ - - def __init__(self, headers): - self._headers = headers - - def info(self): - return self._headers - - def getheaders(self, name): - self._headers.getheaders(name) - - -class CookieConflictError(RuntimeError): - """There are two cookies that meet the criteria specified in the cookie jar. - Use .get and .set and include domain and path args in order to be more specific. - """ - - -class RequestsCookieJar(CookieJar, MutableMapping): - """ Origin: requests library (https://github.com/psf/requests) - Compatibility class; is a cookielib.CookieJar, but exposes a dict - interface. - - This is the CookieJar we create by default for requests and sessions that - don't specify one, since some clients may expect response.cookies and - session.cookies to support dict operations. - - Requests does not use the dict interface internally; it's just for - compatibility with external client code. All requests code should work - out of the box with externally provided instances of ``CookieJar``, e.g. - ``LWPCookieJar`` and ``FileCookieJar``. - - Unlike a regular CookieJar, this class is pickleable. - - .. warning:: dictionary operations that are normally O(1) may be O(n). - """ - - def get(self, name, default=None, domain=None, path=None): - """Dict-like get() that also supports optional domain and path args in - order to resolve naming collisions from using one cookie jar over - multiple domains. - - .. warning:: operation is O(n), not O(1). - """ - try: - return self._find_no_duplicates(name, domain, path) - except KeyError: - return default - - def set(self, name, value, **kwargs): - """Dict-like set() that also supports optional domain and path args in - order to resolve naming collisions from using one cookie jar over - multiple domains. 
- """ - # support client code that unsets cookies by assignment of a None value: - if value is None: - remove_cookie_by_name( - self, name, domain=kwargs.get("domain"), path=kwargs.get("path") - ) - return - - c = create_cookie(name, value, **kwargs) - self.set_cookie(c) - return c - - def iterkeys(self): - """Dict-like iterkeys() that returns an iterator of names of cookies - from the jar. - - .. seealso:: itervalues() and iteritems(). - """ - for cookie in iter(self): - yield cookie.name - - def keys(self): - """Dict-like keys() that returns a list of names of cookies from the - jar. - - .. seealso:: values() and items(). - """ - return list(self.iterkeys()) - - def itervalues(self): - """Dict-like itervalues() that returns an iterator of values of cookies - from the jar. - - .. seealso:: iterkeys() and iteritems(). - """ - for cookie in iter(self): - yield cookie.value - - def values(self): - """Dict-like values() that returns a list of values of cookies from the - jar. - - .. seealso:: keys() and items(). - """ - return list(self.itervalues()) - - def iteritems(self): - """Dict-like iteritems() that returns an iterator of name-value tuples - from the jar. - - .. seealso:: iterkeys() and itervalues(). - """ - for cookie in iter(self): - yield cookie.name, cookie.value - - def items(self): - """Dict-like items() that returns a list of name-value tuples from the - jar. Allows client-code to call ``dict(RequestsCookieJar)`` and get a - vanilla python dict of key value pairs. - - .. seealso:: keys() and values(). - """ - return list(self.iteritems()) - - def list_domains(self): - """Utility method to list all the domains in the jar.""" - domains = [] - for cookie in iter(self): - if cookie.domain not in domains: - domains.append(cookie.domain) - return domains - - def list_paths(self): - """Utility method to list all the paths in the jar.""" - paths = [] - for cookie in iter(self): - if cookie.path not in paths: - paths.append(cookie.path) - return paths - - def multiple_domains(self): - """Returns True if there are multiple domains in the jar. - Returns False otherwise. - - :rtype: bool - """ - domains = [] - for cookie in iter(self): - if cookie.domain is not None and cookie.domain in domains: - return True - domains.append(cookie.domain) - return False # there is only one domain in jar - - def get_dict(self, domain=None, path=None): - """Takes as an argument an optional domain and path and returns a plain - old Python dict of name-value pairs of cookies that meet the - requirements. - - :rtype: dict - """ - dictionary = {} - for cookie in iter(self): - if (domain is None or cookie.domain == domain) and ( - path is None or cookie.path == path - ): - dictionary[cookie.name] = cookie.value - return dictionary - - def __contains__(self, name): - try: - return super().__contains__(name) - except CookieConflictError: - return True - - def __getitem__(self, name): - """Dict-like __getitem__() for compatibility with client code. Throws - exception if there are more than one cookie with name. In that case, - use the more explicit get() method instead. - - .. warning:: operation is O(n), not O(1). - """ - return self._find_no_duplicates(name) - - def __setitem__(self, name, value): - """Dict-like __setitem__ for compatibility with client code. Throws - exception if there is already a cookie of that name in the jar. In that - case, use the more explicit set() method instead. - """ - self.set(name, value) - - def __delitem__(self, name): - """Deletes a cookie given a name. 
Wraps ``cookielib.CookieJar``'s - ``remove_cookie_by_name()``. - """ - remove_cookie_by_name(self, name) - - def set_cookie(self, cookie, *args, **kwargs): - if ( - hasattr(cookie.value, "startswith") - and cookie.value.startswith('"') - and cookie.value.endswith('"') - ): - cookie.value = cookie.value.replace('\\"', "") - return super().set_cookie(cookie, *args, **kwargs) - - def update(self, other): - """Updates this jar with cookies from another CookieJar or dict-like""" - if isinstance(other, CookieJar): - for cookie in other: - self.set_cookie(copy.copy(cookie)) - else: - super().update(other) - - def _find(self, name, domain=None, path=None): - """Requests uses this method internally to get cookie values. - - If there are conflicting cookies, _find arbitrarily chooses one. - See _find_no_duplicates if you want an exception thrown if there are - conflicting cookies. - - :param name: a string containing name of cookie - :param domain: (optional) string containing domain of cookie - :param path: (optional) string containing path of cookie - :return: cookie.value - """ - for cookie in iter(self): - if cookie.name == name: - if domain is None or cookie.domain == domain: - if path is None or cookie.path == path: - return cookie.value - - raise KeyError(f"name={name!r}, domain={domain!r}, path={path!r}") - - def _find_no_duplicates(self, name, domain=None, path=None): - """Both ``__get_item__`` and ``get`` call this function: it's never - used elsewhere in Requests. - - :param name: a string containing name of cookie - :param domain: (optional) string containing domain of cookie - :param path: (optional) string containing path of cookie - :raises KeyError: if cookie is not found - :raises CookieConflictError: if there are multiple cookies - that match name and optionally domain and path - :return: cookie.value - """ - toReturn = None - for cookie in iter(self): - if cookie.name == name: - if domain is None or cookie.domain == domain: - if path is None or cookie.path == path: - if toReturn is not None: - # if there are multiple cookies that meet passed in criteria - raise CookieConflictError( - f"There are multiple cookies with name, {name!r}" - ) - # we will eventually return this as long as no cookie conflict - toReturn = cookie.value - - if toReturn: - return toReturn - raise KeyError(f"name={name!r}, domain={domain!r}, path={path!r}") - - def __getstate__(self): - """Unlike a normal CookieJar, this class is pickleable.""" - state = self.__dict__.copy() - # remove the unpickleable RLock object - state.pop("_cookies_lock") - return state - - def __setstate__(self, state): - """Unlike a normal CookieJar, this class is pickleable.""" - self.__dict__.update(state) - if "_cookies_lock" not in self.__dict__: - self._cookies_lock = threading.RLock() - - def copy(self): - """Return a copy of this RequestsCookieJar.""" - new_cj = RequestsCookieJar() - new_cj.set_policy(self.get_policy()) - new_cj.update(self) - return new_cj - - def get_policy(self): - """Return the CookiePolicy instance used.""" - return self._policy - - -def remove_cookie_by_name(cookiejar: RequestsCookieJar, name: str, domain: str = None, path: str = None): - """Removes a cookie by name, by default over all domains and paths.""" - clearables = [] - for cookie in cookiejar: - if cookie.name != name: - continue - if domain is not None and domain != cookie.domain: - continue - if path is not None and path != cookie.path: - continue - clearables.append((cookie.domain, cookie.path, cookie.name)) - - for domain, path, name in 
clearables: - cookiejar.clear(domain, path, name) - - -def create_cookie(name: str, value: str, **kwargs: Any) -> Cookie: - """Make a cookie from underspecified parameters.""" - result = { - "version": 0, - "name": name, - "value": value, - "port": None, - "domain": "", - "path": "/", - "secure": False, - "expires": None, - "discard": True, - "comment": None, - "comment_url": None, - "rest": {"HttpOnly": None}, - "rfc2109": False, - } - - badargs = set(kwargs) - set(result) - if badargs: - raise TypeError( - f"create_cookie() got unexpected keyword arguments: {list(badargs)}" - ) - - result.update(kwargs) - result["port_specified"] = bool(result["port"]) - result["domain_specified"] = bool(result["domain"]) - result["domain_initial_dot"] = result["domain"].startswith(".") - result["path_specified"] = bool(result["path"]) - - return Cookie(**result) - - -def cookiejar_from_dict(cookie_dict: dict) -> RequestsCookieJar: - """transform a dict to CookieJar""" - cookie_jar = RequestsCookieJar() - if cookie_dict is not None: - for name, value in cookie_dict.items(): - cookie_jar.set_cookie(create_cookie(name=name, value=value)) - return cookie_jar - - -def merge_cookies(cookiejar: RequestsCookieJar, cookies: Union[dict, RequestsCookieJar]) -> RequestsCookieJar: - """Merge cookies in session and cookies provided in request""" - if type(cookies) is dict: - cookies = cookiejar_from_dict(cookies) - - for cookie in cookies: - cookiejar.set_cookie(cookie) - - return cookiejar - - -def get_cookie_header(request_url: str, request_headers: CaseInsensitiveDict, cookie_jar: RequestsCookieJar) -> str: - r = MockRequest(request_url, request_headers) - cookie_jar.add_cookie_header(r) - return r.get_new_headers().get("Cookie") - - -def extract_cookies_to_jar( - request_url: str, - request_headers: CaseInsensitiveDict, - cookie_jar: RequestsCookieJar, - response_headers: dict - ) -> RequestsCookieJar: - response_cookie_jar = cookiejar_from_dict({}) - - req = MockRequest(request_url, request_headers) - # mimic HTTPMessage - http_message = HTTPMessage() - http_message._headers = [] - for header_name, header_values in response_headers.items(): - for header_value in header_values: - http_message._headers.append( - (header_name, header_value) - ) - res = MockResponse(http_message) - response_cookie_jar.extract_cookies(res, req) - - merge_cookies(cookie_jar, response_cookie_jar) - return response_cookie_jar diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/README.md b/spaces/JohnSmith9982/ChuanhuChatGPT/README.md deleted file mode 100644 index 820a9a57349cfbf6d565c797ed822c398347e682..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChuanhuChatGPT -emoji: 🐯 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.40.0 -app_file: ChuanhuChatbot.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/JoshMe1/YTYT/app.py b/spaces/JoshMe1/YTYT/app.py deleted file mode 100644 index b4665a889f6d0553662fa9fca285fb91afc75e52..0000000000000000000000000000000000000000 --- a/spaces/JoshMe1/YTYT/app.py +++ /dev/null @@ -1,72 +0,0 @@ -import streamlit as st -import whisper -from pytube import YouTube -import os - -def get_audio(url): - yt = YouTube(url) - return yt.streams.filter(only_audio=True)[0].download(filename="tmp.mp4") - -def get_transcript(url, model_size, lang, format): - - model = 
whisper.load_model(model_size)
-
-    if lang == "None":
-        lang = None
-
-    result = model.transcribe(get_audio(url), fp16=False, language=lang)
-
-    if format == "None":
-        return result["text"]
-    elif format == ".srt":
-        return format_to_srt(result["segments"])
-
-def format_to_srt(segments):
-    output = ""
-    for i, segment in enumerate(segments):
-        output += f"{i + 1}\n"
-        output += f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n"
-        output += f"{segment['text']}\n\n"
-    return output
-
-def format_timestamp(t):
-    hh = t//3600
-    mm = (t - hh*3600)//60
-    ss = t - hh*3600 - mm*60
-    mi = (t - int(t))*1000
-    return f"{int(hh):02d}:{int(mm):02d}:{int(ss):02d},{int(mi):03d}"
-
-def save_srt(transcript):
-    with open("transcript.srt", "w") as f:
-        f.write(transcript)
-    return True
-
-def download_srt(transcript):
-    # The transcript is already .srt-formatted by get_transcript, so it can be
-    # offered for download directly.
-    return st.download_button(
-        label="Download Transcript (.srt)",
-        data=transcript,
-        file_name="transcript.srt",
-    )
-
-langs = ["None"] + sorted(list(whisper.tokenizer.LANGUAGES.values()))
-model_size = list(whisper._MODELS.keys())
-
-st.title("Whisper Transcription Demo")
-
-url = st.text_input("YouTube video URL")
-model_size = st.selectbox("Model", model_size)
-lang = st.selectbox("Language (Optional)", langs)
-format = st.selectbox("Timestamps? (Optional)", ["None", ".srt"])
-st.markdown("Larger models are more accurate, but slower. For a 1min video, it'll take ~30s (tiny), ~1min (base), ~3min (small), ~5min (medium), etc.")
-
-if st.button("Transcribe"):
-    transcript = get_transcript(url, model_size, lang, format)
-    st.text_area("Transcription of the video", transcript)
-    if format == ".srt":
-        save_srt(transcript)
-        download_srt(transcript)
diff --git a/spaces/Junity/TokaiTeio-SVC/inference/infer_tool.py b/spaces/Junity/TokaiTeio-SVC/inference/infer_tool.py
deleted file mode 100644
index 415e956ce2b50500ea5bfb58909f5ab7863dbe56..0000000000000000000000000000000000000000
--- a/spaces/Junity/TokaiTeio-SVC/inference/infer_tool.py
+++ /dev/null
@@ -1,244 +0,0 @@
-import hashlib
-import io
-import json
-import logging
-import os
-import time
-from pathlib import Path
-from inference import slicer
-
-import librosa
-import numpy as np
-# import onnxruntime
-import parselmouth
-import soundfile
-import torch
-import torchaudio
-
-import cluster
-from hubert import hubert_model
-import utils
-from models import SynthesizerTrn
-
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-
-def read_temp(file_name):
-    if not os.path.exists(file_name):
-        with open(file_name, "w") as f:
-            f.write(json.dumps({"info": "temp_dict"}))
-        return {}
-    else:
-        try:
-            with open(file_name, "r") as f:
-                data = f.read()
-            data_dict = json.loads(data)
-            if os.path.getsize(file_name) > 50 * 1024 * 1024:
-                f_name = file_name.replace("\\", "/").split("/")[-1]
-                print(f"clean {f_name}")
-                for wav_hash in list(data_dict.keys()):
-                    if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600:
-                        del data_dict[wav_hash]
-        except Exception as e:
-            print(e)
-            print(f"{file_name} error, auto rebuild file")
-            data_dict = {"info": "temp_dict"}
-        return data_dict
-
-
-def write_temp(file_name, data):
-    with open(file_name, "w") as f:
-        f.write(json.dumps(data))
-
-
-def timeit(func):
-    def run(*args, **kwargs):
-        t = time.time()
-        res = func(*args, **kwargs)
-        print('executing
\'%s\' costed %.3fs' % (func.__name__, time.time() - t)) - return res - - return run - - -def format_wav(audio_path): - if Path(audio_path).suffix == '.wav': - return - raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None) - soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate) - - -def get_end_file(dir_path, end): - file_lists = [] - for root, dirs, files in os.walk(dir_path): - files = [f for f in files if f[0] != '.'] - dirs[:] = [d for d in dirs if d[0] != '.'] - for f_file in files: - if f_file.endswith(end): - file_lists.append(os.path.join(root, f_file).replace("\\", "/")) - return file_lists - - -def get_md5(content): - return hashlib.new("md5", content).hexdigest() - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - -def pad_array(arr, target_length): - current_length = arr.shape[0] - if current_length >= target_length: - return arr - else: - pad_width = target_length - current_length - pad_left = pad_width // 2 - pad_right = pad_width - pad_left - padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0)) - return padded_arr - - -class Svc(object): - def __init__(self, net_g_path, config_path, - device=None, - cluster_model_path="logs/44k/kmeans_10000.pt"): - self.net_g_path = net_g_path - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - self.net_g_ms = None - self.hps_ms = utils.get_hparams_from_file(config_path) - self.target_sample = self.hps_ms.data.sampling_rate - self.hop_size = self.hps_ms.data.hop_length - self.spk2id = self.hps_ms.spk - # 加载hubert - self.hubert_model = utils.get_hubert_model().to(self.dev) - self.load_model() - if os.path.exists(cluster_model_path): - self.cluster_model = cluster.get_cluster_model(cluster_model_path) - - def load_model(self): - # 获取模型配置 - self.net_g_ms = SynthesizerTrn( - self.hps_ms.data.filter_length // 2 + 1, - self.hps_ms.train.segment_size // self.hps_ms.data.hop_length, - **self.hps_ms.model) - _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None) - if "half" in self.net_g_path and torch.cuda.is_available(): - _ = self.net_g_ms.half().eval().to(self.dev) - else: - _ = self.net_g_ms.eval().to(self.dev) - - - - def get_unit_f0(self, in_path, tran, cluster_infer_ratio, speaker): - - wav, sr = librosa.load(in_path, sr=self.target_sample) - - f0 = utils.compute_f0_parselmouth(wav, sampling_rate=self.target_sample, hop_length=self.hop_size) - f0, uv = utils.interpolate_f0(f0) - f0 = torch.FloatTensor(f0) - uv = torch.FloatTensor(uv) - f0 = f0 * 2 ** (tran / 12) - f0 = f0.unsqueeze(0).to(self.dev) - uv = uv.unsqueeze(0).to(self.dev) - - wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000) - wav16k = torch.from_numpy(wav16k).to(self.dev) - c = utils.get_hubert_content(self.hubert_model, wav_16k_tensor=wav16k) - c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1]) - - if cluster_infer_ratio !=0: - cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T - cluster_c = torch.FloatTensor(cluster_c).to(self.dev) - c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c - - c = c.unsqueeze(0) - return c, f0, uv - - def infer(self, speaker, tran, raw_path, - cluster_infer_ratio=0, - auto_predict_f0=False, - noice_scale=0.4): - speaker_id = 
self.spk2id[speaker] - sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0) - c, f0, uv = self.get_unit_f0(raw_path, tran, cluster_infer_ratio, speaker) - if "half" in self.net_g_path and torch.cuda.is_available(): - c = c.half() - with torch.no_grad(): - start = time.time() - audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float() - use_time = time.time() - start - print("vits use time:{}".format(use_time)) - return audio, audio.shape[-1] - - def slice_inference(self,raw_audio_path, spk, tran, slice_db,cluster_infer_ratio, auto_predict_f0,noice_scale, pad_seconds=0.5): - wav_path = raw_audio_path - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - # padd - pad_len = int(audio_sr * pad_seconds) - data = np.concatenate([np.zeros([pad_len]), data, np.zeros([pad_len])]) - length = int(np.ceil(len(data) / audio_sr * self.target_sample)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - else: - out_audio, out_sr = self.infer(spk, tran, raw_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale - ) - _audio = out_audio.cpu().numpy() - - pad_len = int(self.target_sample * pad_seconds) - _audio = _audio[pad_len:-pad_len] - audio.extend(list(_audio)) - return np.array(audio) - - -class RealTimeVC: - def __init__(self): - self.last_chunk = None - self.last_o = None - self.chunk_len = 16000 # 区块长度 - self.pre_len = 3840 # 交叉淡化长度,640的倍数 - - """输入输出都是1维numpy 音频波形数组""" - - def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path): - import maad - audio, sr = torchaudio.load(input_wav_path) - audio = audio.cpu().numpy()[0] - temp_wav = io.BytesIO() - if self.last_chunk is None: - input_wav_path.seek(0) - audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path) - audio = audio.cpu().numpy() - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return audio[-self.chunk_len:] - else: - audio = np.concatenate([self.last_chunk, audio]) - soundfile.write(temp_wav, audio, sr, format="wav") - temp_wav.seek(0) - audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav) - audio = audio.cpu().numpy() - ret = maad.util.crossfade(self.last_o, audio, self.pre_len) - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return ret[self.chunk_len:2 * self.chunk_len] diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/layers/pixel_decoder.py b/spaces/KyanChen/RSPrompter/mmdet/models/layers/pixel_decoder.py deleted file mode 100644 index fb61434045eb9996276518577800132e4a25eb3e..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/layers/pixel_decoder.py +++ /dev/null @@ -1,249 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
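-#
-# Rough sketch of the top-down fusion the decoder below implements: starting
-# from the coarsest backbone map,
-#     y = lateral_conv(feats[i]) + interpolate(y, size=feats[i].shape[-2:])
-#     y = output_conv(y)
-# is repeated per level, and a final 3x3 conv yields per-pixel mask features.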
-from typing import List, Tuple, Union - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Conv2d, ConvModule -from mmengine.model import BaseModule, ModuleList, caffe2_xavier_init -from torch import Tensor - -from mmdet.registry import MODELS -from mmdet.utils import ConfigType, OptMultiConfig -from .positional_encoding import SinePositionalEncoding -from .transformer import DetrTransformerEncoder - - -@MODELS.register_module() -class PixelDecoder(BaseModule): - """Pixel decoder with a structure like fpn. - - Args: - in_channels (list[int] | tuple[int]): Number of channels in the - input feature maps. - feat_channels (int): Number channels for feature. - out_channels (int): Number channels for output. - norm_cfg (:obj:`ConfigDict` or dict): Config for normalization. - Defaults to dict(type='GN', num_groups=32). - act_cfg (:obj:`ConfigDict` or dict): Config for activation. - Defaults to dict(type='ReLU'). - encoder (:obj:`ConfigDict` or dict): Config for transorformer - encoder.Defaults to None. - positional_encoding (:obj:`ConfigDict` or dict): Config for - transformer encoder position encoding. Defaults to - dict(type='SinePositionalEncoding', num_feats=128, - normalize=True). - init_cfg (:obj:`ConfigDict` or dict or list[:obj:`ConfigDict` or \ - dict], optional): Initialization config dict. Defaults to None. - """ - - def __init__(self, - in_channels: Union[List[int], Tuple[int]], - feat_channels: int, - out_channels: int, - norm_cfg: ConfigType = dict(type='GN', num_groups=32), - act_cfg: ConfigType = dict(type='ReLU'), - init_cfg: OptMultiConfig = None) -> None: - super().__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.num_inputs = len(in_channels) - self.lateral_convs = ModuleList() - self.output_convs = ModuleList() - self.use_bias = norm_cfg is None - for i in range(0, self.num_inputs - 1): - lateral_conv = ConvModule( - in_channels[i], - feat_channels, - kernel_size=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=None) - output_conv = ConvModule( - feat_channels, - feat_channels, - kernel_size=3, - stride=1, - padding=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.lateral_convs.append(lateral_conv) - self.output_convs.append(output_conv) - - self.last_feat_conv = ConvModule( - in_channels[-1], - feat_channels, - kernel_size=3, - padding=1, - stride=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.mask_feature = Conv2d( - feat_channels, out_channels, kernel_size=3, stride=1, padding=1) - - def init_weights(self) -> None: - """Initialize weights.""" - for i in range(0, self.num_inputs - 2): - caffe2_xavier_init(self.lateral_convs[i].conv, bias=0) - caffe2_xavier_init(self.output_convs[i].conv, bias=0) - - caffe2_xavier_init(self.mask_feature, bias=0) - caffe2_xavier_init(self.last_feat_conv, bias=0) - - def forward(self, feats: List[Tensor], - batch_img_metas: List[dict]) -> Tuple[Tensor, Tensor]: - """ - Args: - feats (list[Tensor]): Feature maps of each level. Each has - shape of (batch_size, c, h, w). - batch_img_metas (list[dict]): List of image information. - Pass in for creating more accurate padding mask. Not - used here. - - Returns: - tuple[Tensor, Tensor]: a tuple containing the following: - - - mask_feature (Tensor): Shape (batch_size, c, h, w). - - memory (Tensor): Output of last stage of backbone.\ - Shape (batch_size, c, h, w). 
- """ - y = self.last_feat_conv(feats[-1]) - for i in range(self.num_inputs - 2, -1, -1): - x = feats[i] - cur_feat = self.lateral_convs[i](x) - y = cur_feat + \ - F.interpolate(y, size=cur_feat.shape[-2:], mode='nearest') - y = self.output_convs[i](y) - - mask_feature = self.mask_feature(y) - memory = feats[-1] - return mask_feature, memory - - -@MODELS.register_module() -class TransformerEncoderPixelDecoder(PixelDecoder): - """Pixel decoder with transormer encoder inside. - - Args: - in_channels (list[int] | tuple[int]): Number of channels in the - input feature maps. - feat_channels (int): Number channels for feature. - out_channels (int): Number channels for output. - norm_cfg (:obj:`ConfigDict` or dict): Config for normalization. - Defaults to dict(type='GN', num_groups=32). - act_cfg (:obj:`ConfigDict` or dict): Config for activation. - Defaults to dict(type='ReLU'). - encoder (:obj:`ConfigDict` or dict): Config for transformer encoder. - Defaults to None. - positional_encoding (:obj:`ConfigDict` or dict): Config for - transformer encoder position encoding. Defaults to - dict(num_feats=128, normalize=True). - init_cfg (:obj:`ConfigDict` or dict or list[:obj:`ConfigDict` or \ - dict], optional): Initialization config dict. Defaults to None. - """ - - def __init__(self, - in_channels: Union[List[int], Tuple[int]], - feat_channels: int, - out_channels: int, - norm_cfg: ConfigType = dict(type='GN', num_groups=32), - act_cfg: ConfigType = dict(type='ReLU'), - encoder: ConfigType = None, - positional_encoding: ConfigType = dict( - num_feats=128, normalize=True), - init_cfg: OptMultiConfig = None) -> None: - super().__init__( - in_channels=in_channels, - feat_channels=feat_channels, - out_channels=out_channels, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - init_cfg=init_cfg) - self.last_feat_conv = None - - self.encoder = DetrTransformerEncoder(**encoder) - self.encoder_embed_dims = self.encoder.embed_dims - assert self.encoder_embed_dims == feat_channels, 'embed_dims({}) of ' \ - 'tranformer encoder must equal to feat_channels({})'.format( - feat_channels, self.encoder_embed_dims) - self.positional_encoding = SinePositionalEncoding( - **positional_encoding) - self.encoder_in_proj = Conv2d( - in_channels[-1], feat_channels, kernel_size=1) - self.encoder_out_proj = ConvModule( - feat_channels, - feat_channels, - kernel_size=3, - stride=1, - padding=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def init_weights(self) -> None: - """Initialize weights.""" - for i in range(0, self.num_inputs - 2): - caffe2_xavier_init(self.lateral_convs[i].conv, bias=0) - caffe2_xavier_init(self.output_convs[i].conv, bias=0) - - caffe2_xavier_init(self.mask_feature, bias=0) - caffe2_xavier_init(self.encoder_in_proj, bias=0) - caffe2_xavier_init(self.encoder_out_proj.conv, bias=0) - - for p in self.encoder.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward(self, feats: List[Tensor], - batch_img_metas: List[dict]) -> Tuple[Tensor, Tensor]: - """ - Args: - feats (list[Tensor]): Feature maps of each level. Each has - shape of (batch_size, c, h, w). - batch_img_metas (list[dict]): List of image information. Pass in - for creating more accurate padding mask. - - Returns: - tuple: a tuple containing the following: - - - mask_feature (Tensor): shape (batch_size, c, h, w). - - memory (Tensor): shape (batch_size, c, h, w). 
- """ - feat_last = feats[-1] - bs, c, h, w = feat_last.shape - input_img_h, input_img_w = batch_img_metas[0]['batch_input_shape'] - padding_mask = feat_last.new_ones((bs, input_img_h, input_img_w), - dtype=torch.float32) - for i in range(bs): - img_h, img_w = batch_img_metas[i]['img_shape'] - padding_mask[i, :img_h, :img_w] = 0 - padding_mask = F.interpolate( - padding_mask.unsqueeze(1), - size=feat_last.shape[-2:], - mode='nearest').to(torch.bool).squeeze(1) - - pos_embed = self.positional_encoding(padding_mask) - feat_last = self.encoder_in_proj(feat_last) - # (batch_size, c, h, w) -> (batch_size, num_queries, c) - feat_last = feat_last.flatten(2).permute(0, 2, 1) - pos_embed = pos_embed.flatten(2).permute(0, 2, 1) - # (batch_size, h, w) -> (batch_size, h*w) - padding_mask = padding_mask.flatten(1) - memory = self.encoder( - query=feat_last, - query_pos=pos_embed, - key_padding_mask=padding_mask) - # (batch_size, num_queries, c) -> (batch_size, c, h, w) - memory = memory.permute(0, 2, 1).view(bs, self.encoder_embed_dims, h, - w) - y = self.encoder_out_proj(memory) - for i in range(self.num_inputs - 2, -1, -1): - x = feats[i] - cur_feat = self.lateral_convs[i](x) - y = cur_feat + \ - F.interpolate(y, size=cur_feat.shape[-2:], mode='nearest') - y = self.output_convs[i](y) - - mask_feature = self.mask_feature(y) - return mask_feature, memory diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py deleted file mode 100644 index 55c5c8e4fae7d4e941a770d985c7253fd70f2226..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.models.layers import ResLayer, SimplifiedBasicBlock -from mmdet.registry import MODELS -from .fused_semantic_head import FusedSemanticHead - - -@MODELS.register_module() -class SCNetSemanticHead(FusedSemanticHead): - """Mask head for `SCNet `_. - - Args: - conv_to_res (bool, optional): if True, change the conv layers to - ``SimplifiedBasicBlock``. - """ - - def __init__(self, conv_to_res: bool = True, **kwargs) -> None: - super().__init__(**kwargs) - self.conv_to_res = conv_to_res - if self.conv_to_res: - num_res_blocks = self.num_convs // 2 - self.convs = ResLayer( - SimplifiedBasicBlock, - self.in_channels, - self.conv_out_channels, - num_res_blocks, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - self.num_convs = num_res_blocks diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/scnet_roi_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/scnet_roi_head.py deleted file mode 100644 index e6d2bc1915bae38011cc75a720e48ed53b51ddb5..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/scnet_roi_head.py +++ /dev/null @@ -1,677 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
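-# SCNetRoIHead (defined below) extends CascadeRoIHead with the optional
-# semantic segmentation, global-context and feature-relay branches that
-# SCNet adds on top of the cascade box/mask pipeline.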
-from typing import List, Optional, Tuple - -import torch -import torch.nn.functional as F -from mmengine.structures import InstanceData -from torch import Tensor - -from mmdet.registry import MODELS -from mmdet.structures import SampleList -from mmdet.structures.bbox import bbox2roi -from mmdet.utils import ConfigType, InstanceList, OptConfigType -from ..layers import adaptive_avg_pool2d -from ..task_modules.samplers import SamplingResult -from ..utils import empty_instances, unpack_gt_instances -from .cascade_roi_head import CascadeRoIHead - - -@MODELS.register_module() -class SCNetRoIHead(CascadeRoIHead): - """RoIHead for `SCNet `_. - - Args: - num_stages (int): number of cascade stages. - stage_loss_weights (list): loss weight of cascade stages. - semantic_roi_extractor (dict): config to init semantic roi extractor. - semantic_head (dict): config to init semantic head. - feat_relay_head (dict): config to init feature_relay_head. - glbctx_head (dict): config to init global context head. - """ - - def __init__(self, - num_stages: int, - stage_loss_weights: List[float], - semantic_roi_extractor: OptConfigType = None, - semantic_head: OptConfigType = None, - feat_relay_head: OptConfigType = None, - glbctx_head: OptConfigType = None, - **kwargs) -> None: - super().__init__( - num_stages=num_stages, - stage_loss_weights=stage_loss_weights, - **kwargs) - assert self.with_bbox and self.with_mask - assert not self.with_shared_head # shared head is not supported - - if semantic_head is not None: - self.semantic_roi_extractor = MODELS.build(semantic_roi_extractor) - self.semantic_head = MODELS.build(semantic_head) - - if feat_relay_head is not None: - self.feat_relay_head = MODELS.build(feat_relay_head) - - if glbctx_head is not None: - self.glbctx_head = MODELS.build(glbctx_head) - - def init_mask_head(self, mask_roi_extractor: ConfigType, - mask_head: ConfigType) -> None: - """Initialize ``mask_head``""" - if mask_roi_extractor is not None: - self.mask_roi_extractor = MODELS.build(mask_roi_extractor) - self.mask_head = MODELS.build(mask_head) - - # TODO move to base_roi_head later - @property - def with_semantic(self) -> bool: - """bool: whether the head has semantic head""" - return hasattr(self, - 'semantic_head') and self.semantic_head is not None - - @property - def with_feat_relay(self) -> bool: - """bool: whether the head has feature relay head""" - return (hasattr(self, 'feat_relay_head') - and self.feat_relay_head is not None) - - @property - def with_glbctx(self) -> bool: - """bool: whether the head has global context head""" - return hasattr(self, 'glbctx_head') and self.glbctx_head is not None - - def _fuse_glbctx(self, roi_feats: Tensor, glbctx_feat: Tensor, - rois: Tensor) -> Tensor: - """Fuse global context feats with roi feats. - - Args: - roi_feats (Tensor): RoI features. - glbctx_feat (Tensor): Global context feature.. - rois (Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. - - Returns: - Tensor: Fused feature. - """ - assert roi_feats.size(0) == rois.size(0) - # RuntimeError: isDifferentiableType(variable.scalar_type()) - # INTERNAL ASSERT FAILED if detach() is not used when calling - # roi_head.predict(). 
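-        # Hence rois[:, 0] is detached and moved to CPU before the
-        # torch.unique call below.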
- img_inds = torch.unique(rois[:, 0].detach().cpu(), sorted=True).long() - fused_feats = torch.zeros_like(roi_feats) - for img_id in img_inds: - inds = (rois[:, 0] == img_id.item()) - fused_feats[inds] = roi_feats[inds] + glbctx_feat[img_id] - return fused_feats - - def _slice_pos_feats(self, feats: Tensor, - sampling_results: List[SamplingResult]) -> Tensor: - """Get features from pos rois. - - Args: - feats (Tensor): Input features. - sampling_results (list["obj:`SamplingResult`]): Sampling results. - - Returns: - Tensor: Sliced features. - """ - num_rois = [res.priors.size(0) for res in sampling_results] - num_pos_rois = [res.pos_priors.size(0) for res in sampling_results] - inds = torch.zeros(sum(num_rois), dtype=torch.bool) - start = 0 - for i in range(len(num_rois)): - start = 0 if i == 0 else start + num_rois[i - 1] - stop = start + num_pos_rois[i] - inds[start:stop] = 1 - sliced_feats = feats[inds] - return sliced_feats - - def _bbox_forward(self, - stage: int, - x: Tuple[Tensor], - rois: Tensor, - semantic_feat: Optional[Tensor] = None, - glbctx_feat: Optional[Tensor] = None) -> dict: - """Box head forward function used in both training and testing. - - Args: - stage (int): The current stage in Cascade RoI Head. - x (tuple[Tensor]): List of multi-level img features. - rois (Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. - semantic_feat (Tensor): Semantic feature. Defaults to None. - glbctx_feat (Tensor): Global context feature. Defaults to None. - - Returns: - dict[str, Tensor]: Usually returns a dictionary with keys: - - - `cls_score` (Tensor): Classification scores. - - `bbox_pred` (Tensor): Box energies / deltas. - - `bbox_feats` (Tensor): Extract bbox RoI features. - """ - bbox_roi_extractor = self.bbox_roi_extractor[stage] - bbox_head = self.bbox_head[stage] - bbox_feats = bbox_roi_extractor(x[:bbox_roi_extractor.num_inputs], - rois) - if self.with_semantic and semantic_feat is not None: - bbox_semantic_feat = self.semantic_roi_extractor([semantic_feat], - rois) - if bbox_semantic_feat.shape[-2:] != bbox_feats.shape[-2:]: - bbox_semantic_feat = adaptive_avg_pool2d( - bbox_semantic_feat, bbox_feats.shape[-2:]) - bbox_feats += bbox_semantic_feat - if self.with_glbctx and glbctx_feat is not None: - bbox_feats = self._fuse_glbctx(bbox_feats, glbctx_feat, rois) - cls_score, bbox_pred, relayed_feat = bbox_head( - bbox_feats, return_shared_feat=True) - - bbox_results = dict( - cls_score=cls_score, - bbox_pred=bbox_pred, - relayed_feat=relayed_feat) - return bbox_results - - def _mask_forward(self, - x: Tuple[Tensor], - rois: Tensor, - semantic_feat: Optional[Tensor] = None, - glbctx_feat: Optional[Tensor] = None, - relayed_feat: Optional[Tensor] = None) -> dict: - """Mask head forward function used in both training and testing. - - Args: - stage (int): The current stage in Cascade RoI Head. - x (tuple[Tensor]): Tuple of multi-level img features. - rois (Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. - semantic_feat (Tensor): Semantic feature. Defaults to None. - glbctx_feat (Tensor): Global context feature. Defaults to None. - relayed_feat (Tensor): Relayed feature. Defaults to None. - - Returns: - dict: Usually returns a dictionary with keys: - - - `mask_preds` (Tensor): Mask prediction. 
- """ - mask_feats = self.mask_roi_extractor( - x[:self.mask_roi_extractor.num_inputs], rois) - if self.with_semantic and semantic_feat is not None: - mask_semantic_feat = self.semantic_roi_extractor([semantic_feat], - rois) - if mask_semantic_feat.shape[-2:] != mask_feats.shape[-2:]: - mask_semantic_feat = F.adaptive_avg_pool2d( - mask_semantic_feat, mask_feats.shape[-2:]) - mask_feats += mask_semantic_feat - if self.with_glbctx and glbctx_feat is not None: - mask_feats = self._fuse_glbctx(mask_feats, glbctx_feat, rois) - if self.with_feat_relay and relayed_feat is not None: - mask_feats = mask_feats + relayed_feat - mask_preds = self.mask_head(mask_feats) - mask_results = dict(mask_preds=mask_preds) - - return mask_results - - def bbox_loss(self, - stage: int, - x: Tuple[Tensor], - sampling_results: List[SamplingResult], - semantic_feat: Optional[Tensor] = None, - glbctx_feat: Optional[Tensor] = None) -> dict: - """Run forward function and calculate loss for box head in training. - - Args: - stage (int): The current stage in Cascade RoI Head. - x (tuple[Tensor]): List of multi-level img features. - sampling_results (list["obj:`SamplingResult`]): Sampling results. - semantic_feat (Tensor): Semantic feature. Defaults to None. - glbctx_feat (Tensor): Global context feature. Defaults to None. - - Returns: - dict: Usually returns a dictionary with keys: - - - `cls_score` (Tensor): Classification scores. - - `bbox_pred` (Tensor): Box energies / deltas. - - `bbox_feats` (Tensor): Extract bbox RoI features. - - `loss_bbox` (dict): A dictionary of bbox loss components. - - `rois` (Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. - - `bbox_targets` (tuple): Ground truth for proposals in a - single image. Containing the following list of Tensors: - (labels, label_weights, bbox_targets, bbox_weights) - """ - bbox_head = self.bbox_head[stage] - rois = bbox2roi([res.priors for res in sampling_results]) - bbox_results = self._bbox_forward( - stage, - x, - rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat) - bbox_results.update(rois=rois) - - bbox_loss_and_target = bbox_head.loss_and_target( - cls_score=bbox_results['cls_score'], - bbox_pred=bbox_results['bbox_pred'], - rois=rois, - sampling_results=sampling_results, - rcnn_train_cfg=self.train_cfg[stage]) - - bbox_results.update(bbox_loss_and_target) - return bbox_results - - def mask_loss(self, - x: Tuple[Tensor], - sampling_results: List[SamplingResult], - batch_gt_instances: InstanceList, - semantic_feat: Optional[Tensor] = None, - glbctx_feat: Optional[Tensor] = None, - relayed_feat: Optional[Tensor] = None) -> dict: - """Run forward function and calculate loss for mask head in training. - - Args: - x (tuple[Tensor]): Tuple of multi-level img features. - sampling_results (list["obj:`SamplingResult`]): Sampling results. - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes``, ``labels``, and - ``masks`` attributes. - semantic_feat (Tensor): Semantic feature. Defaults to None. - glbctx_feat (Tensor): Global context feature. Defaults to None. - relayed_feat (Tensor): Relayed feature. Defaults to None. - - Returns: - dict: Usually returns a dictionary with keys: - - - `mask_preds` (Tensor): Mask prediction. - - `loss_mask` (dict): A dictionary of mask loss components. 
- """ - pos_rois = bbox2roi([res.pos_priors for res in sampling_results]) - mask_results = self._mask_forward( - x, - pos_rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat, - relayed_feat=relayed_feat) - - mask_loss_and_target = self.mask_head.loss_and_target( - mask_preds=mask_results['mask_preds'], - sampling_results=sampling_results, - batch_gt_instances=batch_gt_instances, - rcnn_train_cfg=self.train_cfg[-1]) - mask_results.update(mask_loss_and_target) - - return mask_results - - def semantic_loss(self, x: Tuple[Tensor], - batch_data_samples: SampleList) -> dict: - """Semantic segmentation loss. - - Args: - x (Tuple[Tensor]): Tuple of multi-level img features. - batch_data_samples (list[:obj:`DetDataSample`]): The batch - data samples. It usually includes information such - as `gt_instance` or `gt_panoptic_seg` or `gt_sem_seg`. - - Returns: - dict: Usually returns a dictionary with keys: - - - `semantic_feat` (Tensor): Semantic feature. - - `loss_seg` (dict): Semantic segmentation loss. - """ - gt_semantic_segs = [ - data_sample.gt_sem_seg.sem_seg - for data_sample in batch_data_samples - ] - gt_semantic_segs = torch.stack(gt_semantic_segs) - semantic_pred, semantic_feat = self.semantic_head(x) - loss_seg = self.semantic_head.loss(semantic_pred, gt_semantic_segs) - - semantic_results = dict(loss_seg=loss_seg, semantic_feat=semantic_feat) - - return semantic_results - - def global_context_loss(self, x: Tuple[Tensor], - batch_gt_instances: InstanceList) -> dict: - """Global context loss. - - Args: - x (Tuple[Tensor]): Tuple of multi-level img features. - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes``, ``labels``, and - ``masks`` attributes. - - Returns: - dict: Usually returns a dictionary with keys: - - - `glbctx_feat` (Tensor): Global context feature. - - `loss_glbctx` (dict): Global context loss. - """ - gt_labels = [ - gt_instances.labels for gt_instances in batch_gt_instances - ] - mc_pred, glbctx_feat = self.glbctx_head(x) - loss_glbctx = self.glbctx_head.loss(mc_pred, gt_labels) - global_context_results = dict( - loss_glbctx=loss_glbctx, glbctx_feat=glbctx_feat) - - return global_context_results - - def loss(self, x: Tensor, rpn_results_list: InstanceList, - batch_data_samples: SampleList) -> dict: - """Perform forward propagation and loss calculation of the detection - roi on the features of the upstream network. - - Args: - x (tuple[Tensor]): List of multi-level img features. - rpn_results_list (list[:obj:`InstanceData`]): List of region - proposals. - batch_data_samples (list[:obj:`DetDataSample`]): The batch - data samples. It usually includes information such - as `gt_instance` or `gt_panoptic_seg` or `gt_sem_seg`. 
- - Returns: - dict[str, Tensor]: A dictionary of loss components - """ - assert len(rpn_results_list) == len(batch_data_samples) - outputs = unpack_gt_instances(batch_data_samples) - batch_gt_instances, batch_gt_instances_ignore, batch_img_metas \ - = outputs - - losses = dict() - - # semantic segmentation branch - if self.with_semantic: - semantic_results = self.semantic_loss( - x=x, batch_data_samples=batch_data_samples) - losses['loss_semantic_seg'] = semantic_results['loss_seg'] - semantic_feat = semantic_results['semantic_feat'] - else: - semantic_feat = None - - # global context branch - if self.with_glbctx: - global_context_results = self.global_context_loss( - x=x, batch_gt_instances=batch_gt_instances) - losses['loss_glbctx'] = global_context_results['loss_glbctx'] - glbctx_feat = global_context_results['glbctx_feat'] - else: - glbctx_feat = None - - results_list = rpn_results_list - num_imgs = len(batch_img_metas) - for stage in range(self.num_stages): - stage_loss_weight = self.stage_loss_weights[stage] - - # assign gts and sample proposals - sampling_results = [] - bbox_assigner = self.bbox_assigner[stage] - bbox_sampler = self.bbox_sampler[stage] - for i in range(num_imgs): - results = results_list[i] - # rename rpn_results.bboxes to rpn_results.priors - results.priors = results.pop('bboxes') - - assign_result = bbox_assigner.assign( - results, batch_gt_instances[i], - batch_gt_instances_ignore[i]) - sampling_result = bbox_sampler.sample( - assign_result, - results, - batch_gt_instances[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - # bbox head forward and loss - bbox_results = self.bbox_loss( - stage=stage, - x=x, - sampling_results=sampling_results, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat) - - for name, value in bbox_results['loss_bbox'].items(): - losses[f's{stage}.{name}'] = ( - value * stage_loss_weight if 'loss' in name else value) - - # refine bboxes - if stage < self.num_stages - 1: - bbox_head = self.bbox_head[stage] - with torch.no_grad(): - results_list = bbox_head.refine_bboxes( - sampling_results=sampling_results, - bbox_results=bbox_results, - batch_img_metas=batch_img_metas) - - if self.with_feat_relay: - relayed_feat = self._slice_pos_feats(bbox_results['relayed_feat'], - sampling_results) - relayed_feat = self.feat_relay_head(relayed_feat) - else: - relayed_feat = None - - # mask head forward and loss - mask_results = self.mask_loss( - x=x, - sampling_results=sampling_results, - batch_gt_instances=batch_gt_instances, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat, - relayed_feat=relayed_feat) - mask_stage_loss_weight = sum(self.stage_loss_weights) - losses['loss_mask'] = mask_stage_loss_weight * mask_results[ - 'loss_mask']['loss_mask'] - - return losses - - def predict(self, - x: Tuple[Tensor], - rpn_results_list: InstanceList, - batch_data_samples: SampleList, - rescale: bool = False) -> InstanceList: - """Perform forward propagation of the roi head and predict detection - results on the features of the upstream network. - - Args: - x (tuple[Tensor]): Features from upstream network. Each - has shape (N, C, H, W). - rpn_results_list (list[:obj:`InstanceData`]): list of region - proposals. - batch_data_samples (List[:obj:`DetDataSample`]): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - rescale (bool): Whether to rescale the results to - the original image. Defaults to False. 
- - Returns: - list[obj:`InstanceData`]: Detection results of each image. - Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). - - masks (Tensor): Has a shape (num_instances, H, W). - """ - assert self.with_bbox, 'Bbox head must be implemented.' - batch_img_metas = [ - data_samples.metainfo for data_samples in batch_data_samples - ] - - if self.with_semantic: - _, semantic_feat = self.semantic_head(x) - else: - semantic_feat = None - - if self.with_glbctx: - _, glbctx_feat = self.glbctx_head(x) - else: - glbctx_feat = None - - # TODO: nms_op in mmcv need be enhanced, the bbox result may get - # difference when not rescale in bbox_head - - # If it has the mask branch, the bbox branch does not need - # to be scaled to the original image scale, because the mask - # branch will scale both bbox and mask at the same time. - bbox_rescale = rescale if not self.with_mask else False - results_list = self.predict_bbox( - x=x, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat, - batch_img_metas=batch_img_metas, - rpn_results_list=rpn_results_list, - rcnn_test_cfg=self.test_cfg, - rescale=bbox_rescale) - - if self.with_mask: - results_list = self.predict_mask( - x=x, - semantic_heat=semantic_feat, - glbctx_feat=glbctx_feat, - batch_img_metas=batch_img_metas, - results_list=results_list, - rescale=rescale) - - return results_list - - def predict_mask(self, - x: Tuple[Tensor], - semantic_heat: Tensor, - glbctx_feat: Tensor, - batch_img_metas: List[dict], - results_list: List[InstanceData], - rescale: bool = False) -> List[InstanceData]: - """Perform forward propagation of the mask head and predict detection - results on the features of the upstream network. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - semantic_feat (Tensor): Semantic feature. - glbctx_feat (Tensor): Global context feature. - batch_img_metas (list[dict]): List of image information. - results_list (list[:obj:`InstanceData`]): Detection results of - each image. - rescale (bool): If True, return boxes in original image space. - Defaults to False. - - Returns: - list[:obj:`InstanceData`]: Detection results of each image - after the post process. - Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). - - masks (Tensor): Has a shape (num_instances, H, W). 
- """ - bboxes = [res.bboxes for res in results_list] - mask_rois = bbox2roi(bboxes) - if mask_rois.shape[0] == 0: - results_list = empty_instances( - batch_img_metas=batch_img_metas, - device=mask_rois.device, - task_type='mask', - instance_results=results_list, - mask_thr_binary=self.test_cfg.mask_thr_binary) - return results_list - - bboxes_results = self._bbox_forward( - stage=-1, - x=x, - rois=mask_rois, - semantic_feat=semantic_heat, - glbctx_feat=glbctx_feat) - relayed_feat = bboxes_results['relayed_feat'] - relayed_feat = self.feat_relay_head(relayed_feat) - - mask_results = self._mask_forward( - x=x, - rois=mask_rois, - semantic_feat=semantic_heat, - glbctx_feat=glbctx_feat, - relayed_feat=relayed_feat) - mask_preds = mask_results['mask_preds'] - - # split batch mask prediction back to each image - num_bbox_per_img = tuple(len(_bbox) for _bbox in bboxes) - mask_preds = mask_preds.split(num_bbox_per_img, 0) - - results_list = self.mask_head.predict_by_feat( - mask_preds=mask_preds, - results_list=results_list, - batch_img_metas=batch_img_metas, - rcnn_test_cfg=self.test_cfg, - rescale=rescale) - - return results_list - - def forward(self, x: Tuple[Tensor], rpn_results_list: InstanceList, - batch_data_samples: SampleList) -> tuple: - """Network forward process. Usually includes backbone, neck and head - forward without any post-processing. - - Args: - x (List[Tensor]): Multi-level features that may have different - resolutions. - rpn_results_list (list[:obj:`InstanceData`]): List of region - proposals. - batch_data_samples (list[:obj:`DetDataSample`]): Each item contains - the meta information of each image and corresponding - annotations. - - Returns - tuple: A tuple of features from ``bbox_head`` and ``mask_head`` - forward. - """ - results = () - batch_img_metas = [ - data_samples.metainfo for data_samples in batch_data_samples - ] - - if self.with_semantic: - _, semantic_feat = self.semantic_head(x) - else: - semantic_feat = None - - if self.with_glbctx: - _, glbctx_feat = self.glbctx_head(x) - else: - glbctx_feat = None - - proposals = [rpn_results.bboxes for rpn_results in rpn_results_list] - num_proposals_per_img = tuple(len(p) for p in proposals) - rois = bbox2roi(proposals) - # bbox head - if self.with_bbox: - rois, cls_scores, bbox_preds = self._refine_roi( - x=x, - rois=rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat, - batch_img_metas=batch_img_metas, - num_proposals_per_img=num_proposals_per_img) - results = results + (cls_scores, bbox_preds) - # mask head - if self.with_mask: - rois = torch.cat(rois) - bboxes_results = self._bbox_forward( - stage=-1, - x=x, - rois=rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat) - relayed_feat = bboxes_results['relayed_feat'] - relayed_feat = self.feat_relay_head(relayed_feat) - mask_results = self._mask_forward( - x=x, - rois=rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat, - relayed_feat=relayed_feat) - mask_preds = mask_results['mask_preds'] - mask_preds = mask_preds.split(num_proposals_per_img, 0) - results = results + (mask_preds, ) - return results diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/calc_rvc_model_similarity.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/calc_rvc_model_similarity.py deleted file mode 100644 index 42496e088e51dc5162d0714470c2226f696e260c..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/calc_rvc_model_similarity.py +++ /dev/null @@ -1,96 +0,0 @@ -# This code references 
https://huggingface.co/JosephusCheung/ASimilarityCalculatior/blob/main/qwerty.py -# Fill in the path of the model to be queried and the root directory of the reference models, and this script will return the similarity between the model to be queried and all reference models. -import os -import logging - -logger = logging.getLogger(__name__) - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def cal_cross_attn(to_q, to_k, to_v, rand_input): - hidden_dim, embed_dim = to_q.shape - attn_to_q = nn.Linear(hidden_dim, embed_dim, bias=False) - attn_to_k = nn.Linear(hidden_dim, embed_dim, bias=False) - attn_to_v = nn.Linear(hidden_dim, embed_dim, bias=False) - attn_to_q.load_state_dict({"weight": to_q}) - attn_to_k.load_state_dict({"weight": to_k}) - attn_to_v.load_state_dict({"weight": to_v}) - - return torch.einsum( - "ik, jk -> ik", - F.softmax( - torch.einsum("ij, kj -> ik", attn_to_q(rand_input), attn_to_k(rand_input)), - dim=-1, - ), - attn_to_v(rand_input), - ) - - -def model_hash(filename): - try: - with open(filename, "rb") as file: - import hashlib - - m = hashlib.sha256() - - file.seek(0x100000) - m.update(file.read(0x10000)) - return m.hexdigest()[0:8] - except FileNotFoundError: - return "NOFILE" - - -def eval(model, n, input): - qk = f"enc_p.encoder.attn_layers.{n}.conv_q.weight" - uk = f"enc_p.encoder.attn_layers.{n}.conv_k.weight" - vk = f"enc_p.encoder.attn_layers.{n}.conv_v.weight" - atoq, atok, atov = model[qk][:, :, 0], model[uk][:, :, 0], model[vk][:, :, 0] - - attn = cal_cross_attn(atoq, atok, atov, input) - return attn - - -def main(path, root): - torch.manual_seed(114514) - model_a = torch.load(path, map_location="cpu")["weight"] - - logger.info("Query:\t\t%s\t%s" % (path, model_hash(path))) - - map_attn_a = {} - map_rand_input = {} - for n in range(6): - hidden_dim, embed_dim, _ = model_a[ - f"enc_p.encoder.attn_layers.{n}.conv_v.weight" - ].shape - rand_input = torch.randn([embed_dim, hidden_dim]) - - map_attn_a[n] = eval(model_a, n, rand_input) - map_rand_input[n] = rand_input - - del model_a - - for name in sorted(list(os.listdir(root))): - path = "%s/%s" % (root, name) - model_b = torch.load(path, map_location="cpu")["weight"] - - sims = [] - for n in range(6): - attn_a = map_attn_a[n] - attn_b = eval(model_b, n, map_rand_input[n]) - - sim = torch.mean(torch.cosine_similarity(attn_a, attn_b)) - sims.append(sim) - - logger.info( - "Reference:\t%s\t%s\t%s" - % (path, model_hash(path), f"{torch.mean(torch.stack(sims)) * 1e2:.2f}%") - ) - - -if __name__ == "__main__": - query_path = r"assets\weights\mi v3.pth" - reference_root = r"assets\weights" - main(query_path, reference_root) diff --git a/spaces/Liu-LAB/GPT-academic/docs/README_EN.md b/spaces/Liu-LAB/GPT-academic/docs/README_EN.md deleted file mode 100644 index 02b8588c38f1b52228840b882e509064daecb3f0..0000000000000000000000000000000000000000 --- a/spaces/Liu-LAB/GPT-academic/docs/README_EN.md +++ /dev/null @@ -1,322 +0,0 @@ -> **Note** -> -> This English README is automatically generated by the markdown translation plugin in this project, and may not be 100% correct. -> -> When installing dependencies, **please strictly select the versions** specified in requirements.txt. -> -> `pip install -r requirements.txt` - -# GPT Academic Optimization (GPT Academic) - -**If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. 
-To translate this project to arbitrary languages with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).**
-
-> Note:
->
-> 1. Please note that only the function plugins (buttons) marked in **red** support reading files. Some plugins are in the **drop-down menu** in the plugin area. We welcome and process any new plugins with the **highest priority**!
-> 2. The function of each file in this project is detailed in the self-translation analysis [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As the project evolves, you can also click on related function plugins at any time to call GPT to regenerate the project's self-analysis report. Common questions are summarized in the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation method](#installation).
-> 3. This project is compatible with and encourages trying domestic large language models such as ChatGLM, RWKV, PanGu, etc. Multiple API keys are supported and can be filled in the configuration file like `API_KEY="openai-key1,openai-key2,api2d-key3"`. To change `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter to submit; it takes effect immediately.
-
-
- -Function | Description ---- | --- -One-click polishing | Supports one-click polishing and one-click searching for grammar errors in papers. -One-click Chinese-English translation | One-click Chinese-English translation. -One-click code interpretation | Displays, explains, generates, and adds comments to code. -[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys. -Modular design | Supports custom powerful [function plug-ins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions), plug-ins support [hot update](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). -[Self-program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] [One-click understanding](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of the source code of this project -[Program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] One-click profiling of other project trees in Python/C/C++/Java/Lua/... -Reading papers, [translating](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Function Plug-in] One-click interpretation of latex/pdf full-text papers and generation of abstracts. -Latex full-text [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plug-in] One-click translation or polishing of latex papers. -Batch annotation generation | [Function plug-in] One-click batch generation of function annotations. -Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plug-in] Have you seen the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in the five languages above? -Chat analysis report generation | [Function plug-in] Automatically generate summary reports after running. -[PDF full-text translation function](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plug-in] PDF paper extract title & summary + translate full text (multi-threaded) -[Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plug-in] Enter the arxiv article url and you can translate abstracts and download PDFs with one click. -[Google Scholar Integration Assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plug-in] Given any Google Scholar search page URL, let GPT help you [write relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/) -Internet information aggregation+GPT | [Function plug-in] One-click [let GPT get information from the Internet first](https://www.bilibili.com/video/BV1om4y127ck), then answer questions, and let the information never be outdated. -Formula/image/table display | Can display formulas in both [tex form and render form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), support formulas and code highlighting. -Multi-threaded function plug-in support | Supports multi-threaded calling of chatgpt, and can process [massive text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs with one click. -Start Dark Gradio [theme](https://github.com/binary-husky/gpt_academic/issues/173) | Add ```/?__theme=dark``` after the browser URL to switch to the dark theme. 
-[Multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | The feeling of being served by GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must be great, right? -More LLM model access, support [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Add Newbing interface (New Bing), introduce Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) to support [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Panguα](https://openi.org.cn/pangu/) -More new feature displays (image generation, etc.)…… | See the end of this document for more... -
- -- New interface (modify the LAYOUT option in `config.py` to switch between "left and right layout" and "up and down layout") -
- -
- All buttons are dynamically generated by reading `functional.py`, and you can add custom functions freely to unleash the power of clipboard. -
- -
- -- polishing/correction -
- -
- -- If the output contains formulas, they will be displayed in both `tex` and render form, making it easy to copy and read. -
- -
- -- Tired of reading the project code? ChatGPT can explain it all. -
- -
- -- Multiple large language models are mixed, such as ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4. -
- -
- ---- -# Installation -## Method 1: Directly running (Windows, Linux or MacOS) - -1. Download the project -```sh -git clone https://github.com/binary-husky/gpt_academic.git -cd gpt_academic -``` - -2. Configure the API_KEY - -Configure the API KEY in `config.py`, [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1). - -(P.S. When the program is running, it will first check if there is a private configuration file named `config_private.py` and use the configurations in it to override the same configurations in `config.py`. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py` and transfer (copy) the configurations in `config.py` to `config_private.py`. `config_private.py` is not controlled by git and can make your private information more secure. P.S. The project also supports configuring most options through `environment variables`. Please refer to the format of `docker-compose` file when writing. Reading priority: `environment variables` > `config_private.py` > `config.py`) - - -3. Install the dependencies -```sh -# (Option I: If familiar with python) (python version 3.9 or above, the newer the better), note: use official pip source or Ali pip source, temporary switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -python -m pip install -r requirements.txt - -# (Option II: If not familiar with python) Use anaconda, the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr): -conda create -n gptac_venv python=3.11 # create anaconda environment -conda activate gptac_venv # activate anaconda environment -python -m pip install -r requirements.txt # this step is the same as pip installation -``` - -
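-
-As a rough illustration of the reading priority described in step 2 above (`environment variables` > `config_private.py` > `config.py`), an option could be resolved as follows. This is only a minimal sketch with a hypothetical `get_conf` helper, not the project's actual loader:
-
-```python
-import os
-
-import config  # the tracked defaults
-
-try:
-    import config_private  # optional, git-ignored overrides
-except ImportError:
-    config_private = None
-
-
-def get_conf(name: str):
-    """Resolve one option: environment variable > config_private.py > config.py."""
-    if name in os.environ:  # highest priority
-        return os.environ[name]
-    if config_private is not None and hasattr(config_private, name):
-        return getattr(config_private, name)
-    return getattr(config, name)  # fall back to the tracked defaults
-
-
-# e.g. get_conf("API_KEY") picks up an exported API_KEY before any file-based value
-```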
If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, click to expand -

- -[Optional step] If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, you need to install more dependencies (prerequisites: familiar with Python + used Pytorch + computer configuration is strong enough): -```sh -# [Optional Step I] Support Tsinghua ChatGLM. Tsinghua ChatGLM remarks: if you encounter the "Call ChatGLM fail cannot load ChatGLM parameters" error, refer to this: 1: The default installation above is torch + cpu version, to use cuda, you need to uninstall torch and reinstall torch + cuda; 2: If the model cannot be loaded due to insufficient local configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py, and change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code = True) -python -m pip install -r request_llm/requirements_chatglm.txt - -# [Optional Step II] Support Fudan MOSS -python -m pip install -r request_llm/requirements_moss.txt -git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # When executing this line of code, you must be in the root directory of the project - -# [Optional Step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file includes the expected models. Currently supported models are as follows (the jittorllms series only supports the docker solution for the time being): -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"] -``` - -
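-
-A quick way to check whether the CUDA build of torch mentioned in Optional Step I is actually active (plain PyTorch, nothing project-specific):
-
-```python
-import torch
-
-print(torch.__version__)          # pip CUDA builds usually carry a +cuXXX suffix
-print(torch.cuda.is_available())  # True means ChatGLM/MOSS can run on the GPU
-```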

-
-
-
-
-4. Run it
-```sh
-python main.py
-```
-
-5. Test Function Plugin
-```
-- Test function plugin template function (ask GPT what happened today in history); use it as a template to implement more complex functions
-  Click "[Function Plugin Template Demo] Today in History"
-```
-
-## Installation - Method 2: Using Docker
-
-1. ChatGPT Only (Recommended for Most People)
-
-``` sh
-git clone https://github.com/binary-husky/gpt_academic.git  # Download project
-cd gpt_academic  # Enter path
-nano config.py  # Edit config.py with any text editor, configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923), etc.
-docker build -t gpt-academic .  # Install
-
-#(Last step - option 1) In a Linux environment, use `--net=host` for convenience and speed.
-docker run --rm -it --net=host gpt-academic
-#(Last step - option 2) On macOS/Windows, only the -p option can be used to expose the container's port (e.g. 50923) to the port of the main machine.
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (Requires Docker Knowledge)
-
-``` sh
-# Modify docker-compose.yml, delete Plan 1 and Plan 3, and keep Plan 2. Modify the configuration of Plan 2 in docker-compose.yml, refer to the comments in it for configuration.
-docker-compose up
-```
-
-3. ChatGPT + LLAMA + Pangu + RWKV (Requires Docker Knowledge)
-
-``` sh
-# Modify docker-compose.yml, delete Plan 1 and Plan 2, and keep Plan 3. Modify the configuration of Plan 3 in docker-compose.yml, refer to the comments in it for configuration.
-docker-compose up
-```
-
-## Installation - Method 3: Other Deployment Options
-
-1. How to Use Reverse Proxy URL/Microsoft Cloud Azure API
-Configure API_URL_REDIRECT according to the instructions in 'config.py'.
-
-2. Deploy to a Remote Server (Requires Knowledge and Experience with Cloud Servers)
-Please visit [Deployment Wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-3. Using WSL2 (Windows Subsystem for Linux)
-Please visit [Deployment Wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-4. How to Run Under a Subdomain (e.g. `http://localhost/subpath`)
-Please visit [FastAPI Running Instructions](docs/WithFastapi.md)
-
-5. Using docker-compose to Run
-Read the docker-compose.yml and follow the prompts.
-
----
-# Advanced Usage
-## Custom New Shortcut Buttons / Custom Function Plugins
-
-1. Custom New Shortcut Buttons (Academic Hotkey)
-Open `core_functional.py` with any text editor, add an entry as follows and restart the program. (If the button has been successfully added and is visible, the prefix and suffix can be hot-modified without having to restart the program.)
-For example (the sketch below shows how such an entry is applied):
-```
-"Super English-to-Chinese": {
-    # Prefix, which will be added before your input. For example, used to describe your requests, such as translation, code explanation, polishing, etc.
-    "Prefix": "Please translate the following content into Chinese and then use a markdown table to explain the proprietary terms that appear in the text:\n\n",
-
-    # Suffix, which is added after your input. For example, with the prefix, your input content can be surrounded by quotes.
-    "Suffix": "",
-},
-```
-
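-
-Conceptually, each such entry just wraps whatever is in the input box before it is sent to the model. A schematic sketch (hypothetical `build_prompt` helper, not the project's actual dispatch code):
-
-```python
-entry = {
-    "Prefix": "Please translate the following content into Chinese ...:\n\n",
-    "Suffix": "",
-}
-
-
-def build_prompt(user_input: str, entry: dict) -> str:
-    # the text sent to the model is simply Prefix + input + Suffix
-    return entry["Prefix"] + user_input + entry["Suffix"]
-```
-
-Because only these two strings are read when the button is clicked, editing `Prefix`/`Suffix` takes effect without a restart, while adding a brand-new button requires one, matching the note above.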
- -
- -2. Custom Function Plugins - -Write powerful function plugins to perform any task you can think of, even those you cannot think of. -The difficulty of plugin writing and debugging in this project is very low. As long as you have a certain knowledge of Python, you can implement your own plug-in functions based on the template we provide. -For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). - ---- -# Latest Update -## New Feature Dynamics -1. Conversation saving function. Call `Save current conversation` in the function plugin area to save the current conversation as a readable and recoverable HTML file. In addition, call `Load conversation history archive` in the function plugin area (dropdown menu) to restore previous sessions. Tip: Clicking `Load conversation history archive` without specifying a file will display the cached history of HTML archives, and clicking `Delete all local conversation history` will delete all HTML archive caches. - -
- -
- - -2. Report generation. Most plugins will generate work reports after execution. - -
- - - -
- - -3. Modular function design with simple interfaces that support powerful functions. - -
- - -
- - -4. This is an open-source project that can "self-translate". - -
- -
- -5. Translating other open-source projects is a piece of cake. - -
- -
- -
- -
- -6. A small feature decorated with [live2d](https://github.com/fghrsh/live2d_demo) (disabled by default, need to modify `config.py`). - -
- -
- -7. Added MOSS large language model support. -
- -
- -8. OpenAI image generation. -
- -
- -9. OpenAI audio parsing and summarization. -
- -
- -10. Full-text proofreading and error correction of LaTeX. -
- -
- - -## Versions: -- version 3.5(Todo): Use natural language to call all function plugins of this project (high priority). -- version 3.4(Todo): Improve multi-threading support for chatglm local large models. -- version 3.3: +Internet information integration function. -- version 3.2: Function plugin supports more parameter interfaces (save conversation function, interpretation of any language code + simultaneous inquiry of any LLM combination). -- version 3.1: Support simultaneous inquiry of multiple GPT models! Support api2d, and support load balancing of multiple apikeys. -- version 3.0: Support chatglm and other small LLM models. -- version 2.6: Refactored plugin structure, improved interactivity, and added more plugins. -- version 2.5: Self-updating, solving the problem of text overflow and token overflow when summarizing large engineering source codes. -- version 2.4: (1) Added PDF full-text translation function; (2) Added the function of switching the position of the input area; (3) Added vertical layout option; (4) Optimized multi-threading function plugins. -- version 2.3: Enhanced multi-threading interactivity. -- version 2.2: Function plugin supports hot reloading. -- version 2.1: Collapsible layout. -- version 2.0: Introduction of modular function plugins. -- version 1.0: Basic functions. - -gpt_academic Developer QQ Group-2: 610599535 - -- Known Issues - - Some browser translation plugins interfere with the front-end operation of this software. - - Both high and low versions of gradio can lead to various exceptions. - -## Reference and Learning - -``` -Many other excellent designs have been referenced in the code, mainly including: - -# Project 1: THU ChatGLM-6B: -https://github.com/THUDM/ChatGLM-6B - -# Project 2: THU JittorLLMs: -https://github.com/Jittor/JittorLLMs - -# Project 3: Edge-GPT: -https://github.com/acheong08/EdgeGPT - -# Project 4: ChuanhuChatGPT: -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# Project 5: ChatPaper: -https://github.com/kaixindelele/ChatPaper - -# More: -https://github.com/gradio-app/gradio -https://github.com/fghrsh/live2d_demo -``` \ No newline at end of file diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/metrics.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/metrics.py deleted file mode 100644 index 9944feb1cf76cfb8707122c7a6ea7a830c02070a..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/metrics.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import numpy as np - -from ..utils import misc - - -class TrainMetric(object): - def __init__(self, pred_outputs, gt_outputs): - self.pred_outputs = pred_outputs - self.gt_outputs = gt_outputs - - def update(self, *args, **kwargs): - raise NotImplementedError - - def get_epoch_value(self): - raise NotImplementedError - - def reset_epoch_stats(self): - raise NotImplementedError - - def log_states(self, sw, tag_prefix, global_step): - pass - - @property - def name(self): - return type(self).__name__ - - -class AdaptiveIoU(TrainMetric): - def __init__(self, init_thresh=0.4, thresh_step=0.025, thresh_beta=0.99, iou_beta=0.9, - ignore_label=-1, from_logits=True, - pred_output='instances', gt_output='instances'): - super().__init__(pred_outputs=(pred_output,), gt_outputs=(gt_output,)) - self._ignore_label = ignore_label - 
self._from_logits = from_logits - self._iou_thresh = init_thresh - self._thresh_step = thresh_step - self._thresh_beta = thresh_beta - self._iou_beta = iou_beta - self._ema_iou = 0.0 - self._epoch_iou_sum = 0.0 - self._epoch_batch_count = 0 - - def update(self, pred, gt): - gt_mask = gt > 0 - if self._from_logits: - pred = torch.sigmoid(pred) - - gt_mask_area = torch.sum(gt_mask, dim=(1, 2)).detach().cpu().numpy() - if np.all(gt_mask_area == 0): - return - - ignore_mask = gt == self._ignore_label - max_iou = _compute_iou(pred > self._iou_thresh, gt_mask, ignore_mask).mean() - best_thresh = self._iou_thresh - for t in [best_thresh - self._thresh_step, best_thresh + self._thresh_step]: - temp_iou = _compute_iou(pred > t, gt_mask, ignore_mask).mean() - if temp_iou > max_iou: - max_iou = temp_iou - best_thresh = t - - self._iou_thresh = self._thresh_beta * self._iou_thresh + (1 - self._thresh_beta) * best_thresh - self._ema_iou = self._iou_beta * self._ema_iou + (1 - self._iou_beta) * max_iou - self._epoch_iou_sum += max_iou - self._epoch_batch_count += 1 - - def get_epoch_value(self): - if self._epoch_batch_count > 0: - return self._epoch_iou_sum / self._epoch_batch_count - else: - return 0.0 - - def reset_epoch_stats(self): - self._epoch_iou_sum = 0.0 - self._epoch_batch_count = 0 - - def log_states(self, sw, tag_prefix, global_step): - sw.add_scalar(tag=tag_prefix + '_ema_iou', value=self._ema_iou, global_step=global_step) - sw.add_scalar(tag=tag_prefix + '_iou_thresh', value=self._iou_thresh, global_step=global_step) - - @property - def iou_thresh(self): - return self._iou_thresh - - -def _compute_iou(pred_mask, gt_mask, ignore_mask=None, keep_ignore=False): - if ignore_mask is not None: - pred_mask = torch.where(ignore_mask, torch.zeros_like(pred_mask), pred_mask) - - reduction_dims = misc.get_dims_with_exclusion(gt_mask.dim(), 0) - union = torch.mean((pred_mask | gt_mask).float(), dim=reduction_dims).detach().cpu().numpy() - intersection = torch.mean((pred_mask & gt_mask).float(), dim=reduction_dims).detach().cpu().numpy() - nonzero = union > 0 - - iou = intersection[nonzero] / union[nonzero] - if not keep_ignore: - return iou - else: - result = np.full_like(intersection, -1) - result[nonzero] = iou - return result diff --git a/spaces/MakiAi/Image2VideoProcessingPipelin/act.bat b/spaces/MakiAi/Image2VideoProcessingPipelin/act.bat deleted file mode 100644 index 2af4794c64f82b286ef87c38e70f8a3533a5b561..0000000000000000000000000000000000000000 --- a/spaces/MakiAi/Image2VideoProcessingPipelin/act.bat +++ /dev/null @@ -1 +0,0 @@ -conda activate vImage2MOV2 \ No newline at end of file diff --git a/spaces/Manjushri/Instruct-Pix-2-Pix/README.md b/spaces/Manjushri/Instruct-Pix-2-Pix/README.md deleted file mode 100644 index a3903df23e8e2c79943c168b78ccb038a036885c..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/Instruct-Pix-2-Pix/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Instruct Pix 2 Pix -emoji: 👁 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/tar_dataset.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/tar_dataset.py deleted file mode 100644 index 0605ba3a96ab80a1212fdb1a3860337d7e7b20cc..0000000000000000000000000000000000000000 --- 
a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/tar_dataset.py +++ /dev/null @@ -1,138 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -import os -import gzip -import numpy as np -import io -from PIL import Image -from torch.utils.data import Dataset - -try: - from PIL import UnidentifiedImageError - - unidentified_error_available = True -except ImportError: - # UnidentifiedImageError isn't available in older versions of PIL - unidentified_error_available = False - -class DiskTarDataset(Dataset): - def __init__(self, - tarfile_path='dataset/imagenet/ImageNet-21k/metadata/tar_files.npy', - tar_index_dir='dataset/imagenet/ImageNet-21k/metadata/tarindex_npy', - preload=False, - num_synsets="all"): - """ - - preload (bool): Recommend to set preload to False when using - - num_synsets (integer or string "all"): set to small number for debugging - will load subset of dataset - """ - tar_files = np.load(tarfile_path) - - chunk_datasets = [] - dataset_lens = [] - if isinstance(num_synsets, int): - assert num_synsets < len(tar_files) - tar_files = tar_files[:num_synsets] - for tar_file in tar_files: - dataset = _TarDataset(tar_file, tar_index_dir, preload=preload) - chunk_datasets.append(dataset) - dataset_lens.append(len(dataset)) - - self.chunk_datasets = chunk_datasets - self.dataset_lens = np.array(dataset_lens).astype(np.int32) - self.dataset_cumsums = np.cumsum(self.dataset_lens) - self.num_samples = sum(self.dataset_lens) - labels = np.zeros(self.dataset_lens.sum(), dtype=np.int64) - sI = 0 - for k in range(len(self.dataset_lens)): - assert (sI+self.dataset_lens[k]) <= len(labels), f"{k} {sI+self.dataset_lens[k]} vs. {len(labels)}" - labels[sI:(sI+self.dataset_lens[k])] = k - sI += self.dataset_lens[k] - self.labels = labels - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - assert index >= 0 and index < len(self) - # find the dataset file we need to go to - d_index = np.searchsorted(self.dataset_cumsums, index) - - # edge case, if index is at edge of chunks, move right - if index in self.dataset_cumsums: - d_index += 1 - - assert d_index == self.labels[index], f"{d_index} vs. 
{self.labels[index]} mismatch for {index}"
-
-        # change index to local dataset index
-        if d_index == 0:
-            local_index = index
-        else:
-            local_index = index - self.dataset_cumsums[d_index - 1]
-        data_bytes = self.chunk_datasets[d_index][local_index]
-        exception_to_catch = UnidentifiedImageError if unidentified_error_available else Exception
-        try:
-            image = Image.open(data_bytes).convert("RGB")
-        except exception_to_catch:
-            image = Image.fromarray(np.ones((224, 224, 3), dtype=np.uint8) * 128)
-            d_index = -1
-
-        # label is the dataset (synset) we indexed into
-        return image, d_index, index
-
-    def __repr__(self):
-        st = f"DiskTarDataset(subdatasets={len(self.dataset_lens)},samples={self.num_samples})"
-        return st
-
-class _TarDataset(object):
-
-    def __init__(self, filename, npy_index_dir, preload=False):
-        # translated from
-        # fbcode/experimental/deeplearning/matthijs/comp_descs/tardataset.lua
-        self.filename = filename
-        self.names = []
-        self.offsets = []
-        self.npy_index_dir = npy_index_dir
-        names, offsets = self.load_index()
-
-        self.num_samples = len(names)
-        if preload:
-            self.data = np.memmap(filename, mode='r', dtype='uint8')
-            self.offsets = offsets
-        else:
-            self.data = None
-
-
-    def __len__(self):
-        return self.num_samples
-
-    def load_index(self):
-        basename = os.path.basename(self.filename)
-        basename = os.path.splitext(basename)[0]
-        names = np.load(os.path.join(self.npy_index_dir, f"{basename}_names.npy"))
-        offsets = np.load(os.path.join(self.npy_index_dir, f"{basename}_offsets.npy"))
-        return names, offsets
-
-    def __getitem__(self, idx):
-        if self.data is None:
-            self.data = np.memmap(self.filename, mode='r', dtype='uint8')
-            _, self.offsets = self.load_index()
-
-        ofs = self.offsets[idx] * 512
-        fsize = 512 * (self.offsets[idx + 1] - self.offsets[idx])
-        data = self.data[ofs:ofs + fsize]
-
-        if data[:13].tobytes() == b'././@LongLink':
-            data = data[3 * 512:]
-        else:
-            data = data[512:]
-
-        # just to make it more fun a few JPEGs are GZIP compressed...
-        # catch this case
-        if tuple(data[:2]) == (0x1f, 0x8b):
-            s = io.BytesIO(data.tobytes())
-            g = gzip.GzipFile(None, 'r', 0, s)
-            sdata = g.read()
-        else:
-            sdata = data.tobytes()
-        return io.BytesIO(sdata)
\ No newline at end of file
diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/config/ai_config.py b/spaces/MetaWabbit/Auto-GPT/autogpt/config/ai_config.py
deleted file mode 100644
index d50c30beee9dc8009f63415378ae1c6a399f0037..0000000000000000000000000000000000000000
--- a/spaces/MetaWabbit/Auto-GPT/autogpt/config/ai_config.py
+++ /dev/null
@@ -1,121 +0,0 @@
-# sourcery skip: do-not-use-staticmethod
-"""
-A module that contains the AIConfig class object that contains the configuration
-"""
-from __future__ import annotations
-
-import os
-from typing import Type
-
-import yaml
-
-
-class AIConfig:
-    """
-    A class object that contains the configuration information for the AI
-
-    Attributes:
-        ai_name (str): The name of the AI.
-        ai_role (str): The description of the AI's role.
-        ai_goals (list): The list of objectives the AI is supposed to complete.
-    """
-
-    def __init__(
-        self, ai_name: str = "", ai_role: str = "", ai_goals: list | None = None
-    ) -> None:
-        """
-        Initialize a class instance
-
-        Parameters:
-            ai_name (str): The name of the AI.
-            ai_role (str): The description of the AI's role.
-            ai_goals (list): The list of objectives the AI is supposed to complete.
- Returns: - None - """ - if ai_goals is None: - ai_goals = [] - self.ai_name = ai_name - self.ai_role = ai_role - self.ai_goals = ai_goals - - # Soon this will go in a folder where it remembers more stuff about the run(s) - SAVE_FILE = os.path.join(os.path.dirname(__file__), "..", "ai_settings.yaml") - - @staticmethod - def load(config_file: str = SAVE_FILE) -> "AIConfig": - """ - Returns class object with parameters (ai_name, ai_role, ai_goals) loaded from - yaml file if yaml file exists, - else returns class with no parameters. - - Parameters: - config_file (int): The path to the config yaml file. - DEFAULT: "../ai_settings.yaml" - - Returns: - cls (object): An instance of given cls object - """ - - try: - with open(config_file, encoding="utf-8") as file: - config_params = yaml.load(file, Loader=yaml.FullLoader) - except FileNotFoundError: - config_params = {} - - ai_name = config_params.get("ai_name", "") - ai_role = config_params.get("ai_role", "") - ai_goals = config_params.get("ai_goals", []) - # type: Type[AIConfig] - return AIConfig(ai_name, ai_role, ai_goals) - - def save(self, config_file: str = SAVE_FILE) -> None: - """ - Saves the class parameters to the specified file yaml file path as a yaml file. - - Parameters: - config_file(str): The path to the config yaml file. - DEFAULT: "../ai_settings.yaml" - - Returns: - None - """ - - config = { - "ai_name": self.ai_name, - "ai_role": self.ai_role, - "ai_goals": self.ai_goals, - } - with open(config_file, "w", encoding="utf-8") as file: - yaml.dump(config, file, allow_unicode=True) - - def construct_full_prompt(self) -> str: - """ - Returns a prompt to the user with the class information in an organized fashion. - - Parameters: - None - - Returns: - full_prompt (str): A string containing the initial prompt for the user - including the ai_name, ai_role and ai_goals. - """ - - prompt_start = ( - "Your decisions must always be made independently without" - " seeking user assistance. Play to your strengths as an LLM and pursue" - " simple strategies with no legal complications." - "" - ) - - from autogpt.prompt import get_prompt - - # Construct full prompt - full_prompt = ( - f"You are {self.ai_name}, {self.ai_role}\n{prompt_start}\n\nGOALS:\n\n" - ) - for i, goal in enumerate(self.ai_goals): - full_prompt += f"{i+1}. 
{goal}\n" - - full_prompt += f"\n\n{get_prompt()}" - return full_prompt diff --git a/spaces/MirageML/sjc/sd1/main.py b/spaces/MirageML/sjc/sd1/main.py deleted file mode 100644 index 5194121ceae28fc81ba6894a7a06cf394ba9331b..0000000000000000000000000000000000000000 --- a/spaces/MirageML/sjc/sd1/main.py +++ /dev/null @@ -1,803 +0,0 @@ -import argparse, os, sys, datetime, glob, importlib, csv -import numpy as np -import time -import torch - -import torchvision -import pytorch_lightning as pl - -from packaging import version -from omegaconf import OmegaConf -from torch.utils.data import random_split, DataLoader, Dataset, Subset -from functools import partial -from PIL import Image - -from pytorch_lightning import seed_everything -from pytorch_lightning.trainer import Trainer -from pytorch_lightning.callbacks import ModelCheckpoint, Callback, LearningRateMonitor -from pytorch_lightning.utilities.distributed import rank_zero_only -from pytorch_lightning.utilities import rank_zero_info - -from ldm.data.base import Txt2ImgIterableBaseDataset -from ldm.util import instantiate_from_config - -from pdb import set_trace - -import warnings -warnings.filterwarnings("ignore", category=DeprecationWarning) -from transformers import logging -logging.set_verbosity_error() - -def load_model_from_config(config, ckpt, verbose=False): - print(f"Loading model from {ckpt}") - pl_sd = torch.load(ckpt, map_location="cpu") - sd = pl_sd["state_dict"] - config.model.params.ckpt_path = ckpt - model = instantiate_from_config(config.model) - m, u = model.load_state_dict(sd, strict=False) - if len(m) > 0 and verbose: - print("missing keys:") - print(m) - if len(u) > 0 and verbose: - print("unexpected keys:") - print(u) - - model.cuda() - return model - -def get_parser(**parser_kwargs): - def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ("yes", "true", "t", "y", "1"): - return True - elif v.lower() in ("no", "false", "f", "n", "0"): - return False - else: - raise argparse.ArgumentTypeError("Boolean value expected.") - - parser = argparse.ArgumentParser(**parser_kwargs) - parser.add_argument( - "-n", - "--name", - type=str, - const=True, - default="", - nargs="?", - help="postfix for logdir", - ) - parser.add_argument( - "-r", - "--resume", - type=str, - const=True, - default="", - nargs="?", - help="resume from logdir or checkpoint in logdir", - ) - parser.add_argument( - "-b", - "--base", - nargs="*", - metavar="base_config.yaml", - help="paths to base configs. Loaded from left-to-right. 
" - "Parameters can be overwritten or added with command-line options of the form `--key value`.", - default=list(), - ) - parser.add_argument( - "-t", - "--train", - type=str2bool, - const=True, - default=False, - nargs="?", - help="train", - ) - parser.add_argument( - "--no-test", - type=str2bool, - const=True, - default=False, - nargs="?", - help="disable test", - ) - parser.add_argument( - "-p", - "--project", - help="name of new or path to existing project" - ) - parser.add_argument( - "-d", - "--debug", - type=str2bool, - nargs="?", - const=True, - default=False, - help="enable post-mortem debugging", - ) - parser.add_argument( - "-s", - "--seed", - type=int, - default=23, - help="seed for seed_everything", - ) - parser.add_argument( - "-f", - "--postfix", - type=str, - default="", - help="post-postfix for default name", - ) - parser.add_argument( - "-l", - "--logdir", - type=str, - default="logs", - help="directory for logging dat shit", - ) - parser.add_argument( - "--scale_lr", - type=str2bool, - nargs="?", - const=True, - default=True, - help="scale base-lr by ngpu * batch_size * n_accumulate", - ) - - parser.add_argument( - "--datadir_in_name", - type=str2bool, - nargs="?", - const=True, - default=True, - help="Prepend the final directory in the data_root to the output directory name") - - parser.add_argument("--actual_resume", type=str, default="", help="Path to model to actually resume from") - parser.add_argument("--data_root", type=str, required=True, help="Path to directory with training images") - - parser.add_argument("--embedding_manager_ckpt", type=str, default="", help="Initialize embedding manager from a checkpoint") - parser.add_argument("--placeholder_tokens", type=str, nargs="+", default=["*"]) - - parser.add_argument("--init_word", type=str, help="Word to use as source for initial token embedding.") - - return parser - - -def nondefault_trainer_args(opt): - parser = argparse.ArgumentParser() - parser = Trainer.add_argparse_args(parser) - args = parser.parse_args([]) - return sorted(k for k in vars(args) if getattr(opt, k) != getattr(args, k)) - - -class WrappedDataset(Dataset): - """Wraps an arbitrary object with __len__ and __getitem__ into a pytorch dataset""" - - def __init__(self, dataset): - self.data = dataset - - def __len__(self): - return len(self.data) - - def __getitem__(self, idx): - return self.data[idx] - - -def worker_init_fn(_): - worker_info = torch.utils.data.get_worker_info() - - dataset = worker_info.dataset - worker_id = worker_info.id - - if isinstance(dataset, Txt2ImgIterableBaseDataset): - split_size = dataset.num_records // worker_info.num_workers - # reset num_records to the true number to retain reliable length information - dataset.sample_ids = dataset.valid_ids[worker_id * split_size:(worker_id + 1) * split_size] - current_id = np.random.choice(len(np.random.get_state()[1]), 1) - return np.random.seed(np.random.get_state()[1][current_id] + worker_id) - else: - return np.random.seed(np.random.get_state()[1][0] + worker_id) - - -class DataModuleFromConfig(pl.LightningDataModule): - def __init__(self, batch_size, train=None, validation=None, test=None, predict=None, - wrap=False, num_workers=None, shuffle_test_loader=False, use_worker_init_fn=False, - shuffle_val_dataloader=False): - super().__init__() - self.batch_size = batch_size - self.dataset_configs = dict() - self.num_workers = num_workers if num_workers is not None else batch_size * 2 - self.use_worker_init_fn = use_worker_init_fn - if train is not None: - 
self.dataset_configs["train"] = train - self.train_dataloader = self._train_dataloader - if validation is not None: - self.dataset_configs["validation"] = validation - self.val_dataloader = partial(self._val_dataloader, shuffle=shuffle_val_dataloader) - if test is not None: - self.dataset_configs["test"] = test - self.test_dataloader = partial(self._test_dataloader, shuffle=shuffle_test_loader) - if predict is not None: - self.dataset_configs["predict"] = predict - self.predict_dataloader = self._predict_dataloader - self.wrap = wrap - - def prepare_data(self): - for data_cfg in self.dataset_configs.values(): - instantiate_from_config(data_cfg) - - def setup(self, stage=None): - self.datasets = dict( - (k, instantiate_from_config(self.dataset_configs[k])) - for k in self.dataset_configs) - if self.wrap: - for k in self.datasets: - self.datasets[k] = WrappedDataset(self.datasets[k]) - - def _train_dataloader(self): - is_iterable_dataset = isinstance(self.datasets['train'], Txt2ImgIterableBaseDataset) - if is_iterable_dataset or self.use_worker_init_fn: - init_fn = worker_init_fn - else: - init_fn = None - return DataLoader(self.datasets["train"], batch_size=self.batch_size, - num_workers=self.num_workers, shuffle=False if is_iterable_dataset else True, - worker_init_fn=init_fn) - - def _val_dataloader(self, shuffle=False): - if isinstance(self.datasets['validation'], Txt2ImgIterableBaseDataset) or self.use_worker_init_fn: - init_fn = worker_init_fn - else: - init_fn = None - return DataLoader(self.datasets["validation"], - batch_size=self.batch_size, - num_workers=self.num_workers, - worker_init_fn=init_fn, - shuffle=shuffle) - - def _test_dataloader(self, shuffle=False): - is_iterable_dataset = isinstance(self.datasets['train'], Txt2ImgIterableBaseDataset) - if is_iterable_dataset or self.use_worker_init_fn: - init_fn = worker_init_fn - else: - init_fn = None - - # do not shuffle dataloader for iterable dataset - shuffle = shuffle and (not is_iterable_dataset) - - return DataLoader(self.datasets["test"], batch_size=self.batch_size, - num_workers=self.num_workers, worker_init_fn=init_fn, shuffle=shuffle) - - def _predict_dataloader(self, shuffle=False): - if isinstance(self.datasets['predict'], Txt2ImgIterableBaseDataset) or self.use_worker_init_fn: - init_fn = worker_init_fn - else: - init_fn = None - return DataLoader(self.datasets["predict"], batch_size=self.batch_size, - num_workers=self.num_workers, worker_init_fn=init_fn) - - -class SetupCallback(Callback): - def __init__(self, resume, now, logdir, ckptdir, cfgdir, config, lightning_config): - super().__init__() - self.resume = resume - self.now = now - self.logdir = logdir - self.ckptdir = ckptdir - self.cfgdir = cfgdir - self.config = config - self.lightning_config = lightning_config - - def on_keyboard_interrupt(self, trainer, pl_module): - if trainer.global_rank == 0: - print("Summoning checkpoint.") - ckpt_path = os.path.join(self.ckptdir, "last.ckpt") - trainer.save_checkpoint(ckpt_path) - - def on_pretrain_routine_start(self, trainer, pl_module): - if trainer.global_rank == 0: - # Create logdirs and save configs - os.makedirs(self.logdir, exist_ok=True) - os.makedirs(self.ckptdir, exist_ok=True) - os.makedirs(self.cfgdir, exist_ok=True) - - if "callbacks" in self.lightning_config: - if 'metrics_over_trainsteps_checkpoint' in self.lightning_config['callbacks']: - os.makedirs(os.path.join(self.ckptdir, 'trainstep_checkpoints'), exist_ok=True) - print("Project config") - print(OmegaConf.to_yaml(self.config)) - 
OmegaConf.save(self.config, - os.path.join(self.cfgdir, "{}-project.yaml".format(self.now))) - - print("Lightning config") - print(OmegaConf.to_yaml(self.lightning_config)) - OmegaConf.save(OmegaConf.create({"lightning": self.lightning_config}), - os.path.join(self.cfgdir, "{}-lightning.yaml".format(self.now))) - - else: - # ModelCheckpoint callback created log directory --- remove it - if not self.resume and os.path.exists(self.logdir): - dst, name = os.path.split(self.logdir) - dst = os.path.join(dst, "child_runs", name) - os.makedirs(os.path.split(dst)[0], exist_ok=True) - try: - os.rename(self.logdir, dst) - except FileNotFoundError: - pass - - -class ImageLogger(Callback): - def __init__(self, batch_frequency, max_images, clamp=True, increase_log_steps=True, - rescale=True, disabled=False, log_on_batch_idx=False, log_first_step=False, - log_images_kwargs=None): - super().__init__() - self.rescale = rescale - self.batch_freq = batch_frequency - self.max_images = max_images - self.logger_log_images = { - pl.loggers.TestTubeLogger: self._testtube, - } - self.log_steps = [2 ** n for n in range(int(np.log2(self.batch_freq)) + 1)] - if not increase_log_steps: - self.log_steps = [self.batch_freq] - self.clamp = clamp - self.disabled = disabled - self.log_on_batch_idx = log_on_batch_idx - self.log_images_kwargs = log_images_kwargs if log_images_kwargs else {} - self.log_first_step = log_first_step - - @rank_zero_only - def _testtube(self, pl_module, images, batch_idx, split): - for k in images: - grid = torchvision.utils.make_grid(images[k]) - grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w - - tag = f"{split}/{k}" - pl_module.logger.experiment.add_image( - tag, grid, - global_step=pl_module.global_step) - - @rank_zero_only - def log_local(self, save_dir, split, images, - global_step, current_epoch, batch_idx): - root = os.path.join(save_dir, "images", split) - for k in images: - grid = torchvision.utils.make_grid(images[k], nrow=4) - if self.rescale: - grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w - grid = grid.transpose(0, 1).transpose(1, 2).squeeze(-1) - grid = grid.numpy() - grid = (grid * 255).astype(np.uint8) - filename = "{}_gs-{:06}_e-{:06}_b-{:06}.jpg".format( - k, - global_step, - current_epoch, - batch_idx) - path = os.path.join(root, filename) - os.makedirs(os.path.split(path)[0], exist_ok=True) - Image.fromarray(grid).save(path) - - def log_img(self, pl_module, batch, batch_idx, split="train"): - check_idx = batch_idx if self.log_on_batch_idx else pl_module.global_step - if (self.check_frequency(check_idx) and # batch_idx % self.batch_freq == 0 - hasattr(pl_module, "log_images") and - callable(pl_module.log_images) and - self.max_images > 0): - logger = type(pl_module.logger) - - is_train = pl_module.training - if is_train: - pl_module.eval() - - with torch.no_grad(): - images = pl_module.log_images(batch, split=split, **self.log_images_kwargs) - - for k in images: - N = min(images[k].shape[0], self.max_images) - images[k] = images[k][:N] - if isinstance(images[k], torch.Tensor): - images[k] = images[k].detach().cpu() - if self.clamp: - images[k] = torch.clamp(images[k], -1., 1.) 
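-            # images are clamped to [-1, 1] here; log_local rescales them back
-            # to [0, 1] (when self.rescale is set) before writing grids to disk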
- - self.log_local(pl_module.logger.save_dir, split, images, - pl_module.global_step, pl_module.current_epoch, batch_idx) - - logger_log_images = self.logger_log_images.get(logger, lambda *args, **kwargs: None) - logger_log_images(pl_module, images, pl_module.global_step, split) - - if is_train: - pl_module.train() - - def check_frequency(self, check_idx): - if ((check_idx % self.batch_freq) == 0 or (check_idx in self.log_steps)) and ( - check_idx > 0 or self.log_first_step): - try: - self.log_steps.pop(0) - except IndexError as e: - print(e) - pass - return True - return False - - def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx): - if not self.disabled and (pl_module.global_step > 0 or self.log_first_step): - self.log_img(pl_module, batch, batch_idx, split="train") - - def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx): - if not self.disabled and pl_module.global_step > 0: - self.log_img(pl_module, batch, batch_idx, split="val") - if hasattr(pl_module, 'calibrate_grad_norm'): - if (pl_module.calibrate_grad_norm and batch_idx % 25 == 0) and batch_idx > 0: - self.log_gradients(trainer, pl_module, batch_idx=batch_idx) - - -class CUDACallback(Callback): - # see https://github.com/SeanNaren/minGPT/blob/master/mingpt/callback.py - def on_train_epoch_start(self, trainer, pl_module): - # Reset the memory use counter - torch.cuda.reset_peak_memory_stats(trainer.root_gpu) - torch.cuda.synchronize(trainer.root_gpu) - self.start_time = time.time() - - def on_train_epoch_end(self, trainer, pl_module): - torch.cuda.synchronize(trainer.root_gpu) - max_memory = torch.cuda.max_memory_allocated(trainer.root_gpu) / 2 ** 20 - epoch_time = time.time() - self.start_time - - try: - max_memory = trainer.training_type_plugin.reduce(max_memory) - epoch_time = trainer.training_type_plugin.reduce(epoch_time) - - rank_zero_info(f"Average Epoch time: {epoch_time:.2f} seconds") - rank_zero_info(f"Average Peak memory {max_memory:.2f}MiB") - except AttributeError: - pass - - -if __name__ == "__main__": - # custom parser to specify config files, train, test and debug mode, - # postfix, resume. - # `--key value` arguments are interpreted as arguments to the trainer. - # `nested.key=value` arguments are interpreted as config parameters. - # configs are merged from left-to-right followed by command line parameters. 
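-    # example invocation (hypothetical paths, shown for illustration only):
-    #   python main.py --base configs/example.yaml -t --gpus 0, --data_root /path/to/images model.base_learning_rate=1.0e-6
-    # where trailing `nested.key=value` arguments override the merged config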
- - # model: - # base_learning_rate: float - # target: path to lightning module - # params: - # key: value - # data: - # target: main.DataModuleFromConfig - # params: - # batch_size: int - # wrap: bool - # train: - # target: path to train dataset - # params: - # key: value - # validation: - # target: path to validation dataset - # params: - # key: value - # test: - # target: path to test dataset - # params: - # key: value - # lightning: (optional, has sane defaults and can be specified on cmdline) - # trainer: - # additional arguments to trainer - # logger: - # logger to instantiate - # modelcheckpoint: - # modelcheckpoint to instantiate - # callbacks: - # callback1: - # target: importpath - # params: - # key: value - - now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S") - - # add cwd for convenience and to make classes in this file available when - # running as `python main.py` - # (in particular `main.DataModuleFromConfig`) - sys.path.append(os.getcwd()) - - parser = get_parser() - parser = Trainer.add_argparse_args(parser) - - opt, unknown = parser.parse_known_args() - if opt.name and opt.resume: - raise ValueError( - "-n/--name and -r/--resume cannot be specified both." - "If you want to resume training in a new log folder, " - "use -n/--name in combination with --resume_from_checkpoint" - ) - if opt.resume: - if not os.path.exists(opt.resume): - raise ValueError("Cannot find {}".format(opt.resume)) - if os.path.isfile(opt.resume): - paths = opt.resume.split("/") - # idx = len(paths)-paths[::-1].index("logs")+1 - # logdir = "/".join(paths[:idx]) - logdir = "/".join(paths[:-2]) - ckpt = opt.resume - else: - assert os.path.isdir(opt.resume), opt.resume - logdir = opt.resume.rstrip("/") - ckpt = os.path.join(logdir, "checkpoints", "last.ckpt") - - opt.resume_from_checkpoint = ckpt - base_configs = sorted(glob.glob(os.path.join(logdir, "configs/*.yaml"))) - opt.base = base_configs + opt.base - _tmp = logdir.split("/") - nowname = _tmp[-1] - else: - if opt.name: - name = "_" + opt.name - elif opt.base: - cfg_fname = os.path.split(opt.base[0])[-1] - cfg_name = os.path.splitext(cfg_fname)[0] - name = "_" + cfg_name - else: - name = "" - - if opt.datadir_in_name: - now = os.path.basename(os.path.normpath(opt.data_root)) + now - - nowname = now + name + opt.postfix - logdir = os.path.join(opt.logdir, nowname) - - ckptdir = os.path.join(logdir, "checkpoints") - cfgdir = os.path.join(logdir, "configs") - seed_everything(opt.seed) - - try: - # init and save configs - configs = [OmegaConf.load(cfg) for cfg in opt.base] - cli = OmegaConf.from_dotlist(unknown) - config = OmegaConf.merge(*configs, cli) - lightning_config = config.pop("lightning", OmegaConf.create()) - # merge trainer cli with config - trainer_config = lightning_config.get("trainer", OmegaConf.create()) - # default to ddp - trainer_config["accelerator"] = "ddp" - for k in nondefault_trainer_args(opt): - trainer_config[k] = getattr(opt, k) - if not "gpus" in trainer_config: - del trainer_config["accelerator"] - cpu = True - else: - gpuinfo = trainer_config["gpus"] - print(f"Running on GPUs {gpuinfo}") - cpu = False - trainer_opt = argparse.Namespace(**trainer_config) - lightning_config.trainer = trainer_config - - # model - - # config.model.params.personalization_config.params.init_word = opt.init_word - config.model.params.personalization_config.params.embedding_manager_ckpt = opt.embedding_manager_ckpt - config.model.params.personalization_config.params.placeholder_tokens = opt.placeholder_tokens - - if opt.init_word: - 
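# only the first placeholder token's initializer word is overridden here -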
config.model.params.personalization_config.params.initializer_words[0] = opt.init_word - - if opt.actual_resume: - model = load_model_from_config(config, opt.actual_resume) - else: - model = instantiate_from_config(config.model) - - # trainer and callbacks - trainer_kwargs = dict() - - # default logger configs - default_logger_cfgs = { - "wandb": { - "target": "pytorch_lightning.loggers.WandbLogger", - "params": { - "name": nowname, - "save_dir": logdir, - "offline": opt.debug, - "id": nowname, - } - }, - "testtube": { - "target": "pytorch_lightning.loggers.TestTubeLogger", - "params": { - "name": "testtube", - "save_dir": logdir, - } - }, - } - default_logger_cfg = default_logger_cfgs["testtube"] - if "logger" in lightning_config: - logger_cfg = lightning_config.logger - else: - logger_cfg = OmegaConf.create() - logger_cfg = OmegaConf.merge(default_logger_cfg, logger_cfg) - trainer_kwargs["logger"] = instantiate_from_config(logger_cfg) - - # modelcheckpoint - use TrainResult/EvalResult(checkpoint_on=metric) to - # specify which metric is used to determine best models - default_modelckpt_cfg = { - "target": "pytorch_lightning.callbacks.ModelCheckpoint", - "params": { - "dirpath": ckptdir, - "filename": "{epoch:06}", - "verbose": True, - "save_last": True, - } - } - if hasattr(model, "monitor"): - print(f"Monitoring {model.monitor} as checkpoint metric.") - default_modelckpt_cfg["params"]["monitor"] = model.monitor - default_modelckpt_cfg["params"]["save_top_k"] = 1 - - if "modelcheckpoint" in lightning_config: - modelckpt_cfg = lightning_config.modelcheckpoint - else: - modelckpt_cfg = OmegaConf.create() - modelckpt_cfg = OmegaConf.merge(default_modelckpt_cfg, modelckpt_cfg) - print(f"Merged modelckpt-cfg: \n{modelckpt_cfg}") - if version.parse(pl.__version__) < version.parse('1.4.0'): - trainer_kwargs["checkpoint_callback"] = instantiate_from_config(modelckpt_cfg) - - # add callback which sets up log directory - default_callbacks_cfg = { - "setup_callback": { - "target": "main.SetupCallback", - "params": { - "resume": opt.resume, - "now": now, - "logdir": logdir, - "ckptdir": ckptdir, - "cfgdir": cfgdir, - "config": config, - "lightning_config": lightning_config, - } - }, - "image_logger": { - "target": "main.ImageLogger", - "params": { - "batch_frequency": 750, - "max_images": 4, - "clamp": True - } - }, - "learning_rate_logger": { - "target": "main.LearningRateMonitor", - "params": { - "logging_interval": "step", - # "log_momentum": True - } - }, - "cuda_callback": { - "target": "main.CUDACallback" - }, - } - if version.parse(pl.__version__) >= version.parse('1.4.0'): - default_callbacks_cfg.update({'checkpoint_callback': modelckpt_cfg}) - - if "callbacks" in lightning_config: - callbacks_cfg = lightning_config.callbacks - else: - callbacks_cfg = OmegaConf.create() - - if 'metrics_over_trainsteps_checkpoint' in callbacks_cfg: - print( - 'Caution: Saving checkpoints every n train steps without deleting. 
This might require some free space.')
-        default_metrics_over_trainsteps_ckpt_dict = {
-            'metrics_over_trainsteps_checkpoint':
-                {"target": 'pytorch_lightning.callbacks.ModelCheckpoint',
-                 'params': {
-                     "dirpath": os.path.join(ckptdir, 'trainstep_checkpoints'),
-                     "filename": "{epoch:06}-{step:09}",
-                     "verbose": True,
-                     'save_top_k': -1,
-                     'every_n_train_steps': 10000,
-                     'save_weights_only': True
-                 }
-                 }
-        }
-        default_callbacks_cfg.update(default_metrics_over_trainsteps_ckpt_dict)
-
-    callbacks_cfg = OmegaConf.merge(default_callbacks_cfg, callbacks_cfg)
-    if 'ignore_keys_callback' in callbacks_cfg and hasattr(trainer_opt, 'resume_from_checkpoint'):
-        callbacks_cfg.ignore_keys_callback.params['ckpt_path'] = trainer_opt.resume_from_checkpoint
-    elif 'ignore_keys_callback' in callbacks_cfg:
-        del callbacks_cfg['ignore_keys_callback']
-
-    trainer_kwargs["callbacks"] = [instantiate_from_config(callbacks_cfg[k]) for k in callbacks_cfg]
-    trainer_kwargs["max_steps"] = opt.max_steps if opt.max_steps is not None else trainer_opt.max_steps
-
-    trainer = Trainer.from_argparse_args(trainer_opt, **trainer_kwargs)
-    trainer.logdir = logdir
-
-    # data
-    config.data.params.train.params.data_root = opt.data_root
-    config.data.params.validation.params.data_root = opt.data_root
-    data = instantiate_from_config(config.data)
-    # NOTE according to https://pytorch-lightning.readthedocs.io/en/latest/datamodules.html
-    # calling these ourselves should not be necessary but it is.
-    # lightning still takes care of proper multiprocessing though
-    data.prepare_data()
-    data.setup()
-    print("#### Data #####")
-    for k in data.datasets:
-        print(f"{k}, {data.datasets[k].__class__.__name__}, {len(data.datasets[k])}")
-
-    # configure learning rate
-    bs, base_lr = config.data.params.batch_size, config.model.base_learning_rate
-    if not cpu:
-        ngpu = len(lightning_config.trainer.gpus.strip(",").split(','))
-    else:
-        ngpu = 1
-    if 'accumulate_grad_batches' in lightning_config.trainer:
-        accumulate_grad_batches = lightning_config.trainer.accumulate_grad_batches
-    else:
-        accumulate_grad_batches = 1
-    print(f"accumulate_grad_batches = {accumulate_grad_batches}")
-    lightning_config.trainer.accumulate_grad_batches = accumulate_grad_batches
-    if opt.scale_lr:
-        model.learning_rate = accumulate_grad_batches * ngpu * bs * base_lr
-        print(
-            "Setting learning rate to {:.2e} = {} (accumulate_grad_batches) * {} (num_gpus) * {} (batchsize) * {:.2e} (base_lr)".format(
-                model.learning_rate, accumulate_grad_batches, ngpu, bs, base_lr))
-    else:
-        model.learning_rate = base_lr
-        print("++++ NOT USING LR SCALING ++++")
-        print(f"Setting learning rate to {model.learning_rate:.2e}")
-
-    # allow checkpointing via USR1
-    def melk(*args, **kwargs):
-        # run all checkpoint hooks
-        if trainer.global_rank == 0:
-            print("Summoning checkpoint.")
-            ckpt_path = os.path.join(ckptdir, "last.ckpt")
-            trainer.save_checkpoint(ckpt_path)
-
-    def divein(*args, **kwargs):
-        if trainer.global_rank == 0:
-            import pudb
-            pudb.set_trace()
-
-    import signal
-
-    signal.signal(signal.SIGUSR1, melk)
-    signal.signal(signal.SIGUSR2, divein)
-
-    # run
-    if opt.train:
-        try:
-            trainer.fit(model, data)
-        except Exception:
-            melk()
-            raise
-    if not opt.no_test and not trainer.interrupted:
-        trainer.test(model, data)
-except Exception:
-    if opt.debug and trainer.global_rank == 0:
-        try:
-            import pudb as debugger
-        except ImportError:
-            import pdb as debugger
-        debugger.post_mortem()
-    raise
-finally:
-    # move newly created debug project to debug_runs
-    if opt.debug and not opt.resume and trainer.global_rank == 0:
-        dst, name = os.path.split(logdir)
-        dst = os.path.join(dst, "debug_runs", name)
-        os.makedirs(os.path.split(dst)[0], exist_ok=True)
-        os.rename(logdir, dst)
-    if trainer.global_rank == 0:
-        print(trainer.profiler.summary())
diff --git a/spaces/Nee001/bing0/postcss.config.js b/spaces/Nee001/bing0/postcss.config.js
deleted file mode 100644
index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000
--- a/spaces/Nee001/bing0/postcss.config.js
+++ /dev/null
@@ -1,6 +0,0 @@
-module.exports = {
-  plugins: {
-    tailwindcss: {},
-    autoprefixer: {},
-  },
-}
diff --git a/spaces/NimaBoscarino/climategan/climategan/eval_metrics.py b/spaces/NimaBoscarino/climategan/climategan/eval_metrics.py
deleted file mode 100644
index b985413c2595339258ca473673989bd9b4ab2d27..0000000000000000000000000000000000000000
--- a/spaces/NimaBoscarino/climategan/climategan/eval_metrics.py
+++ /dev/null
@@ -1,635 +0,0 @@
-import cv2
-import numpy as np
-import torch
-from skimage import filters
-from sklearn.metrics.pairwise import euclidean_distances
-import matplotlib.pyplot as plt
-import seaborn as sns
-from copy import deepcopy
-
-# ------------------------------------------------------------------------------
-# ----- Evaluation metrics for a pair of binary mask images (pred, target) -----
-# ------------------------------------------------------------------------------
-
-
-def get_accuracy(arr1, arr2):
-    """pixel accuracy
-
-    Args:
-        arr1 (np.array)
-        arr2 (np.array)
-    """
-    return (arr1 == arr2).sum() / arr1.size
-
-
-def trimap(pred_im, gt_im, thickness=8):
-    """Compute pixel accuracy within a band of the given thickness around
-    the ground-truth contours, for binary images (0-1 values)
-    Args:
-        pred_im (Image): Prediction
-        gt_im (Image): Target
-        thickness (int, optional): Width in pixels of the band drawn around
-            the ground-truth contours. Defaults to 8.
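-
-    Returns:
-        float: Pixel accuracy restricted to the contour band.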
- """ - W, H = gt_im.size - contours, hierarchy = cv2.findContours( - np.array(gt_im), mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_SIMPLE - ) - mask_contour = np.zeros((H, W), dtype=np.int32) - cv2.drawContours( - mask_contour, contours, -1, (1), thickness=thickness, hierarchy=hierarchy - ) - gt_contour = np.array(gt_im)[np.where(mask_contour > 0)] - pred_contour = np.array(pred_im)[np.where(mask_contour > 0)] - return get_accuracy(pred_contour, gt_contour) - - -def iou(pred_im, gt_im): - """ - IoU for binary masks (0-1 values) - - Args: - pred_im ([type]): [description] - gt_im ([type]): [description] - """ - pred = np.array(pred_im) - gt = np.array(gt_im) - intersection = (pred * gt).sum() - union = (pred + gt).sum() - intersection - return intersection / union - - -def f1_score(pred_im, gt_im): - pred = np.array(pred_im) - gt = np.array(gt_im) - intersection = (pred * gt).sum() - return 2 * intersection / (pred + gt).sum() - - -def accuracy(pred_im, gt_im): - pred = np.array(pred_im) - gt = np.array(gt_im) - if len(gt_im.shape) == 4: - assert gt_im.shape[1] == 1 - gt_im = gt_im[:, 0, :, :] - if len(pred.shape) > len(gt_im.shape): - pred = np.argmax(pred, axis=1) - return float((pred == gt).sum()) / gt.size - - -def mIOU(pred, label, average="macro"): - """ - Adapted from: - https://stackoverflow.com/questions/62461379/multiclass-semantic-segmentation-model-evaluation - - Compute the mean IOU from pred and label tensors - pred is a tensor N x C x H x W with logits (softmax will be applied) - and label is a N x H x W tensor with int labels per pixel - - this does the same as sklearn's jaccard_score function if you choose average="macro" - Args: - pred (torch.tensor): predicted logits - label (torch.tensor): labels - average: "macro" or "weighted" - - Returns: - float: mIOU, can be nan - """ - num_classes = pred.shape[-3] - - pred = torch.argmax(pred, dim=1).squeeze(1) - present_iou_list = list() - pred = pred.view(-1) - label = label.view(-1) - # Note: Following for loop goes from 0 to (num_classes-1) - # and ignore_index is num_classes, thus ignore_index is - # not considered in computation of IoU. - interesting_classes = ( - [*range(num_classes)] if num_classes > 2 else [int(label.max().item())] - ) - weights = [] - - for sem_class in interesting_classes: - pred_inds = pred == sem_class - target_inds = label == sem_class - if (target_inds.long().sum().item() > 0) or (pred_inds.long().sum().item() > 0): - intersection_now = (pred_inds[target_inds]).long().sum().item() - union_now = ( - pred_inds.long().sum().item() - + target_inds.long().sum().item() - - intersection_now - ) - weights.append(pred_inds.long().sum().item()) - iou_now = float(intersection_now) / float(union_now) - present_iou_list.append(iou_now) - if not present_iou_list: - return float("nan") - elif average == "weighted": - weighted_avg = np.sum(np.multiply(weights, present_iou_list) / np.sum(weights)) - return weighted_avg - else: - return np.mean(present_iou_list) - - -def masker_classification_metrics( - pred, label, labels_dict={"cannot": 0, "must": 1, "may": 2} -): - """ - Classification metrics for the masker, and the corresponding maps. If the - predictions are soft, the errors are weighted accordingly. 
Metrics computed: - - tpr : float - True positive rate - - tpt : float - True positive total (divided by total population) - - tnr : float - True negative rate - - tnt : float - True negative total (divided by total population) - - fpr : float - False positive rate: rate of predicted mask on cannot flood - - fpt : float - False positive total (divided by total population) - - fnr : float - False negative rate: rate of missed mask on must flood - - fnt : float - False negative total (divided by total population) - - mnr : float - "May" negative rate (labeled as "may", predicted as no-mask) - - mpr : float - "May" positive rate (labeled as "may", predicted as mask) - - accuracy : float - Accuracy - - error : float - Error - - precision : float - Precision, considering only cannot and must flood labels - - f05 : float - F0.5 score, considering only cannot and must flood labels - - accuracy_must_may : float - Accuracy considering only the must and may areas - - Parameters - ---------- - pred : array-like - Mask prediction - - label : array-like - Mask ground truth labels - - labels_dict : dict - A dictionary with the identifier of each class (cannot, must, may) - - Returns - ------- - metrics_dict : dict - A dictionary with metric name and value pairs - - maps_dict : dict - A dictionary containing the metric maps - """ - tp_map = pred * np.asarray(label == labels_dict["must"], dtype=int) - tpr = np.sum(tp_map) / np.sum(label == labels_dict["must"]) - tpt = np.sum(tp_map) / np.prod(label.shape) - tn_map = (1.0 - pred) * np.asarray(label == labels_dict["cannot"], dtype=int) - tnr = np.sum(tn_map) / np.sum(label == labels_dict["cannot"]) - tnt = np.sum(tn_map) / np.prod(label.shape) - fp_map = pred * np.asarray(label == labels_dict["cannot"], dtype=int) - fpr = np.sum(fp_map) / np.sum(label == labels_dict["cannot"]) - fpt = np.sum(fp_map) / np.prod(label.shape) - fn_map = (1.0 - pred) * np.asarray(label == labels_dict["must"], dtype=int) - fnr = np.sum(fn_map) / np.sum(label == labels_dict["must"]) - fnt = np.sum(fn_map) / np.prod(label.shape) - may_neg_map = (1.0 - pred) * np.asarray(label == labels_dict["may"], dtype=int) - may_pos_map = pred * np.asarray(label == labels_dict["may"], dtype=int) - mnr = np.sum(may_neg_map) / np.sum(label == labels_dict["may"]) - mpr = np.sum(may_pos_map) / np.sum(label == labels_dict["may"]) - accuracy = tpt + tnt - error = fpt + fnt - - # Assertions - assert np.isclose(tpr, 1.0 - fnr), "TPR: {:.4f}, FNR: {:.4f}".format(tpr, fnr) - assert np.isclose(tnr, 1.0 - fpr), "TNR: {:.4f}, FPR: {:.4f}".format(tnr, fpr) - assert np.isclose(mpr, 1.0 - mnr), "MPR: {:.4f}, MNR: {:.4f}".format(mpr, mnr) - - precision = np.sum(tp_map) / (np.sum(tp_map) + np.sum(fp_map) + 1e-9) - beta = 0.5 - f05 = ((1 + beta ** 2) * precision * tpr) / (beta ** 2 * precision + tpr + 1e-9) - accuracy_must_may = (np.sum(tp_map) + np.sum(may_neg_map)) / ( - np.sum(label == labels_dict["must"]) + np.sum(label == labels_dict["may"]) - ) - - metrics_dict = { - "tpr": tpr, - "tpt": tpt, - "tnr": tnr, - "tnt": tnt, - "fpr": fpr, - "fpt": fpt, - "fnr": fnr, - "fnt": fnt, - "mpr": mpr, - "mnr": mnr, - "accuracy": accuracy, - "error": error, - "precision": precision, - "f05": f05, - "accuracy_must_may": accuracy_must_may, - } - maps_dict = { - "tp": tp_map, - "tn": tn_map, - "fp": fp_map, - "fn": fn_map, - "may_pos": may_pos_map, - "may_neg": may_neg_map, - } - - return metrics_dict, maps_dict - - -def pred_cannot(pred, label, label_cannot=0): - """ - Metric for the masker: Computes false positive rate and 
its map. If the - predictions are soft, the errors are weighted accordingly. - - Parameters - ---------- - pred : array-like - Mask prediction - - label : array-like - Mask ground truth labels - - label_cannot : int - The label index of "cannot flood" - - Returns - ------- - fp_map : array-like - The map of false positives: predicted mask on cannot flood - - fpr : float - False positive rate: rate of predicted mask on cannot flood - """ - fp_map = pred * np.asarray(label == label_cannot, dtype=int) - fpr = np.sum(fp_map) / np.sum(label == label_cannot) - return fp_map, fpr - - -def missed_must(pred, label, label_must=1): - """ - Metric for the masker: Computes false negative rate and its map. If the - predictions are soft, the errors are weighted accordingly. - - Parameters - ---------- - pred : array-like - Mask prediction - - label : array-like - Mask ground truth labels - - label_must : int - The label index of "must flood" - - Returns - ------- - fn_map : array-like - The map of false negatives: missed mask on must flood - - fnr : float - False negative rate: rate of missed mask on must flood - """ - fn_map = (1.0 - pred) * np.asarray(label == label_must, dtype=int) - fnr = np.sum(fn_map) / np.sum(label == label_must) - return fn_map, fnr - - -def may_flood(pred, label, label_may=2): - """ - Metric for the masker: Computes "may" negative and "may" positive rates and their - map. If the predictions are soft, the "errors" are weighted accordingly. - - Parameters - ---------- - pred : array-like - Mask prediction - - label : array-like - Mask ground truth labels - - label_may : int - The label index of "may flood" - - Returns - ------- - may_neg_map : array-like - The map of "may" negatives - - may_pos_map : array-like - The map of "may" positives - - mnr : float - "May" negative rate - - mpr : float - "May" positive rate - """ - may_neg_map = (1.0 - pred) * np.asarray(label == label_may, dtype=int) - may_pos_map = pred * np.asarray(label == label_may, dtype=int) - mnr = np.sum(may_neg_map) / np.sum(label == label_may) - mpr = np.sum(may_pos_map) / np.sum(label == label_may) - return may_neg_map, may_pos_map, mnr, mpr - - -def masker_metrics(pred, label, label_cannot=0, label_must=1): - """ - Computes a set of metrics for the masker - - Parameters - ---------- - pred : array-like - Mask prediction - - label : array-like - Mask ground truth labels - - label_must : int - The label index of "must flood" - - label_cannot : int - The label index of "cannot flood" - - Returns - ------- - tpr : float - True positive rate - - tnr : float - True negative rate - - precision : float - Precision, considering only cannot and must flood labels - - f1 : float - F1 score, considering only cannot and must flood labels - """ - tp_map = pred * np.asarray(label == label_must, dtype=int) - tpr = np.sum(tp_map) / np.sum(label == label_must) - tn_map = (1.0 - pred) * np.asarray(label == label_cannot, dtype=int) - tnr = np.sum(tn_map) / np.sum(label == label_cannot) - fp_map = pred * np.asarray(label == label_cannot, dtype=int) - fn_map = (1.0 - pred) * np.asarray(label == label_must, dtype=int) # noqa: F841 - precision = np.sum(tp_map) / (np.sum(tp_map) + np.sum(fp_map)) - f1 = 2 * (precision * tpr) / (precision + tpr) - return tpr, tnr, precision, f1 - - -def get_confusion_matrix(tpr, tnr, fpr, fnr, mpr, mnr): - """ - Constructs the confusion matrix of a masker prediction over a set of samples - - Parameters - ---------- - tpr : vector-like - True positive rate - - tnr : vector-like - True negative rate - - 
fpr : vector-like - False positive rate - - fnr : vector-like - False negative rate - - mpr : vector-like - "May" positive rate - - mnr : vector-like - "May" negative rate - - Returns - ------- - confusion_matrix : 3x3 array - Confusion matrix: [i, j] = [pred, true] - | tnr fnr mnr | - | fpr tpr mpr | - | 0. 0, 0, | - - confusion_matrix_std : 3x3 array - Standard deviation of the confusion matrix - """ - # Compute mean and standard deviations over all samples - tpr_m = np.mean(tpr) - tpr_s = np.std(tpr) - tnr_m = np.mean(tnr) - tnr_s = np.std(tnr) - fpr_m = np.mean(fpr) - fpr_s = np.std(fpr) - fnr_m = np.mean(fnr) - fnr_s = np.std(fnr) - mpr_m = np.mean(mpr) - mpr_s = np.std(mpr) - mnr_m = np.mean(mnr) - mnr_s = np.std(mnr) - - # Assertions - assert np.isclose(tpr_m, 1.0 - fnr_m), "TPR: {:.4f}, FNR: {:.4f}".format( - tpr_m, fnr_m - ) - assert np.isclose(tnr_m, 1.0 - fpr_m), "TNR: {:.4f}, FPR: {:.4f}".format( - tnr_m, fpr_m - ) - assert np.isclose(mpr_m, 1.0 - mnr_m), "MPR: {:.4f}, MNR: {:.4f}".format( - mpr_m, mnr_m - ) - - # Fill confusion matrix - confusion_matrix = np.zeros((3, 3)) - confusion_matrix[0, 0] = tnr_m - confusion_matrix[0, 1] = fnr_m - confusion_matrix[0, 2] = mnr_m - confusion_matrix[1, 0] = fpr_m - confusion_matrix[1, 1] = tpr_m - confusion_matrix[1, 2] = mpr_m - confusion_matrix[2, 2] = 0.0 - - # Standard deviation - confusion_matrix_std = np.zeros((3, 3)) - confusion_matrix_std[0, 0] = tnr_s - confusion_matrix_std[0, 1] = fnr_s - confusion_matrix_std[0, 2] = mnr_s - confusion_matrix_std[1, 0] = fpr_s - confusion_matrix_std[1, 1] = tpr_s - confusion_matrix_std[1, 2] = mpr_s - confusion_matrix_std[2, 2] = 0.0 - return confusion_matrix, confusion_matrix_std - - -def edges_coherence_std_min(pred, label, label_must=1, bin_th=0.5): - """ - The standard deviation of the minimum distance between the edge of the prediction - and the edge of the "must flood" label. 
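-    A low value indicates that the predicted boundary follows the "must flood"
-    boundary at a consistent distance; blank predictions return 1.0 by convention.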
- - Parameters - ---------- - pred : array-like - Mask prediction - - label : array-like - Mask ground truth labels - - label_must : int - The label index of "must flood" - - bin_th : float - The threshold for the binarization of the prediction - - Returns - ------- - metric : float - The value of the metric - - pred_edge : array-like - The edges images of the prediction, for visualization - - label_edge : array-like - The edges images of the "must flood" label, for visualization - """ - # Keep must flood label only - label = deepcopy(label) - label[label != label_must] = -1 - label[label == label_must] = 1 - label[label != label_must] = 0 - label = np.asarray(label, dtype=float) - - # Binarize prediction - pred = np.asarray(pred > bin_th, dtype=float) - - # Compute edges - pred = filters.sobel(pred) - label = filters.sobel(label) - - # Location of edges - pred_coord = np.argwhere(pred > 0) - label_coord = np.argwhere(label > 0) - - # Handle blank predictions - if pred_coord.shape[0] == 0: - return 1.0, pred, label - - # Normalized pairwise distances between pred and label - dist_mat = np.divide(euclidean_distances(pred_coord, label_coord), pred.shape[0]) - - # Standard deviation of the minimum distance from pred to label - edge_coherence = np.std(np.min(dist_mat, axis=1)) - - return edge_coherence, pred, label - - -def boxplot_metric( - output_filename, - df, - metric, - dict_metrics, - do_stripplot=False, - dict_models=None, - dpi=300, - **snskwargs -): - f = plt.figure(dpi=dpi) - - if do_stripplot: - ax = sns.boxplot(x="model", y=metric, data=df, fliersize=0.0, **snskwargs) - ax = sns.stripplot( - x="model", y=metric, data=df, size=2.0, color="gray", **snskwargs - ) - else: - ax = sns.boxplot(x="model", y=metric, data=df, **snskwargs) - - # Set axes labels - ax.set_xlabel("Models", rotation=0, fontsize="medium") - ax.set_ylabel(dict_metrics[metric], rotation=90, fontsize="medium") - - # Spines - sns.despine(left=True, bottom=True) - - # X-Tick labels - if dict_models: - xticklabels = [dict_models[t.get_text()] for t in ax.get_xticklabels()] - ax.set_xticklabels( - xticklabels, - rotation=20, - verticalalignment="top", - horizontalalignment="right", - fontsize="xx-small", - ) - - f.savefig( - output_filename, - dpi=f.dpi, - bbox_inches="tight", - facecolor="white", - transparent=False, - ) - f.clear() - plt.close(f) - - -def clustermap_metric( - output_filename, - df, - metric, - dict_metrics, - method="average", - cluster_metric="euclidean", - dict_models=None, - dpi=300, - **snskwargs -): - ax_grid = sns.clustermap(data=df, method=method, metric=cluster_metric, **snskwargs) - ax_heatmap = ax_grid.ax_heatmap - ax_cbar = ax_grid.ax_cbar - - # Set axes labels - ax_heatmap.set_xlabel("Models", rotation=0, fontsize="medium") - ax_heatmap.set_ylabel("Images", rotation=90, fontsize="medium") - - # Set title - ax_cbar.set_title(dict_metrics[metric], rotation=0, fontsize="x-large") - - # X-Tick labels - if dict_models: - xticklabels = [dict_models[t.get_text()] for t in ax_heatmap.get_xticklabels()] - ax_heatmap.set_xticklabels( - xticklabels, - rotation=20, - verticalalignment="top", - horizontalalignment="right", - fontsize="small", - ) - - ax_grid.fig.savefig( - output_filename, - dpi=dpi, - bbox_inches="tight", - facecolor="white", - transparent=False, - ) - ax_grid.fig.clear() - plt.close(ax_grid.fig) diff --git a/spaces/NoriZC/vits-models/app.py b/spaces/NoriZC/vits-models/app.py deleted file mode 100644 index 
31cdc30680f88fe0a9a7e96575218eeeca606ad1..0000000000000000000000000000000000000000 --- a/spaces/NoriZC/vits-models/app.py +++ /dev/null @@ -1,290 +0,0 @@ -# coding=utf-8 -import os -import re -import argparse -import utils -import commons -import json -import torch -import gradio as gr -from models import SynthesizerTrn -from text import text_to_sequence, _clean_text -from torch import no_grad, LongTensor -import gradio.processing_utils as gr_processing_utils -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces - -hps_ms = utils.get_hparams_from_file(r'config/config.json') - -audio_postprocess_ori = gr.Audio.postprocess - -def audio_postprocess(self, y): - data = audio_postprocess_ori(self, y) - if data is None: - return None - return gr_processing_utils.encode_url_or_file_to_base64(data["name"]) - - -gr.Audio.postprocess = audio_postprocess - -def get_text(text, hps, is_symbol): - text_norm, clean_text = text_to_sequence(text, hps.symbols, [] if is_symbol else hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm, clean_text - -def create_tts_fn(net_g_ms, speaker_id): - def tts_fn(text, language, noise_scale, noise_scale_w, length_scale, is_symbol): - text = text.replace('\n', ' ').replace('\r', '').replace(" ", "") - if limitation: - text_len = len(re.sub("\[([A-Z]{2})\]", "", text)) - max_len = 100 - if is_symbol: - max_len *= 3 - if text_len > max_len: - return "Error: Text is too long", None - if not is_symbol: - if language == 0: - text = f"[ZH]{text}[ZH]" - elif language == 1: - text = f"[JA]{text}[JA]" - else: - text = f"{text}" - stn_tst, clean_text = get_text(text, hps_ms, is_symbol) - with no_grad(): - x_tst = stn_tst.unsqueeze(0).to(device) - x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device) - sid = LongTensor([speaker_id]).to(device) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - length_scale=length_scale)[0][0, 0].data.cpu().float().numpy() - - return "Success", (22050, audio) - return tts_fn - -def create_to_symbol_fn(hps): - def to_symbol_fn(is_symbol_input, input_text, temp_lang): - if temp_lang == 0: - clean_text = f'[ZH]{input_text}[ZH]' - elif temp_lang == 1: - clean_text = f'[JA]{input_text}[JA]' - else: - clean_text = input_text - return _clean_text(clean_text, hps.data.text_cleaners) if is_symbol_input else '' - - return to_symbol_fn -def change_lang(language): - if language == 0: - return 0.6, 0.668, 1.2 - elif language == 1: - return 0.6, 0.668, 1 - else: - return 0.6, 0.668, 1 - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio-{audio_id}").querySelector("audio"); - let text = root.querySelector("#input-text-{audio_id}").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--api', action="store_true", 
default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--all", action="store_true", default=False, help="enable all models") - args = parser.parse_args() - device = torch.device(args.device) - categories = ["Honkai: Star Rail", "Blue Archive", "Lycoris Recoil"] - others = { - "Princess Connect! Re:Dive": "https://huggingface.co/spaces/sayashi/vits-models-pcr", - "Genshin Impact": "https://huggingface.co/spaces/sayashi/vits-models-genshin-bh3", - "Honkai Impact 3rd": "https://huggingface.co/spaces/sayashi/vits-models-genshin-bh3", - "Overwatch 2": "https://huggingface.co/spaces/sayashi/vits-models-ow2", - } - if args.all: - categories = ["Honkai: Star Rail", "Blue Archive", "Lycoris Recoil", "Princess Connect! Re:Dive", "Genshin Impact", "Honkai Impact 3rd", "Overwatch 2"] - others = {} - models = [] - with open("pretrained_models/info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for i, info in models_info.items(): - if info['title'].split("-")[0] not in categories or not info['enable']: - continue - sid = info['sid'] - name_en = info['name_en'] - name_zh = info['name_zh'] - title = info['title'] - cover = f"pretrained_models/{i}/{info['cover']}" - example = info['example'] - language = info['language'] - net_g_ms = SynthesizerTrn( - len(hps_ms.symbols), - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=hps_ms.data.n_speakers if info['type'] == "multi" else 0, - **hps_ms.model) - utils.load_checkpoint(f'pretrained_models/{i}/{i}.pth', net_g_ms, None) - _ = net_g_ms.eval().to(device) - models.append((sid, name_en, name_zh, title, cover, example, language, net_g_ms, create_tts_fn(net_g_ms, sid), create_to_symbol_fn(hps_ms))) - with gr.Blocks() as app: - gr.Markdown( - "#
vits-models\n" - "##
Please do not generate content that could infringe upon the rights or cause harm to individuals or organizations.\n" - "##
请不要生成会对个人以及组织造成侵害的内容\n\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/10QOk9NPgoKZUXkIhhuVaZ7SYra1MPMKH?usp=share_link)\n\n" - "[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-sm-dark.svg)](https://huggingface.co/spaces/sayashi/vits-models?duplicate=true)\n\n" - "[![Finetune your own model](https://badgen.net/badge/icon/github?icon=github&label=Finetune%20your%20own%20model)](https://github.com/SayaSS/vits-finetuning)" - ) - - with gr.Tabs(): - for category in categories: - with gr.TabItem(category): - with gr.TabItem("EN"): - for (sid, name_en, name_zh, title, cover, example, language, net_g_ms, tts_fn, to_symbol_fn) in models: - if title.split("-")[0] != category: - continue - with gr.TabItem(name_en): - with gr.Row(): - gr.Markdown( - '
' - f'{title}' - f'' if cover else "" - '
' - ) - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Text (100 words limitation)" if limitation else "Text", lines=5, value=example, elem_id=f"input-text-en-{name_en.replace(' ','')}") - lang = gr.Dropdown(label="Language", choices=["Chinese", "Japanese", "Mix(wrap the Chinese text with [ZH][ZH], wrap the Japanese text with [JA][JA])"], - type="index", value=language) - with gr.Accordion(label="Advanced Options", open=False): - symbol_input = gr.Checkbox(value=False, label="Symbol input") - symbol_list = gr.Dataset(label="Symbol list", components=[input_text], - samples=[[x] for x in hps_ms.symbols]) - symbol_list_json = gr.Json(value=hps_ms.symbols, visible=False) - btn = gr.Button(value="Generate", variant="primary") - with gr.Row(): - ns = gr.Slider(label="noise_scale", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="length_scale", minimum=0.1, maximum=2.0, step=0.1, value=1.2 if language=="Chinese" else 1, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="Output Message") - o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio-en-{name_en.replace(' ','')}") - download = gr.Button("Download Audio") - btn.click(tts_fn, inputs=[input_text, lang, ns, nsw, ls, symbol_input], outputs=[o1, o2], api_name=f"tts-{name_en}") - download.click(None, [], [], _js=download_audio_js.format(audio_id=f"en-{name_en.replace(' ', '')}")) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - symbol_input.change( - to_symbol_fn, - [symbol_input, input_text, lang], - [input_text] - ) - symbol_list.click(None, [symbol_list, symbol_list_json], [input_text], - _js=f""" - (i,symbols) => {{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let text_input = root.querySelector("#input-text-en-{name_en.replace(' ', '')}").querySelector("textarea"); - let startPos = text_input.selectionStart; - let endPos = text_input.selectionEnd; - let oldTxt = text_input.value; - let result = oldTxt.substring(0, startPos) + symbols[i] + oldTxt.substring(endPos); - text_input.value = result; - let x = window.scrollX, y = window.scrollY; - text_input.focus(); - text_input.selectionStart = startPos + symbols[i].length; - text_input.selectionEnd = startPos + symbols[i].length; - text_input.blur(); - window.scrollTo(x, y); - return text_input.value; - }}""") - with gr.TabItem("中文"): - for (sid, name_en, name_zh, title, cover, example, language, net_g_ms, tts_fn, to_symbol_fn) in models: - if title.split("-")[0] != category: - continue - with gr.TabItem(name_zh): - with gr.Row(): - gr.Markdown( - '
' - f'{title}' - f'' if cover else "" - '
' - ) - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="文本 (100字上限)" if limitation else "文本", lines=5, value=example, elem_id=f"input-text-zh-{name_zh}") - lang = gr.Dropdown(label="语言", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"], - type="index", value="中文"if language == "Chinese" else "日语") - with gr.Accordion(label="高级选项", open=False): - symbol_input = gr.Checkbox(value=False, label="符号输入") - symbol_list = gr.Dataset(label="符号列表", components=[input_text], - samples=[[x] for x in hps_ms.symbols]) - symbol_list_json = gr.Json(value=hps_ms.symbols, visible=False) - btn = gr.Button(value="生成", variant="primary") - with gr.Row(): - ns = gr.Slider(label="控制感情变化程度", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="控制音素发音长度", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="控制整体语速", minimum=0.1, maximum=2.0, step=0.1, value=1.2 if language=="Chinese" else 1, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="输出信息") - o2 = gr.Audio(label="输出音频", elem_id=f"tts-audio-zh-{name_zh}") - download = gr.Button("下载音频") - btn.click(tts_fn, inputs=[input_text, lang, ns, nsw, ls, symbol_input], outputs=[o1, o2]) - download.click(None, [], [], _js=download_audio_js.format(audio_id=f"zh-{name_zh}")) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - symbol_input.change( - to_symbol_fn, - [symbol_input, input_text, lang], - [input_text] - ) - symbol_list.click(None, [symbol_list, symbol_list_json], [input_text], - _js=f""" - (i,symbols) => {{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let text_input = root.querySelector("#input-text-zh-{name_zh}").querySelector("textarea"); - let startPos = text_input.selectionStart; - let endPos = text_input.selectionEnd; - let oldTxt = text_input.value; - let result = oldTxt.substring(0, startPos) + symbols[i] + oldTxt.substring(endPos); - text_input.value = result; - let x = window.scrollX, y = window.scrollY; - text_input.focus(); - text_input.selectionStart = startPos + symbols[i].length; - text_input.selectionEnd = startPos + symbols[i].length; - text_input.blur(); - window.scrollTo(x, y); - return text_input.value; - }}""") - for category, link in others.items(): - with gr.TabItem(category): - gr.Markdown( - f''' -
-

Click to Go

- - -
- ''' - ) - app.queue(concurrency_count=1, api_open=args.api).launch(share=args.share) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_roberta.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_roberta.py deleted file mode 100644 index b0b9cfd31e8cb1e03ae74403886d2fb5266e0443..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_roberta.py +++ /dev/null @@ -1,314 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import functools -import unittest -from typing import Any, Dict, Sequence - -import fairseq -import fairseq.options -import fairseq.tasks -import torch -from tests.utils import dummy_dictionary - -VOCAB_SIZE = 100 - - -@fairseq.tasks.register_task("fake_task") -class FakeTask(fairseq.tasks.LegacyFairseqTask): - def __init__(self, args): - super().__init__(args) - self.dictionary = dummy_dictionary(VOCAB_SIZE - 4) - assert len(self.dictionary) == VOCAB_SIZE - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary - - -@functools.lru_cache() -def get_toy_model( - device: str, - architecture: str = "roberta_enc_dec", - **extra_args: Any, -): - assert device in ("gpu", "cpu") - kwargs = { - "arch": architecture, - # Use characteristics dimensions - "encoder_layers": 3, - "encoder_embed_dim": 12, - "encoder_ffn_embed_dim": 14, - "encoder_attention_heads": 4, - "decoder_layers": 3, - "decoder_embed_dim": 12, - "decoder_ffn_embed_dim": 14, - "decoder_attention_heads": 4, - # Disable dropout so we have comparable tests. - "dropout": 0, - "attention_dropout": 0, - "activation_dropout": 0, - "encoder_layerdrop": 0, - # required args - "tokens_per_sample": 256, - "data": "/tmp/test_roberta", - } - kwargs.update(extra_args) - fake_task = FakeTask(kwargs) - args = fairseq.options.get_args( - task="online_backtranslation", - mono_langs="en,ro", - valid_lang_pairs="en-ro", - **kwargs, - ) - torch.manual_seed(0) - model = fake_task.build_model(args) - if device == "gpu": - model.cuda() - return fake_task, model - - -def mk_sample( - lang: str, device: str, tok: Sequence[int] = None, batch_size: int = 2 -) -> Dict[str, Any]: - assert device in ("gpu", "cpu") - if not tok: - if lang == "en": - tok = [10, 11, 12, 13, 14, 15, 2] - else: - tok = [20, 21, 22, 23, 24, 25, 26, 27, 2] - - batch = torch.stack([torch.tensor(tok, dtype=torch.long)] * batch_size) - if device == "gpu": - batch = batch.cuda() - sample = { - "net_input": { - "src_tokens": batch, - "prev_output_tokens": batch, - "src_lengths": torch.tensor( - [len(tok)] * batch_size, dtype=torch.long, device=batch.device - ), - }, - "target": batch[:, 1:], - } - return sample - - -def cpu_gpu(fn): - def helper(self): - fn(self, "cpu") - if torch.cuda.is_available(): - fn(self, "gpu") - - return helper - - -def architectures(fn): - def helper(self): - for arch in ["roberta_enc_dec", "transformer"]: - fn(self, arch) - - return helper - - -class RobertaTest(unittest.TestCase): - def assertTensorEqual(self, t1, t2, delta: float = 1e-6): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - if delta == 0.0: - self.assertEqual(t1.ne(t2).long().sum(), 0) - else: - self.assertEqual(((t2 - t1).abs() > delta).long().sum(), 0) - - def assertSharing(self, model, link_groups: Sequence[Sequence[str]]): - ids = {} - for group in link_groups: - group_ids = 
{name: id(params(model, name)) for name in group} - shared_id = group_ids[group[0]] - self.assertEqual(group_ids, {name: shared_id for name in group}) - self.assertNotIn(shared_id, ids) - ids[shared_id] = group - - def test_roberta_shared_params(self): - _, roberta = get_toy_model("cpu", architecture="roberta") - self.assertSharing( - roberta, - [ - [ - "encoder.sentence_encoder.embed_tokens.weight", - "encoder.lm_head.weight", - ] - ], - ) - - _, roberta = get_toy_model( - "cpu", architecture="roberta", untie_weights_roberta=True - ) - self.assertSharing( - roberta, - [ - ["encoder.sentence_encoder.embed_tokens.weight"], - ["encoder.lm_head.weight"], - ], - ) - - def test_roberta_enc_dec_shared_params(self): - # 3 distinct embeddings - _, enc_dec = get_toy_model("cpu", architecture="roberta_enc_dec") - self.assertSharing( - enc_dec, - [ - ["encoder.embed_tokens.weight"], - ["decoder.embed_tokens.weight"], - ["decoder.output_projection.weight"], - ], - ) - - # 2 distinct embeddings, one for encoder, one for decoder - _, enc_dec = get_toy_model( - "cpu", architecture="roberta_enc_dec", share_decoder_input_output_embed=True - ) - self.assertSharing( - enc_dec, - [ - ["encoder.embed_tokens.weight"], - [ - "decoder.embed_tokens.weight", - "decoder.output_projection.weight", - ], - ], - ) - - # shared embeddings - _, enc_dec = get_toy_model( - "cpu", architecture="roberta_enc_dec", share_all_embeddings=True - ) - self.assertSharing( - enc_dec, - [ - [ - "encoder.embed_tokens.weight", - "decoder.embed_tokens.weight", - "decoder.output_projection.weight", - ] - ], - ) - - def test_roberta_max_positions_is_correctly_set(self): - device = "cpu" - task, model = get_toy_model(device) - max_pos = model.max_decoder_positions() - self.assertEqual(max_pos, 256) - self.assertEqual(max_pos, model.decoder.max_positions()) - self.assertEqual(max_pos, model.encoder.max_positions()) - self.assertEqual(max_pos, model.encoder.embed_positions.max_positions) - - sentence = [31 for _ in range(max_pos)] - sample = mk_sample("en", device, sentence, batch_size=1) - self.assertEqual(list(sample["net_input"]["src_lengths"]), [max_pos]) - self.assertEqual(len(sample["net_input"]["src_tokens"][0]), max_pos) - x, _ = model.forward(**sample["net_input"]) - self.assertEqual(x.shape, (1, max_pos, VOCAB_SIZE)) - - @cpu_gpu - def test_roberta_forward_backward(self, device: str): - _, model = get_toy_model(device) - sample = mk_sample("en", device) - en_tokens = sample["net_input"]["src_tokens"] - (bs, l) = en_tokens.shape - # Forward - logits, _ = model(**sample["net_input"]) - self.assertEqual(logits.shape, (bs, l, VOCAB_SIZE)) - - # Backward - loss = logits.sum() - loss.backward() - - @cpu_gpu - def test_roberta_forward_backward_bs1(self, device: str): - _, model = get_toy_model(device) - sample = mk_sample("en", device, batch_size=1) - o, _ = model.forward(**sample["net_input"]) - loss = o.sum() - sample2 = mk_sample("ro", device, batch_size=1) - o, _ = model.forward(**sample2["net_input"]) - loss += o.sum() - loss.backward() - - @cpu_gpu - def test_roberta_batching(self, device: str): - """ - Checks that the batch of size 2 give twice the same results than the batch of size 1. 
- """ - _, model = get_toy_model(device) - sample = mk_sample("en", device, batch_size=1) - slen = sample["net_input"]["src_lengths"][0] - sample2 = mk_sample("en", device, batch_size=2) - with torch.no_grad(): - z = model.encoder.forward( - sample["net_input"]["src_tokens"], sample["net_input"]["src_lengths"] - ) - z = z["encoder_out"][-1] - logits, _ = model.forward(**sample["net_input"]) - - z2 = model.encoder.forward( - sample2["net_input"]["src_tokens"], sample["net_input"]["src_lengths"] - ) - z2 = z2["encoder_out"][-1] - logits2, _ = model.forward(**sample2["net_input"]) - - self.assertEqual(z.shape, (slen, 1, 12)) - self.assertEqual(z2.shape, (slen, 2, 12)) - self.assertTensorEqual(logits2[0], logits2[1]) - self.assertTensorEqual(logits[0], logits2[0]) - - @cpu_gpu - def test_roberta_incremental_decoder(self, device: str): - """ - Checks that incremental decoding yields the same result than non incremental one. - """ - task, model = get_toy_model(device) - - en_sample = mk_sample("en", device) - en_tokens = en_sample["net_input"]["src_tokens"] - ro_sample = mk_sample("ro", device) - ro_tokens = ro_sample["net_input"]["src_tokens"] - - en_enc = model.encoder.forward( - en_tokens, src_lengths=en_sample["net_input"]["src_lengths"] - ) - (bs, tgt_len) = ro_tokens.shape - - # Decode without incremental state - ro_dec, _ = model.decoder.forward(ro_tokens, encoder_out=en_enc) - self.assertEqual(ro_dec.shape, (bs, tgt_len, VOCAB_SIZE)) - self.assertTensorEqual(ro_dec[0], ro_dec[1]) - - # Decode with incremental state - inc_state = {} - ro_dec_inc = [] - for l in range(tgt_len): - ro, _ = model.decoder.forward( - ro_tokens[:, : l + 1], encoder_out=en_enc, incremental_state=inc_state - ) - self.assertEqual(ro.shape, (bs, 1, VOCAB_SIZE)) - ro_dec_inc.append(ro) - - for l in range(tgt_len): - # Intra-batch - self.assertTensorEqual(ro_dec_inc[l][0], ro_dec_inc[l][1]) - # Incremental vs non-incremental - self.assertTensorEqual(ro_dec_inc[l][:, 0], ro_dec[:, l]) - - -def params(model, name): - if "." not in name: - return getattr(model, name) - - prefix, name = name.split(".", 1) - return params(getattr(model, prefix), name) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature.py deleted file mode 100644 index 5c7b67f8b1967ca515c5f7606253b46f903ea37e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -import os -import sys - -import fairseq -import soundfile as sf -import torch -import torch.nn.functional as F - -from feature_utils import get_path_iterator, dump_feature - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("dump_hubert_feature") - - -class HubertFeatureReader(object): - def __init__(self, ckpt_path, layer, max_chunk=1600000): - ( - model, - cfg, - task, - ) = fairseq.checkpoint_utils.load_model_ensemble_and_task([ckpt_path]) - self.model = model[0].eval().cuda() - self.task = task - self.layer = layer - self.max_chunk = max_chunk - logger.info(f"TASK CONFIG:\n{self.task.cfg}") - logger.info(f" max_chunk = {self.max_chunk}") - - def read_audio(self, path, ref_len=None): - wav, sr = sf.read(path) - assert sr == self.task.cfg.sample_rate, sr - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - if ref_len is not None and abs(ref_len - len(wav)) > 160: - logging.warning(f"ref {ref_len} != read {len(wav)} ({path})") - return wav - - def get_feats(self, path, ref_len=None): - x = self.read_audio(path, ref_len) - with torch.no_grad(): - x = torch.from_numpy(x).float().cuda() - if self.task.cfg.normalize: - x = F.layer_norm(x, x.shape) - x = x.view(1, -1) - - feat = [] - for start in range(0, x.size(1), self.max_chunk): - x_chunk = x[:, start: start + self.max_chunk] - feat_chunk, _ = self.model.extract_features( - source=x_chunk, - padding_mask=None, - mask=False, - output_layer=self.layer, - ) - feat.append(feat_chunk) - return torch.cat(feat, 1).squeeze(0) - - -def main(tsv_dir, split, ckpt_path, layer, nshard, rank, feat_dir, max_chunk): - reader = HubertFeatureReader(ckpt_path, layer, max_chunk) - generator, num = get_path_iterator(f"{tsv_dir}/{split}.tsv", nshard, rank) - dump_feature(reader, generator, num, split, nshard, rank, feat_dir) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("tsv_dir") - parser.add_argument("split") - parser.add_argument("ckpt_path") - parser.add_argument("layer", type=int) - parser.add_argument("nshard", type=int) - parser.add_argument("rank", type=int) - parser.add_argument("feat_dir") - parser.add_argument("--max_chunk", type=int, default=1600000) - args = parser.parse_args() - logger.info(args) - - main(**vars(args)) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh deleted file mode 100644 index c2edcefede2da3b6a991b9c8fbc78c96d46d27cb..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh +++ /dev/null @@ -1,35 +0,0 @@ -#!/usr/bin/env bash - -langdir="" -lmdir="" - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -arpa_lm=$1 -data=$2 - -if [ -z $langdir ]; then - langdir=$data/lang -fi -if [ -z $lmdir ]; then - lmdir=$data/lang_test -fi - -if [ ! -d $langdir ]; then - echo "$langdir not found. 
run local/prepare_lang.sh first" && exit 1 -fi - -mkdir -p $lmdir -cp -r $langdir/* $lmdir - -if [[ "$arpa_lm" == *.gz ]]; then - gunzip -c $arpa_lm | arpa2fst --disambig-symbol=#0 --read-symbol-table=$lmdir/words.txt - $lmdir/G.fst -else - arpa2fst --disambig-symbol=#0 --read-symbol-table=$lmdir/words.txt $arpa_lm $lmdir/G.fst -fi -fstisstochastic $lmdir/G.fst -utils/validate_lang.pl $lmdir || exit 1 - -echo "done preparing lm ($lmdir)" diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/wav2vec/wav2vec.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/wav2vec/wav2vec.py deleted file mode 100644 index af6604da10f504baabff50bf14a6eb2214bffef3..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/wav2vec/wav2vec.py +++ /dev/null @@ -1,630 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field -import logging -import math -from typing import Optional, Tuple -from omegaconf import II -import sys - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.modules import ( - Fp32GroupNorm, - Fp32LayerNorm, - GumbelVectorQuantizer, - KmeansVectorQuantizer, - TransposeLast, -) -from fairseq.tasks import FairseqTask -from fairseq.utils import buffered_arange - - -logger = logging.getLogger(__name__) - - -AGGREGATOR_CHOICES = ChoiceEnum(["cnn", "gru"]) -PROJECT_FEATURES_CHOICES = ChoiceEnum(["none", "same", "new"]) -ACTIVATION_CHOICES = ChoiceEnum(["relu", "gelu"]) -VQ_TYPE_CHOICES = ChoiceEnum(["none", "gumbel", "kmeans"]) - - -@dataclass -class Wav2VecConfig(FairseqDataclass): - prediction_steps: int = field( - default=12, metadata={"help": "number of steps ahead to predict"} - ) - sample_distance: Optional[int] = field( - default=None, - metadata={ - "help": "sample distance from target. 
does not work properly with cross-sampling"
-        },
-    )
-    cross_sample_negatives: int = field(
-        default=0, metadata={"help": "num of cross sampled negatives"}
-    )
-    num_negatives: int = field(
-        default=10, metadata={"help": "num of sampled negatives"}
-    )
-    conv_feature_layers: str = field(
-        default="[(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1), (512, 1, 1)]",
-        metadata={
-            "help": "convolutional feature extraction layers [(dim, kernel_size, stride), ...]"
-        },
-    )
-    conv_aggregator_layers: str = field(
-        default="[(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)]",
-        metadata={
-            "help": "convolutional aggregator layers [(dim, kernel_size, stride), ...]"
-        },
-    )
-    dropout: float = field(
-        default=0.0, metadata={"help": "dropout to apply within the model"}
-    )
-    dropout_features: float = field(
-        default=0.0, metadata={"help": "dropout to apply to the features"}
-    )
-    dropout_agg: float = field(
-        default=0.0, metadata={"help": "dropout to apply after aggregation step"}
-    )
-    aggregator: AGGREGATOR_CHOICES = field(
-        default="cnn", metadata={"help": "type of aggregator to use"}
-    )
-    gru_dim: int = field(default=512, metadata={"help": "GRU dimensionality"})
-    no_conv_bias: bool = field(
-        default=False, metadata={"help": "if set, does not learn bias for conv layers"}
-    )
-    agg_zero_pad: bool = field(
-        default=False,
-        metadata={"help": "if set, zero pads in aggregator instead of repl pad"},
-    )
-    skip_connections_feat: bool = field(
-        default=False,
-        metadata={"help": "if set, adds skip connections to the feature extractor"},
-    )
-    skip_connections_agg: bool = field(
-        default=True,
-        metadata={"help": "if set, adds skip connections to the aggregator"},
-    )
-    residual_scale: float = field(
-        default=0.5, metadata={"help": "scales residual by sqrt(value)"}
-    )
-    log_compression: bool = field(
-        default=True,
-        metadata={"help": "if set, adds a log compression to feature extractor"},
-    )
-    balanced_classes: bool = field(
-        default=False,
-        metadata={"help": "if set, loss is scaled to balance for number of negatives"},
-    )
-    project_features: PROJECT_FEATURES_CHOICES = field(
-        default="none",
-        metadata={
-            "help": "if not none, features are projected using the (same or new) aggregator"
-        },
-    )
-    non_affine_group_norm: bool = field(
-        default=False, metadata={"help": "if set, group norm is not affine"}
-    )
-    offset: str = field(
-        default="auto",
-        metadata={
-            "help": "if set to 'auto', it is computed automatically from the receptive field, else set to int value"
-        },
-    )
-    activation: ACTIVATION_CHOICES = field(
-        default="relu",
-        metadata={
-            "help": "which activation function to use"
-        },
-    )
-    vq_type: VQ_TYPE_CHOICES = field(
-        default="none", metadata={"help": "which type of quantizer to use"}
-    )
-    vq_vars: int = field(
-        default=320,
-        metadata={"help": "project to this many vector quantized variables per group"},
-    )
-    vq_groups: int = field(
-        default=2, metadata={"help": "number of groups of latent variables"}
-    )
-    vq_dim: int = field(
-        default=0,
-        metadata={
-            "help": "uses this dimensionality for quantized vectors.
0 to use model dim // groups" - }, - ) - vq_depth: int = field( - default=1, metadata={"help": "number of layers for vq weight projection"} - ) - combine_groups: bool = field( - default=False, metadata={"help": "if set, variables are shared among groups"} - ) - vq_temp: Tuple[float, float, float] = field( - default=(2.0, 0.5, 0.999995), - metadata={ - "help": "temperature for latent variable sampling with gumbel softmax. should be a tuple of 3 values (start, end, decay)" - }, - ) - vq_gamma: float = field( - default=0.25, - metadata={"help": "gamma parameter for kmeans style vector quantization"}, - ) - infonce: bool = II("criterion.infonce") - - -@register_model("wav2vec", dataclass=Wav2VecConfig) -class Wav2VecModel(BaseFairseqModel): - @classmethod - def build_model(cls, cfg: Wav2VecConfig, task: FairseqTask): - """Build a new model instance.""" - - model = Wav2VecModel(cfg) - logger.info(model) - return model - - def __init__(self, cfg: Wav2VecConfig): - super().__init__() - - self.prediction_steps = cfg.prediction_steps - offset = cfg.offset - - if cfg.activation == "relu": - activation = nn.ReLU() - elif cfg.activation == "gelu": - activation = nn.GELU() - else: - raise Exception("unknown activation " + cfg.activation) - - feature_enc_layers = eval(cfg.conv_feature_layers) - self.feature_extractor = ConvFeatureExtractionModel( - conv_layers=feature_enc_layers, - dropout=0.0, - log_compression=cfg.log_compression, - skip_connections=cfg.skip_connections_feat, - residual_scale=cfg.residual_scale, - non_affine_group_norm=cfg.non_affine_group_norm, - activation=activation, - ) - embed = feature_enc_layers[-1][0] - - self.vector_quantizer = None - if cfg.vq_type == "gumbel": - self.vector_quantizer = GumbelVectorQuantizer( - dim=embed, - num_vars=cfg.vq_vars, - temp=cfg.vq_temp, - groups=cfg.vq_groups, - combine_groups=cfg.combine_groups, - vq_dim=cfg.vq_dim if cfg.vq_dim > 0 else embed, - time_first=False, - activation=activation, - weight_proj_depth=cfg.vq_depth, - weight_proj_factor=2, - ) - elif cfg.vq_type == "kmeans": - self.vector_quantizer = KmeansVectorQuantizer( - dim=embed, - num_vars=cfg.vq_vars, - groups=cfg.vq_groups, - combine_groups=cfg.combine_groups, - vq_dim=cfg.vq_dim if cfg.vq_dim > 0 else embed, - time_first=False, - gamma=cfg.vq_gamma, - ) - else: - assert ( - cfg.vq_type == "none" or cfg.vq_type is None - ), "Unknown quantizer type" - - if cfg.offset == "auto": - jin = 0 - rin = 0 - for _, k, stride in feature_enc_layers: - if rin == 0: - rin = k - rin = rin + (k - 1) * jin - if jin == 0: - jin = stride - else: - jin *= stride - offset = math.ceil(rin / jin) - - offset = int(offset) - - def make_aggregator(): - if cfg.aggregator == "cnn": - agg_layers = eval(cfg.conv_aggregator_layers) - agg_dim = agg_layers[-1][0] - feature_aggregator = ConvAggegator( - conv_layers=agg_layers, - embed=embed, - dropout=cfg.dropout, - skip_connections=cfg.skip_connections_agg, - residual_scale=cfg.residual_scale, - non_affine_group_norm=cfg.non_affine_group_norm, - conv_bias=not cfg.no_conv_bias, - zero_pad=cfg.agg_zero_pad, - activation=activation, - ) - elif cfg.aggregator == "gru": - agg_dim = cfg.gru_dim - feature_aggregator = nn.Sequential( - TransposeLast(), - nn.GRU( - input_size=embed, - hidden_size=agg_dim, - num_layers=1, - dropout=cfg.dropout, - ), - TransposeLast(deconstruct_idx=0), - ) - else: - raise Exception("unknown aggregator type " + cfg.aggregator) - - return feature_aggregator, agg_dim - - self.feature_aggregator, agg_dim = make_aggregator() - - 
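        # A worked pass of the receptive-field loop above with the default
        # conv_feature_layers, before offset is handed to the predictions model below:
        #   (512, 10, 5): rin = 10,               jin = 5
        #   (512,  8, 4): rin = 10 + 7*5   = 45,  jin = 20
        #   (512,  4, 2): rin = 45 + 3*20  = 105, jin = 40
        #   (512,  4, 2): rin = 105 + 3*40 = 225, jin = 80
        #   (512,  4, 2): rin = 225 + 3*80 = 465, jin = 160
        #   the three (512, 1, 1) layers change neither rin nor jin,
        #   so offset = ceil(465 / 160) = 3 aggregator timesteps.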
self.wav2vec_predictions = Wav2VecPredictionsModel( - in_dim=agg_dim, - out_dim=embed, - prediction_steps=cfg.prediction_steps, - n_negatives=cfg.num_negatives, - cross_sample_negatives=cfg.cross_sample_negatives, - sample_distance=cfg.sample_distance, - dropout=cfg.dropout, - offset=offset, - balanced_classes=cfg.balanced_classes, - infonce=cfg.infonce, - ) - - self.dropout_feats = nn.Dropout(p=cfg.dropout_features) - self.dropout_agg = nn.Dropout(p=cfg.dropout_agg) - - if cfg.project_features == "none": - self.project_features = None - elif cfg.project_features == "same": - self.project_features = self.feature_aggregator - elif cfg.project_features == "new": - self.project_features, _ = make_aggregator() - - def forward(self, source): - result = {} - - features = self.feature_extractor(source) - if self.vector_quantizer: - q_res = self.vector_quantizer(features) - features = q_res["x"] - for k in q_res.keys(): - if k != "x": - result[k] = q_res[k] - - x = self.dropout_feats(features) - x = self.feature_aggregator(x) - x = self.dropout_agg(x) - - if self.project_features is not None: - features = self.project_features(features) - x, targets = self.wav2vec_predictions(x, features) - result["cpc_logits"] = x - result["cpc_targets"] = targets - - return result - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - - def max_positions(self): - """Maximum length supported by the model.""" - return sys.maxsize - - def get_logits(self, net_output): - logits = net_output["cpc_logits"] - return logits - - def get_targets(self, sample, net_output): - t = net_output["cpc_targets"] - if isinstance(t, tuple): - t = t[0] - return t.contiguous() - - def get_target_weights(self, targets, net_output): - targets = net_output["cpc_targets"] - if isinstance(targets, tuple) and targets[-1] is not None: - return targets[-1] - return None - - def get_extra_losses(self, net_output): - loss = None - if "prob_perplexity" in net_output: - loss = net_output["num_vars"] - net_output["prob_perplexity"] - elif "kmeans_loss" in net_output: - loss = net_output["kmeans_loss"] - - return loss - - -def norm_block(is_layer_norm, dim, affine=True): - if is_layer_norm: - mod = nn.Sequential( - TransposeLast(), - Fp32LayerNorm(dim, elementwise_affine=affine), - TransposeLast(), - ) - else: - mod = Fp32GroupNorm(1, dim, affine=affine) - - return mod - - -class ConvFeatureExtractionModel(nn.Module): - def __init__( - self, - conv_layers, - dropout, - log_compression, - skip_connections, - residual_scale, - non_affine_group_norm, - activation, - ): - super().__init__() - - def block(n_in, n_out, k, stride): - return nn.Sequential( - nn.Conv1d(n_in, n_out, k, stride=stride, bias=False), - nn.Dropout(p=dropout), - norm_block( - is_layer_norm=False, dim=n_out, affine=not non_affine_group_norm - ), - activation, - ) - - in_d = 1 - self.conv_layers = nn.ModuleList() - for dim, k, stride in conv_layers: - self.conv_layers.append(block(in_d, dim, k, stride)) - in_d = dim - - self.log_compression = log_compression - self.skip_connections = skip_connections - self.residual_scale = math.sqrt(residual_scale) - - def forward(self, x): - # BxT -> BxCxT - x = x.unsqueeze(1) - - for conv in self.conv_layers: - residual = x - x = conv(x) - if self.skip_connections and x.size(1) == residual.size(1): - tsz = x.size(2) - r_tsz = residual.size(2) - residual = residual[..., :: r_tsz // tsz][..., :tsz] - x = (x + residual) * self.residual_scale - - if self.log_compression: - x = x.abs() - x = x 
+ 1 - x = x.log() - - return x - - -class ZeroPad1d(nn.Module): - def __init__(self, pad_left, pad_right): - super().__init__() - self.pad_left = pad_left - self.pad_right = pad_right - - def forward(self, x): - return F.pad(x, (self.pad_left, self.pad_right)) - - -class ConvAggegator(nn.Module): - def __init__( - self, - conv_layers, - embed, - dropout, - skip_connections, - residual_scale, - non_affine_group_norm, - conv_bias, - zero_pad, - activation, - ): - super().__init__() - - def block(n_in, n_out, k, stride): - # padding dims only really make sense for stride = 1 - ka = k // 2 - kb = ka - 1 if k % 2 == 0 else ka - - pad = ( - ZeroPad1d(ka + kb, 0) if zero_pad else nn.ReplicationPad1d((ka + kb, 0)) - ) - - return nn.Sequential( - pad, - nn.Conv1d(n_in, n_out, k, stride=stride, bias=conv_bias), - nn.Dropout(p=dropout), - norm_block(False, n_out, affine=not non_affine_group_norm), - activation, - ) - - in_d = embed - self.conv_layers = nn.ModuleList() - self.residual_proj = nn.ModuleList() - for dim, k, stride in conv_layers: - if in_d != dim and skip_connections: - self.residual_proj.append(nn.Conv1d(in_d, dim, 1, bias=False)) - else: - self.residual_proj.append(None) - - self.conv_layers.append(block(in_d, dim, k, stride)) - in_d = dim - self.conv_layers = nn.Sequential(*self.conv_layers) - self.skip_connections = skip_connections - self.residual_scale = math.sqrt(residual_scale) - - def forward(self, x): - for rproj, conv in zip(self.residual_proj, self.conv_layers): - residual = x - x = conv(x) - if self.skip_connections: - if rproj is not None: - residual = rproj(residual) - x = (x + residual) * self.residual_scale - return x - - -class Wav2VecPredictionsModel(nn.Module): - def __init__( - self, - in_dim, - out_dim, - prediction_steps, - n_negatives, - cross_sample_negatives, - sample_distance, - dropout, - offset, - balanced_classes, - infonce, - ): - super().__init__() - - self.n_negatives = n_negatives - self.cross_sample_negatives = cross_sample_negatives - self.sample_distance = sample_distance - self.project_to_steps = nn.ConvTranspose2d( - in_dim, out_dim, (1, prediction_steps) - ) - self.dropout = nn.Dropout(p=dropout) - self.offset = offset - self.balanced_classes = balanced_classes - self.infonce = infonce - - def sample_negatives(self, y): - bsz, fsz, tsz = y.shape - - y = y.transpose(0, 1) # BCT -> CBT - y = y.contiguous().view(fsz, -1) # CBT => C(BxT) - - cross_high = tsz * bsz - high = tsz if self.sample_distance is None else min(tsz, self.sample_distance) - assert high > 1 - - neg_idxs = torch.randint(low=0, high=high, size=(bsz, self.n_negatives * tsz)) - - with torch.no_grad(): - if self.n_negatives > 0: - tszs = ( - buffered_arange(tsz) - .unsqueeze(-1) - .expand(-1, self.n_negatives) - .flatten() - ) - - neg_idxs = torch.randint( - low=0, high=high - 1, size=(bsz, self.n_negatives * tsz) - ) - neg_idxs[neg_idxs >= tszs] += 1 - - if self.cross_sample_negatives > 0: - tszs = ( - buffered_arange(tsz) - .unsqueeze(-1) - .expand(-1, self.cross_sample_negatives) - .flatten() - ) - - cross_neg_idxs = torch.randint( - low=0, - high=cross_high - 1, - size=(bsz, self.cross_sample_negatives * tsz), - ) - cross_neg_idxs[cross_neg_idxs >= tszs] += 1 - - if self.n_negatives > 0: - for i in range(1, bsz): - neg_idxs[i] += i * high - else: - neg_idxs = cross_neg_idxs - - if self.cross_sample_negatives > 0 and self.n_negatives > 0: - neg_idxs = torch.cat([neg_idxs, cross_neg_idxs], dim=1) - - negs = y[..., neg_idxs.view(-1)] - negs = negs.view( - fsz, bsz, self.n_negatives + 
self.cross_sample_negatives, tsz - ).permute( - 2, 1, 0, 3 - ) # to NxBxCxT - - return negs - - def forward(self, x, y): - - x = x.unsqueeze(-1) - x = self.project_to_steps(x) # BxCxTxS - x = self.dropout(x) - - negatives = self.sample_negatives(y) - y = y.unsqueeze(0) - targets = torch.cat([y, negatives], dim=0) # Copies x B x C x T - - copies = targets.size(0) - bsz, dim, tsz, steps = x.shape - steps = min(steps, tsz - self.offset) - - predictions = x.new( - bsz * copies * (tsz - self.offset + 1) * steps - - ((steps + 1) * steps // 2) * copies * bsz - ) - if self.infonce: - labels = predictions.new_full( - (predictions.shape[0] // copies,), 0, dtype=torch.long - ) - else: - labels = torch.zeros_like(predictions) - weights = ( - torch.full_like(labels, 1 / self.n_negatives) - if self.balanced_classes and not self.infonce - else None - ) - - start = end = 0 - for i in range(steps): - offset = i + self.offset - end = start + (tsz - offset) * bsz * copies - if self.infonce: - predictions[start:end] = torch.einsum( - "bct,nbct->tbn", x[..., :-offset, i], targets[..., offset:] - ).flatten() - else: - pos_num = (end - start) // copies - predictions[start:end] = torch.einsum( - "bct,nbct->nbt", x[..., :-offset, i], targets[..., offset:] - ).flatten() - labels[start : start + pos_num] = 1.0 - if weights is not None: - weights[start : start + pos_num] = 1.0 - start = end - assert end == predictions.numel(), "{} != {}".format(end, predictions.numel()) - - if self.infonce: - predictions = predictions.view(-1, copies) - else: - if weights is not None: - labels = (labels, weights) - - return predictions, labels diff --git a/spaces/OkamiFeng/Bark-with-Voice-Cloning/util/parseinput.py b/spaces/OkamiFeng/Bark-with-Voice-Cloning/util/parseinput.py deleted file mode 100644 index f2102648cf169f0a52bb66755308fee5f81247e0..0000000000000000000000000000000000000000 --- a/spaces/OkamiFeng/Bark-with-Voice-Cloning/util/parseinput.py +++ /dev/null @@ -1,129 +0,0 @@ -import re -import xml.etree.ElementTree as ET -from xml.sax import saxutils -#import nltk - -# Chunked generation originally from https://github.com/serp-ai/bark-with-voice-clone -def split_and_recombine_text(text, desired_length=100, max_length=150): - # return nltk.sent_tokenize(text) - - # from https://github.com/neonbjb/tortoise-tts - """Split text it into chunks of a desired length trying to keep sentences intact.""" - # normalize text, remove redundant whitespace and convert non-ascii quotes to ascii - text = re.sub(r"\n\n+", "\n", text) - text = re.sub(r"\s+", " ", text) - text = re.sub(r"[“”]", '"', text) - - rv = [] - in_quote = False - current = "" - split_pos = [] - pos = -1 - end_pos = len(text) - 1 - - def seek(delta): - nonlocal pos, in_quote, current - is_neg = delta < 0 - for _ in range(abs(delta)): - if is_neg: - pos -= 1 - current = current[:-1] - else: - pos += 1 - current += text[pos] - if text[pos] == '"': - in_quote = not in_quote - return text[pos] - - def peek(delta): - p = pos + delta - return text[p] if p < end_pos and p >= 0 else "" - - def commit(): - nonlocal rv, current, split_pos - rv.append(current) - current = "" - split_pos = [] - - while pos < end_pos: - c = seek(1) - # do we need to force a split? 
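        # Two fallbacks, tried in order: rewind to the most recent recorded
        # sentence boundary when one exists and the chunk is already past half
        # of desired_length; otherwise rewind one character at a time until the
        # cut no longer lands in the middle of a word.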
-        if len(current) >= max_length:
-            if len(split_pos) > 0 and len(current) > (desired_length / 2):
-                # we have at least one sentence and we are over half the desired length, seek back to the last split
-                d = pos - split_pos[-1]
-                seek(-d)
-            else:
-                # no full sentences, seek back until we are not in the middle of a word and split there
-                while c not in "!?.,\n " and pos > 0 and len(current) > desired_length:
-                    c = seek(-1)
-            commit()
-        # check for sentence boundaries
-        elif not in_quote and (c in "!?]\n" or (c == "." and peek(1) in "\n ")):
-            # seek forward if we have consecutive boundary markers but still within the max length
-            while (
-                pos < len(text) - 1 and len(current) < max_length and peek(1) in "!?.]"
-            ):
-                c = seek(1)
-            split_pos.append(pos)
-            if len(current) >= desired_length:
-                commit()
-        # treat end of quote as a boundary if it's followed by a space or newline
-        elif in_quote and peek(1) == '"' and peek(2) in "\n ":
-            seek(2)
-            split_pos.append(pos)
-    rv.append(current)
-
-    # clean up, remove lines with only whitespace or punctuation
-    rv = [s.strip() for s in rv]
-    rv = [s for s in rv if len(s) > 0 and not re.match(r"^[\s\.,;:!?]*$", s)]
-
-    return rv
-
-def is_ssml(value):
-    try:
-        ET.fromstring(value)
-    except ET.ParseError:
-        return False
-    return True
-
-def build_ssml(rawtext, selected_voice):
-    texts = rawtext.split("\n")
-    joinedparts = ""
-    for textpart in texts:
-        textpart = textpart.strip()
-        if len(textpart) < 1:
-            continue
-        joinedparts = joinedparts + f"\n<voice name=\"{selected_voice}\">{saxutils.escape(textpart)}</voice>"
-    ssml = f"""<?xml version="1.0"?>
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en">
-{joinedparts}
-</speak>
-"""
-    return ssml
-
-def create_clips_from_ssml(ssmlinput):
-    # Parse the XML
-    tree = ET.ElementTree(ET.fromstring(ssmlinput))
-    root = tree.getroot()
-
-    # Create an empty list
-    voice_list = []
-
-    # Loop through all voice tags
-    for voice in root.iter('{http://www.w3.org/2001/10/synthesis}voice'):
-        # Extract the voice name attribute and the content text
-        voice_name = voice.attrib['name']
-        voice_content = voice.text.strip() if voice.text else ''
-        if(len(voice_content) > 0):
-            parts = split_and_recombine_text(voice_content)
-            for p in parts:
-                if(len(p) > 1):
-                    # add to tuple list
-                    voice_list.append((voice_name, p))
-    return voice_list
-
diff --git a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/service/app.py b/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/service/app.py
deleted file mode 100644
index 3185cc3be0215d92a89a88a5b2a79f7e69b08518..0000000000000000000000000000000000000000
--- a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/service/app.py
+++ /dev/null
@@ -1,142 +0,0 @@
-import time
-from flask import Flask, request, jsonify, make_response
-from flask_restplus import Api, Resource, fields
-from threading import Thread
-from sheep_env import SheepEnv
-
-flask_app = Flask(__name__)
-app = Api(
-    app=flask_app,
-    version="0.0.1",
-    title="DI-sheep App",
-    description="Play Sheep with Deep Reinforcement Learning, Powered by OpenDILab"
-)
-
-name_space = app.namespace('DI-sheep', description='DI-sheep APIs')
-model = app.model(
-    'DI-sheep params', {
-        'command': fields.String(required=False, description="Command Field", help="reset, step"),
-        'argument': fields.Integer(required=False, description="Argument Field", help="reset->level, step->action"),
-    }
-)
-MAX_ENV_NUM = 50
-ENV_TIMEOUT_SECOND = 60
-envs = {}
-
-
-def env_monitor():
-    while True:
-        cur_time = time.time()
-        pop_keys = []
-        for k, v in envs.items():
-            if cur_time - v['update_time'] >= ENV_TIMEOUT_SECOND:
-                pop_keys.append(k)
-        for k in pop_keys:
-            envs.pop(k)
-        time.sleep(1)
-
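A quick round-trip sketch for the parseinput helpers above (an illustrative example, not part of either file: the import path is assumed from the file location, the speaker name is invented, and the printed pairs are indicative):

from util import parseinput  # assumed import path for util/parseinput.py above

ssml = parseinput.build_ssml("Hello there.\nHow are you today?", "en_speaker_0")
assert parseinput.is_ssml(ssml)
for voice_name, clip in parseinput.create_clips_from_ssml(ssml):
    print(voice_name, "->", clip)   # e.g. en_speaker_0 -> Hello there.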
- -app.env_thread = Thread(target=env_monitor, daemon=True) -app.env_thread.start() - - -@name_space.route("/") -class MainClass(Resource): - - def options(self): - response = make_response() - response.headers.add("Access-Control-Allow-Origin", "*") - response.headers.add('Access-Control-Allow-Headers', "*") - response.headers.add('Access-Control-Allow-Methods', "*") - return response - - @app.expect(model) - def post(self): - try: - t_start = time.time() - data = request.json - cmd, arg, uid = data['command'], data['argument'], data['uid'] - ip = request.remote_addr - ip = str(ip) + str(uid) - - if ip not in envs: - if cmd == 'reset': - if len(envs) >= MAX_ENV_NUM: - response = jsonify( - { - "statusCode": 501, - "status": "No enough env resource, please wait a moment", - } - ) - response.headers.add('Access-Control-Allow-Origin', '*') - return response - else: - env = SheepEnv(1, agent=False) - envs[ip] = {'env': env, 'update_time': time.time()} - else: - response = jsonify( - { - "statusCode": 501, - "status": "No response for too long time, please reset the game", - } - ) - response.headers.add('Access-Control-Allow-Origin', '*') - return response - else: - env = envs[ip]['env'] - envs[ip]['update_time'] = time.time() - if cmd == 'reset': - env.reset(arg) - scene = [item.to_json() for item in env.scene if item is not None] - response = jsonify( - { - "statusCode": 200, - "status": "Execution action", - "result": { - "scene": scene, - "max_item_num": env.total_item_num, - } - } - ) - elif cmd == 'step': - _, _, done, _ = env.step(arg) - scene = [item.to_json() for item in env.scene if item is not None] - bucket = [item.to_json() for item in env.bucket] - response = jsonify( - { - "statusCode": 200, - "status": "Execution action", - "result": { - "scene": scene, - "bucket": bucket, - "done": done, - } - } - ) - else: - response = jsonify({ - "statusCode": 500, - "status": "Invalid command: {}".format(cmd), - }) - response.headers.add('Access-Control-Allow-Origin', '*') - return response - print('backend process time: {}'.format(time.time() - t_start)) - print('current env number: {}'.format(len(envs))) - response.headers.add('Access-Control-Allow-Origin', '*') - return response - except Exception as e: - import traceback - print(repr(e)) - print(traceback.format_exc()) - response = jsonify({ - "statusCode": 500, - "status": "Could not execute action", - }) - response.headers.add('Access-Control-Allow-Origin', '*') - return response - -if __name__ == "__main__": - flask_app.run() - - - diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/logger/__init__.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/logger/__init__.py deleted file mode 100644 index a0b6b345640a895368ac8a647afef6f24333d90e..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/logger/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
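A hypothetical client for the DI-sheep service above (sketch only: the address assumes Flask's development defaults, and uid is any string the caller keeps stable across turns):

import requests

BASE = "http://127.0.0.1:5000/DI-sheep/"  # assumed local dev address

def reset(level: int, uid: str = "demo") -> dict:
    # Creates (or restarts) a per-client env; the server keys envs by IP + uid.
    return requests.post(BASE, json={"command": "reset", "argument": level, "uid": uid}).json()

def step(action: int, uid: str = "demo") -> dict:
    return requests.post(BASE, json={"command": "step", "argument": action, "uid": uid}).json()

state = reset(1)
print(state["result"]["max_item_num"])
print(step(0)["result"]["done"])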
-from .base import LoggerHook -from .dvclive import DvcliveLoggerHook -from .mlflow import MlflowLoggerHook -from .neptune import NeptuneLoggerHook -from .pavi import PaviLoggerHook -from .tensorboard import TensorboardLoggerHook -from .text import TextLoggerHook -from .wandb import WandbLoggerHook - -__all__ = [ - 'LoggerHook', 'MlflowLoggerHook', 'PaviLoggerHook', - 'TensorboardLoggerHook', 'TextLoggerHook', 'WandbLoggerHook', - 'NeptuneLoggerHook', 'DvcliveLoggerHook' -] diff --git a/spaces/PaddlePaddle/jieba_paddle/app.py b/spaces/PaddlePaddle/jieba_paddle/app.py deleted file mode 100644 index 2900b4c30c47f4246d6e968d8aca17c978df3549..0000000000000000000000000000000000000000 --- a/spaces/PaddlePaddle/jieba_paddle/app.py +++ /dev/null @@ -1,16 +0,0 @@ -import gradio as gr -import paddlehub as hub - - -jieba_paddle = hub.Module(name="jieba_paddle") - -def inference(text): - results = jieba_paddle.cut(sentence=text) - return results - - -title="jieba_paddle" -description="jieba_paddle is a word segmentation model based on paddlepaddle deep learning framework." - -examples=[['今天是个好日子']] -gr.Interface(inference,"text",[gr.outputs.Textbox(label="words")],title=title,description=description,examples=examples).launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/peg.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/peg.go deleted file mode 100644 index c65685bc87b1e124660fedc845b7415e3dde95a2..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/peg.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/q.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/q.go deleted file mode 100644 index 8796d365dac64e9492b9a4a5ee3bc0576d1e4458..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/q.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/web/response.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/web/response.go deleted file mode 100644 index a41c98abb3cfd80f0826937c775fa5abd5d833ef..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/web/response.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/AutoGPT/tests/unit/json_tests.py b/spaces/PeepDaSlan9/AutoGPT/tests/unit/json_tests.py deleted file mode 100644 index 25c383377708359b5cfec28e0625343c5692f15c..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/AutoGPT/tests/unit/json_tests.py +++ /dev/null @@ -1,114 +0,0 @@ -import unittest - -from autogpt.json_utils.json_fix_llm import fix_and_parse_json - - -class TestParseJson(unittest.TestCase): - def test_valid_json(self): - # Test that a valid JSON string is parsed correctly - json_str = '{"name": "John", "age": 30, "city": "New York"}' - obj = fix_and_parse_json(json_str) - self.assertEqual(obj, {"name": "John", "age": 30, "city": "New York"}) - - def test_invalid_json_minor(self): - # Test that an invalid JSON string can be fixed with gpt - json_str = '{"name": "John", "age": 30, "city": "New York",}' - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), - {"name": "John", "age": 30, "city": "New York"}, - ) - - def 
test_invalid_json_major_with_gpt(self): - # Test that an invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=True), - {"name": "John", "age": 30, "city": "New York"}, - ) - - def test_invalid_json_major_without_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - # Assert that this raises an exception: - with self.assertRaises(Exception): - fix_and_parse_json(json_str, try_to_fix_with_gpt=False) - - def test_invalid_json_leading_sentence_with_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = """I suggest we start by browsing the repository to find any issues that we can fix. - -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix." - } -}""" - good_obj = { - "command": { - "name": "browse_website", - "args": {"url": "https://github.com/Torantulino/Auto-GPT"}, - }, - "thoughts": { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix.", - }, - } - # Assert that this raises an exception: - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj - ) - - def test_invalid_json_leading_sentence_with_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = """I will first need to browse the repository (https://github.com/Torantulino/Auto-GPT) and identify any potential bugs that need fixing. I will use the "browse_website" command for this. - -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "Browsing the repository to identify potential bugs", - "reasoning": "Before fixing bugs, I need to identify what needs fixing. 
I will use the 'browse_website' command to analyze the repository.", - "plan": "- Analyze the repository for potential bugs and areas of improvement", - "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.", - "speak": "I am browsing the repository to identify potential bugs." - } -}""" - good_obj = { - "command": { - "name": "browse_website", - "args": {"url": "https://github.com/Torantulino/Auto-GPT"}, - }, - "thoughts": { - "text": "Browsing the repository to identify potential bugs", - "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.", - "plan": "- Analyze the repository for potential bugs and areas of improvement", - "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.", - "speak": "I am browsing the repository to identify potential bugs.", - }, - } - # Assert that this raises an exception: - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj - ) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/Pengyey/bingo-chuchu/src/components/ui/dropdown-menu.tsx b/spaces/Pengyey/bingo-chuchu/src/components/ui/dropdown-menu.tsx deleted file mode 100644 index 184d4e6007ef85187446362f69532ab077897fea..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/components/ui/dropdown-menu.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DropdownMenuPrimitive from '@radix-ui/react-dropdown-menu' - -import { cn } from '@/lib/utils' - -const DropdownMenu = DropdownMenuPrimitive.Root - -const DropdownMenuTrigger = DropdownMenuPrimitive.Trigger - -const DropdownMenuGroup = DropdownMenuPrimitive.Group - -const DropdownMenuPortal = DropdownMenuPrimitive.Portal - -const DropdownMenuSub = DropdownMenuPrimitive.Sub - -const DropdownMenuRadioGroup = DropdownMenuPrimitive.RadioGroup - -const DropdownMenuSubContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSubContent.displayName = - DropdownMenuPrimitive.SubContent.displayName - -const DropdownMenuContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - - - -)) -DropdownMenuContent.displayName = DropdownMenuPrimitive.Content.displayName - -const DropdownMenuItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuItem.displayName = DropdownMenuPrimitive.Item.displayName - -const DropdownMenuLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuLabel.displayName = DropdownMenuPrimitive.Label.displayName - -const DropdownMenuSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSeparator.displayName = DropdownMenuPrimitive.Separator.displayName - -const DropdownMenuShortcut = ({ - className, - ...props -}: React.HTMLAttributes) => { - return ( - - ) -} -DropdownMenuShortcut.displayName = 'DropdownMenuShortcut' - -export { - DropdownMenu, - DropdownMenuTrigger, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuLabel, - DropdownMenuSeparator, - DropdownMenuShortcut, - 
DropdownMenuGroup, - DropdownMenuPortal, - DropdownMenuSub, - DropdownMenuSubContent, - DropdownMenuRadioGroup -} diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/context_block.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/context_block.py deleted file mode 100644 index d60fdb904c749ce3b251510dff3cc63cea70d42e..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/context_block.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn - -from ..utils import constant_init, kaiming_init -from .registry import PLUGIN_LAYERS - - -def last_zero_init(m): - if isinstance(m, nn.Sequential): - constant_init(m[-1], val=0) - else: - constant_init(m, val=0) - - -@PLUGIN_LAYERS.register_module() -class ContextBlock(nn.Module): - """ContextBlock module in GCNet. - - See 'GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond' - (https://arxiv.org/abs/1904.11492) for details. - - Args: - in_channels (int): Channels of the input feature map. - ratio (float): Ratio of channels of transform bottleneck - pooling_type (str): Pooling method for context modeling. - Options are 'att' and 'avg', stand for attention pooling and - average pooling respectively. Default: 'att'. - fusion_types (Sequence[str]): Fusion method for feature fusion, - Options are 'channels_add', 'channel_mul', stand for channelwise - addition and multiplication respectively. Default: ('channel_add',) - """ - - _abbr_ = 'context_block' - - def __init__(self, - in_channels, - ratio, - pooling_type='att', - fusion_types=('channel_add', )): - super(ContextBlock, self).__init__() - assert pooling_type in ['avg', 'att'] - assert isinstance(fusion_types, (list, tuple)) - valid_fusion_types = ['channel_add', 'channel_mul'] - assert all([f in valid_fusion_types for f in fusion_types]) - assert len(fusion_types) > 0, 'at least one fusion should be used' - self.in_channels = in_channels - self.ratio = ratio - self.planes = int(in_channels * ratio) - self.pooling_type = pooling_type - self.fusion_types = fusion_types - if pooling_type == 'att': - self.conv_mask = nn.Conv2d(in_channels, 1, kernel_size=1) - self.softmax = nn.Softmax(dim=2) - else: - self.avg_pool = nn.AdaptiveAvgPool2d(1) - if 'channel_add' in fusion_types: - self.channel_add_conv = nn.Sequential( - nn.Conv2d(self.in_channels, self.planes, kernel_size=1), - nn.LayerNorm([self.planes, 1, 1]), - nn.ReLU(inplace=True), # yapf: disable - nn.Conv2d(self.planes, self.in_channels, kernel_size=1)) - else: - self.channel_add_conv = None - if 'channel_mul' in fusion_types: - self.channel_mul_conv = nn.Sequential( - nn.Conv2d(self.in_channels, self.planes, kernel_size=1), - nn.LayerNorm([self.planes, 1, 1]), - nn.ReLU(inplace=True), # yapf: disable - nn.Conv2d(self.planes, self.in_channels, kernel_size=1)) - else: - self.channel_mul_conv = None - self.reset_parameters() - - def reset_parameters(self): - if self.pooling_type == 'att': - kaiming_init(self.conv_mask, mode='fan_in') - self.conv_mask.inited = True - - if self.channel_add_conv is not None: - last_zero_init(self.channel_add_conv) - if self.channel_mul_conv is not None: - last_zero_init(self.channel_mul_conv) - - def spatial_pool(self, x): - batch, channel, height, width = x.size() - if self.pooling_type == 'att': - input_x = x - # [N, C, H * W] - input_x = input_x.view(batch, channel, height * width) - # [N, 1, C, H * W] - input_x = 
input_x.unsqueeze(1) - # [N, 1, H, W] - context_mask = self.conv_mask(x) - # [N, 1, H * W] - context_mask = context_mask.view(batch, 1, height * width) - # [N, 1, H * W] - context_mask = self.softmax(context_mask) - # [N, 1, H * W, 1] - context_mask = context_mask.unsqueeze(-1) - # [N, 1, C, 1] - context = torch.matmul(input_x, context_mask) - # [N, C, 1, 1] - context = context.view(batch, channel, 1, 1) - else: - # [N, C, 1, 1] - context = self.avg_pool(x) - - return context - - def forward(self, x): - # [N, C, 1, 1] - context = self.spatial_pool(x) - - out = x - if self.channel_mul_conv is not None: - # [N, C, 1, 1] - channel_mul_term = torch.sigmoid(self.channel_mul_conv(context)) - out = out * channel_mul_term - if self.channel_add_conv is not None: - # [N, C, 1, 1] - channel_add_term = self.channel_add_conv(context) - out = out + channel_add_term - - return out diff --git a/spaces/Plurigrid/LifeSim/src/app/agents/ant.ts b/spaces/Plurigrid/LifeSim/src/app/agents/ant.ts deleted file mode 100644 index 0c3d40f7b8010361be14fcf5a06b80e53b25f86f..0000000000000000000000000000000000000000 --- a/spaces/Plurigrid/LifeSim/src/app/agents/ant.ts +++ /dev/null @@ -1,42 +0,0 @@ -import { pick } from "./pick" -import { Agent, Scene } from "./types" - -const actions = [ - "working on lavae", - "slicing leaves", - "attacking a beetle", - "foraging", - "cutting a sugar cube", - "collecting sugar", - "collecting aphids" -] - -const positions = [ - "on a leave", - "on a tree branch", - "on sand", - "on the ground" -] - -export const agent: Agent = { - title: "Ant", - type: "ant", - simulate: (): Scene => { - const action = pick(actions) - const position = pick(positions) - - const prompt = [ - `close-up shot of a couple of ants`, - action, - position, - `high res`, - `documentary`, - ].join(", ") - - return { - action, - position, - prompt - } - } -} diff --git a/spaces/QINGFNEG/White-box-Cartoonization/wbc/network.py b/spaces/QINGFNEG/White-box-Cartoonization/wbc/network.py deleted file mode 100644 index 6f16cee1aa1994d0a78c524f459764de5164e637..0000000000000000000000000000000000000000 --- a/spaces/QINGFNEG/White-box-Cartoonization/wbc/network.py +++ /dev/null @@ -1,62 +0,0 @@ -import tensorflow as tf -import numpy as np -import tensorflow.contrib.slim as slim - - - -def resblock(inputs, out_channel=32, name='resblock'): - - with tf.variable_scope(name): - - x = slim.convolution2d(inputs, out_channel, [3, 3], - activation_fn=None, scope='conv1') - x = tf.nn.leaky_relu(x) - x = slim.convolution2d(x, out_channel, [3, 3], - activation_fn=None, scope='conv2') - - return x + inputs - - - - -def unet_generator(inputs, channel=32, num_blocks=4, name='generator', reuse=False): - with tf.variable_scope(name, reuse=reuse): - - x0 = slim.convolution2d(inputs, channel, [7, 7], activation_fn=None) - x0 = tf.nn.leaky_relu(x0) - - x1 = slim.convolution2d(x0, channel, [3, 3], stride=2, activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - x1 = slim.convolution2d(x1, channel*2, [3, 3], activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - - x2 = slim.convolution2d(x1, channel*2, [3, 3], stride=2, activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - x2 = slim.convolution2d(x2, channel*4, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - for idx in range(num_blocks): - x2 = resblock(x2, out_channel=channel*4, name='block_{}'.format(idx)) - - x2 = slim.convolution2d(x2, channel*2, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - h1, w1 = tf.shape(x2)[1], tf.shape(x2)[2] - x3 = tf.image.resize_bilinear(x2, (h1*2, 
w1*2)) - x3 = slim.convolution2d(x3+x1, channel*2, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - x3 = slim.convolution2d(x3, channel, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - - h2, w2 = tf.shape(x3)[1], tf.shape(x3)[2] - x4 = tf.image.resize_bilinear(x3, (h2*2, w2*2)) - x4 = slim.convolution2d(x4+x0, channel, [3, 3], activation_fn=None) - x4 = tf.nn.leaky_relu(x4) - x4 = slim.convolution2d(x4, 3, [7, 7], activation_fn=None) - - return x4 - -if __name__ == '__main__': - - - pass \ No newline at end of file diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/vcs/subversion.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/vcs/subversion.py deleted file mode 100644 index 2cd6f0ae9d29a1e8cb58033b077f9b0ea7ceac5c..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/vcs/subversion.py +++ /dev/null @@ -1,324 +0,0 @@ -import logging -import os -import re -from typing import List, Optional, Tuple - -from pip._internal.utils.misc import ( - HiddenText, - display_path, - is_console_interactive, - is_installable_dir, - split_auth_from_netloc, -) -from pip._internal.utils.subprocess import CommandArgs, make_command -from pip._internal.vcs.versioncontrol import ( - AuthInfo, - RemoteNotFoundError, - RevOptions, - VersionControl, - vcs, -) - -logger = logging.getLogger(__name__) - -_svn_xml_url_re = re.compile('url="([^"]+)"') -_svn_rev_re = re.compile(r'committed-rev="(\d+)"') -_svn_info_xml_rev_re = re.compile(r'\s*revision="(\d+)"') -_svn_info_xml_url_re = re.compile(r"(.*)") - - -class Subversion(VersionControl): - name = "svn" - dirname = ".svn" - repo_name = "checkout" - schemes = ("svn+ssh", "svn+http", "svn+https", "svn+svn", "svn+file") - - @classmethod - def should_add_vcs_url_prefix(cls, remote_url: str) -> bool: - return True - - @staticmethod - def get_base_rev_args(rev: str) -> List[str]: - return ["-r", rev] - - @classmethod - def get_revision(cls, location: str) -> str: - """ - Return the maximum revision for all files under a given location - """ - # Note: taken from setuptools.command.egg_info - revision = 0 - - for base, dirs, _ in os.walk(location): - if cls.dirname not in dirs: - dirs[:] = [] - continue # no sense walking uncontrolled subdirs - dirs.remove(cls.dirname) - entries_fn = os.path.join(base, cls.dirname, "entries") - if not os.path.exists(entries_fn): - # FIXME: should we warn? - continue - - dirurl, localrev = cls._get_svn_url_rev(base) - - if base == location: - assert dirurl is not None - base = dirurl + "/" # save the root url - elif not dirurl or not dirurl.startswith(base): - dirs[:] = [] - continue # not part of the same svn tree, skip it - revision = max(revision, localrev) - return str(revision) - - @classmethod - def get_netloc_and_auth( - cls, netloc: str, scheme: str - ) -> Tuple[str, Tuple[Optional[str], Optional[str]]]: - """ - This override allows the auth information to be passed to svn via the - --username and --password options instead of via the URL. - """ - if scheme == "ssh": - # The --username and --password options can't be used for - # svn+ssh URLs, so keep the auth information in the URL. 
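            # For example, svn+ssh://user@host/repo keeps "user@host" intact in
            # the returned netloc, while svn+https://user:pass@host/repo falls
            # through to split_auth_from_netloc below, letting make_rev_args
            # later turn the credentials into --username/--password options.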
- return super().get_netloc_and_auth(netloc, scheme) - - return split_auth_from_netloc(netloc) - - @classmethod - def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]: - # hotfix the URL scheme after removing svn+ from svn+ssh:// readd it - url, rev, user_pass = super().get_url_rev_and_auth(url) - if url.startswith("ssh://"): - url = "svn+" + url - return url, rev, user_pass - - @staticmethod - def make_rev_args( - username: Optional[str], password: Optional[HiddenText] - ) -> CommandArgs: - extra_args: CommandArgs = [] - if username: - extra_args += ["--username", username] - if password: - extra_args += ["--password", password] - - return extra_args - - @classmethod - def get_remote_url(cls, location: str) -> str: - # In cases where the source is in a subdirectory, we have to look up in - # the location until we find a valid project root. - orig_location = location - while not is_installable_dir(location): - last_location = location - location = os.path.dirname(location) - if location == last_location: - # We've traversed up to the root of the filesystem without - # finding a Python project. - logger.warning( - "Could not find Python project for directory %s (tried all " - "parent directories)", - orig_location, - ) - raise RemoteNotFoundError - - url, _rev = cls._get_svn_url_rev(location) - if url is None: - raise RemoteNotFoundError - - return url - - @classmethod - def _get_svn_url_rev(cls, location: str) -> Tuple[Optional[str], int]: - from pip._internal.exceptions import InstallationError - - entries_path = os.path.join(location, cls.dirname, "entries") - if os.path.exists(entries_path): - with open(entries_path) as f: - data = f.read() - else: # subversion >= 1.7 does not have the 'entries' file - data = "" - - url = None - if data.startswith("8") or data.startswith("9") or data.startswith("10"): - entries = list(map(str.splitlines, data.split("\n\x0c\n"))) - del entries[0][0] # get rid of the '8' - url = entries[0][3] - revs = [int(d[9]) for d in entries if len(d) > 9 and d[9]] + [0] - elif data.startswith("= 1.7 - # Note that using get_remote_call_options is not necessary here - # because `svn info` is being run against a local directory. - # We don't need to worry about making sure interactive mode - # is being used to prompt for passwords, because passwords - # are only potentially needed for remote server requests. - xml = cls.run_command( - ["info", "--xml", location], - show_stdout=False, - stdout_only=True, - ) - match = _svn_info_xml_url_re.search(xml) - assert match is not None - url = match.group(1) - revs = [int(m.group(1)) for m in _svn_info_xml_rev_re.finditer(xml)] - except InstallationError: - url, revs = None, [] - - if revs: - rev = max(revs) - else: - rev = 0 - - return url, rev - - @classmethod - def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool: - """Always assume the versions don't match""" - return False - - def __init__(self, use_interactive: Optional[bool] = None) -> None: - if use_interactive is None: - use_interactive = is_console_interactive() - self.use_interactive = use_interactive - - # This member is used to cache the fetched version of the current - # ``svn`` client. - # Special value definitions: - # None: Not evaluated yet. - # Empty tuple: Could not parse version. - self._vcs_version: Optional[Tuple[int, ...]] = None - - super().__init__() - - def call_vcs_version(self) -> Tuple[int, ...]: - """Query the version of the currently installed Subversion client. 
- - :return: A tuple containing the parts of the version information or - ``()`` if the version returned from ``svn`` could not be parsed. - :raises: BadCommand: If ``svn`` is not installed. - """ - # Example versions: - # svn, version 1.10.3 (r1842928) - # compiled Feb 25 2019, 14:20:39 on x86_64-apple-darwin17.0.0 - # svn, version 1.7.14 (r1542130) - # compiled Mar 28 2018, 08:49:13 on x86_64-pc-linux-gnu - # svn, version 1.12.0-SlikSvn (SlikSvn/1.12.0) - # compiled May 28 2019, 13:44:56 on x86_64-microsoft-windows6.2 - version_prefix = "svn, version " - version = self.run_command(["--version"], show_stdout=False, stdout_only=True) - if not version.startswith(version_prefix): - return () - - version = version[len(version_prefix) :].split()[0] - version_list = version.partition("-")[0].split(".") - try: - parsed_version = tuple(map(int, version_list)) - except ValueError: - return () - - return parsed_version - - def get_vcs_version(self) -> Tuple[int, ...]: - """Return the version of the currently installed Subversion client. - - If the version of the Subversion client has already been queried, - a cached value will be used. - - :return: A tuple containing the parts of the version information or - ``()`` if the version returned from ``svn`` could not be parsed. - :raises: BadCommand: If ``svn`` is not installed. - """ - if self._vcs_version is not None: - # Use cached version, if available. - # If parsing the version failed previously (empty tuple), - # do not attempt to parse it again. - return self._vcs_version - - vcs_version = self.call_vcs_version() - self._vcs_version = vcs_version - return vcs_version - - def get_remote_call_options(self) -> CommandArgs: - """Return options to be used on calls to Subversion that contact the server. - - These options are applicable for the following ``svn`` subcommands used - in this class. - - - checkout - - switch - - update - - :return: A list of command line arguments to pass to ``svn``. - """ - if not self.use_interactive: - # --non-interactive switch is available since Subversion 0.14.4. - # Subversion < 1.8 runs in interactive mode by default. - return ["--non-interactive"] - - svn_version = self.get_vcs_version() - # By default, Subversion >= 1.8 runs in non-interactive mode if - # stdin is not a TTY. Since that is how pip invokes SVN, in - # call_subprocess(), pip must pass --force-interactive to ensure - # the user can be prompted for a password, if required. - # SVN added the --force-interactive option in SVN 1.8. Since - # e.g. RHEL/CentOS 7, which is supported until 2024, ships with - # SVN 1.7, pip should continue to support SVN 1.7. Therefore, pip - # can't safely add the option if the SVN version is < 1.8 (or unknown). 
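
The comment block just above explains the interactivity decision this deleted hunk makes; the `if svn_version >= (1, 8)` that follows is the whole of it. As an annotation, here is a standalone sketch of the full decision path, version parsing plus flag selection. It assumes the plain-text `svn --version` banner quoted in the deleted docstring; none of this is pip's code.

```python
# Sketch: parse `svn --version` output into a tuple, then pick the
# interactivity flag the way the deleted hunk does.
from typing import List, Tuple


def parse_svn_version(output: str) -> Tuple[int, ...]:
    prefix = "svn, version "
    if not output.startswith(prefix):
        return ()  # unparsable -> empty tuple, treated like "unknown"
    version = output[len(prefix):].split()[0]  # e.g. "1.12.0-SlikSvn"
    numeric = version.partition("-")[0]        # drop any vendor suffix
    try:
        return tuple(map(int, numeric.split(".")))
    except ValueError:
        return ()


def remote_call_options(use_interactive: bool,
                        svn_version: Tuple[int, ...]) -> List[str]:
    if not use_interactive:
        return ["--non-interactive"]
    # --force-interactive exists only in SVN >= 1.8; the empty "could not
    # parse" tuple compares less than (1, 8), so unknown versions get no flag.
    return ["--force-interactive"] if svn_version >= (1, 8) else []


assert parse_svn_version("svn, version 1.10.3 (r1842928)") == (1, 10, 3)
assert remote_call_options(True, (1, 7, 14)) == []
assert remote_call_options(False, ()) == ["--non-interactive"]
```
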
- if svn_version >= (1, 8): - return ["--force-interactive"] - - return [] - - def fetch_new( - self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int - ) -> None: - rev_display = rev_options.to_display() - logger.info( - "Checking out %s%s to %s", - url, - rev_display, - display_path(dest), - ) - if verbosity <= 0: - flag = "--quiet" - else: - flag = "" - cmd_args = make_command( - "checkout", - flag, - self.get_remote_call_options(), - rev_options.to_args(), - url, - dest, - ) - self.run_command(cmd_args) - - def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - cmd_args = make_command( - "switch", - self.get_remote_call_options(), - rev_options.to_args(), - url, - dest, - ) - self.run_command(cmd_args) - - def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - cmd_args = make_command( - "update", - self.get_remote_call_options(), - rev_options.to_args(), - dest, - ) - self.run_command(cmd_args) - - -vcs.register(Subversion) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/johabprober.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/johabprober.py deleted file mode 100644 index 6f359d193f73aec10b3f05aeff788fd274d4ebba..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/johabprober.py +++ /dev/null @@ -1,47 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is mozilla.org code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. 
-# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from .chardistribution import JOHABDistributionAnalysis -from .codingstatemachine import CodingStateMachine -from .mbcharsetprober import MultiByteCharSetProber -from .mbcssm import JOHAB_SM_MODEL - - -class JOHABProber(MultiByteCharSetProber): - def __init__(self): - super().__init__() - self.coding_sm = CodingStateMachine(JOHAB_SM_MODEL) - self.distribution_analyzer = JOHABDistributionAnalysis() - self.reset() - - @property - def charset_name(self): - return "Johab" - - @property - def language(self): - return "Korean" diff --git a/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/losses/depth_match_regression_loss.py b/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/losses/depth_match_regression_loss.py deleted file mode 100644 index 80da70347b4b4addc721e2a14ed489f8683fd48a..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/losses/depth_match_regression_loss.py +++ /dev/null @@ -1,128 +0,0 @@ -from einops.einops import rearrange -import torch -import torch.nn as nn -import torch.nn.functional as F -from dkm.utils.utils import warp_kpts - - -class DepthRegressionLoss(nn.Module): - def __init__( - self, - robust=True, - center_coords=False, - scale_normalize=False, - ce_weight=0.01, - local_loss=True, - local_dist=4.0, - local_largest_scale=8, - ): - super().__init__() - self.robust = robust # measured in pixels - self.center_coords = center_coords - self.scale_normalize = scale_normalize - self.ce_weight = ce_weight - self.local_loss = local_loss - self.local_dist = local_dist - self.local_largest_scale = local_largest_scale - - def geometric_dist(self, depth1, depth2, T_1to2, K1, K2, dense_matches, scale): - """[summary] - - Args: - H ([type]): [description] - scale ([type]): [description] - - Returns: - [type]: [description] - """ - b, h1, w1, d = dense_matches.shape - with torch.no_grad(): - x1_n = torch.meshgrid( - *[ - torch.linspace( - -1 + 1 / n, 1 - 1 / n, n, device=dense_matches.device - ) - for n in (b, h1, w1) - ] - ) - x1_n = torch.stack((x1_n[2], x1_n[1]), dim=-1).reshape(b, h1 * w1, 2) - mask, x2 = warp_kpts( - x1_n.double(), - depth1.double(), - depth2.double(), - T_1to2.double(), - K1.double(), - K2.double(), - ) - prob = mask.float().reshape(b, h1, w1) - gd = (dense_matches - x2.reshape(b, h1, w1, 2)).norm(dim=-1) # *scale? - return gd, prob - - def dense_depth_loss(self, dense_certainty, prob, gd, scale, eps=1e-8): - """[summary] - - Args: - dense_certainty ([type]): [description] - prob ([type]): [description] - eps ([type], optional): [description]. Defaults to 1e-8. 
- - Returns: - [type]: [description] - """ - smooth_prob = prob - ce_loss = F.binary_cross_entropy_with_logits(dense_certainty[:, 0], smooth_prob) - depth_loss = gd[prob > 0] - if not torch.any(prob > 0).item(): - depth_loss = (gd * 0.0).mean() # Prevent issues where prob is 0 everywhere - return { - f"ce_loss_{scale}": ce_loss.mean(), - f"depth_loss_{scale}": depth_loss.mean(), - } - - def forward(self, dense_corresps, batch): - """[summary] - - Args: - out ([type]): [description] - batch ([type]): [description] - - Returns: - [type]: [description] - """ - scales = list(dense_corresps.keys()) - tot_loss = 0.0 - prev_gd = 0.0 - for scale in scales: - dense_scale_corresps = dense_corresps[scale] - dense_scale_certainty, dense_scale_coords = ( - dense_scale_corresps["dense_certainty"], - dense_scale_corresps["dense_flow"], - ) - dense_scale_coords = rearrange(dense_scale_coords, "b d h w -> b h w d") - b, h, w, d = dense_scale_coords.shape - gd, prob = self.geometric_dist( - batch["query_depth"], - batch["support_depth"], - batch["T_1to2"], - batch["K1"], - batch["K2"], - dense_scale_coords, - scale, - ) - if ( - scale <= self.local_largest_scale and self.local_loss - ): # Thought here is that fine matching loss should not be punished by coarse mistakes, but should identify wrong matching - prob = prob * ( - F.interpolate(prev_gd[:, None], size=(h, w), mode="nearest")[:, 0] - < (2 / 512) * (self.local_dist * scale) - ) - depth_losses = self.dense_depth_loss(dense_scale_certainty, prob, gd, scale) - scale_loss = ( - self.ce_weight * depth_losses[f"ce_loss_{scale}"] - + depth_losses[f"depth_loss_{scale}"] - ) # scale ce loss for coarser scales - if self.scale_normalize: - scale_loss = scale_loss * 1 / scale - tot_loss = tot_loss + scale_loss - prev_gd = gd.detach() - return tot_loss diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/datasets/chase_db1.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/datasets/chase_db1.py deleted file mode 100644 index 298594ea925f87f22b37094a2ec50e370aec96a0..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/datasets/chase_db1.py +++ /dev/null @@ -1,59 +0,0 @@ -# dataset settings -dataset_type = 'ChaseDB1Dataset' -data_root = 'data/CHASE_DB1' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -img_scale = (960, 999) -crop_size = (128, 128) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale, - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=40000, - dataset=dict( - type=dataset_type, - data_root=data_root, - 
img_dir='images/training', - ann_dir='annotations/training', - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline)) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/modulated_deform_conv.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/modulated_deform_conv.py deleted file mode 100644 index 75559579cf053abcc99538606cbb88c723faf783..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/modulated_deform_conv.py +++ /dev/null @@ -1,282 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair, _single - -from annotator.uniformer.mmcv.utils import deprecated_api_warning -from ..cnn import CONV_LAYERS -from ..utils import ext_loader, print_log - -ext_module = ext_loader.load_ext( - '_ext', - ['modulated_deform_conv_forward', 'modulated_deform_conv_backward']) - - -class ModulatedDeformConv2dFunction(Function): - - @staticmethod - def symbolic(g, input, offset, mask, weight, bias, stride, padding, - dilation, groups, deform_groups): - input_tensors = [input, offset, mask, weight] - if bias is not None: - input_tensors.append(bias) - return g.op( - 'mmcv::MMCVModulatedDeformConv2d', - *input_tensors, - stride_i=stride, - padding_i=padding, - dilation_i=dilation, - groups_i=groups, - deform_groups_i=deform_groups) - - @staticmethod - def forward(ctx, - input, - offset, - mask, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - deform_groups=1): - if input is not None and input.dim() != 4: - raise ValueError( - f'Expected 4D tensor as input, got {input.dim()}D tensor \ - instead.') - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deform_groups = deform_groups - ctx.with_bias = bias is not None - if not ctx.with_bias: - bias = input.new_empty(0) # fake tensor - # When pytorch version >= 1.6.0, amp is adopted for fp16 mode; - # amp won't cast the type of model (float32), but "offset" is cast - # to float16 by nn.Conv2d automatically, leading to the type - # mismatch with input (when it is float32) or weight. - # The flag for whether to use fp16 or amp is the type of "offset", - # we cast weight and input to temporarily support fp16 and amp - # whatever the pytorch version is. 
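
The comment block above ends the rationale for the cast on the next deleted line: under amp, `nn.Conv2d` emits a float16 offset while the incoming feature map and weights may still be float32, so everything is aligned to the offset's dtype. A small standalone illustration of that alignment, with made-up tensor shapes (this is not mmcv code):

```python
# Why input/weight are cast to offset's dtype before calling the extension:
# offset is the one tensor whose dtype reliably reflects whether amp is on.
import torch

feat = torch.randn(1, 8, 4, 4)            # float32 feature map
offset = torch.randn(1, 18, 4, 4).half()  # what amp would hand the op
weight = torch.randn(4, 8, 3, 3)          # float32 parameter

feat = feat.type_as(offset)               # -> float16
weight = weight.type_as(feat)             # -> float16
assert feat.dtype == offset.dtype == weight.dtype == torch.float16
```
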
- input = input.type_as(offset) - weight = weight.type_as(input) - ctx.save_for_backward(input, offset, mask, weight, bias) - output = input.new_empty( - ModulatedDeformConv2dFunction._output_size(ctx, input, weight)) - ctx._bufs = [input.new_empty(0), input.new_empty(0)] - ext_module.modulated_deform_conv_forward( - input, - weight, - bias, - ctx._bufs[0], - offset, - mask, - output, - ctx._bufs[1], - kernel_h=weight.size(2), - kernel_w=weight.size(3), - stride_h=ctx.stride[0], - stride_w=ctx.stride[1], - pad_h=ctx.padding[0], - pad_w=ctx.padding[1], - dilation_h=ctx.dilation[0], - dilation_w=ctx.dilation[1], - group=ctx.groups, - deformable_group=ctx.deform_groups, - with_bias=ctx.with_bias) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, mask, weight, bias = ctx.saved_tensors - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - grad_mask = torch.zeros_like(mask) - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(bias) - grad_output = grad_output.contiguous() - ext_module.modulated_deform_conv_backward( - input, - weight, - bias, - ctx._bufs[0], - offset, - mask, - ctx._bufs[1], - grad_input, - grad_weight, - grad_bias, - grad_offset, - grad_mask, - grad_output, - kernel_h=weight.size(2), - kernel_w=weight.size(3), - stride_h=ctx.stride[0], - stride_w=ctx.stride[1], - pad_h=ctx.padding[0], - pad_w=ctx.padding[1], - dilation_h=ctx.dilation[0], - dilation_w=ctx.dilation[1], - group=ctx.groups, - deformable_group=ctx.deform_groups, - with_bias=ctx.with_bias) - if not ctx.with_bias: - grad_bias = None - - return (grad_input, grad_offset, grad_mask, grad_weight, grad_bias, - None, None, None, None, None) - - @staticmethod - def _output_size(ctx, input, weight): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = ctx.padding[d] - kernel = ctx.dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = ctx.stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, ) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError( - 'convolution input is too small (output would be ' + - 'x'.join(map(str, output_size)) + ')') - return output_size - - -modulated_deform_conv2d = ModulatedDeformConv2dFunction.apply - - -class ModulatedDeformConv2d(nn.Module): - - @deprecated_api_warning({'deformable_groups': 'deform_groups'}, - cls_name='ModulatedDeformConv2d') - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deform_groups=1, - bias=True): - super(ModulatedDeformConv2d, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deform_groups = deform_groups - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // groups, - *self.kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.register_parameter('bias', None) - self.init_weights() - - def init_weights(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. 
/ math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - if self.bias is not None: - self.bias.data.zero_() - - def forward(self, x, offset, mask): - return modulated_deform_conv2d(x, offset, mask, self.weight, self.bias, - self.stride, self.padding, - self.dilation, self.groups, - self.deform_groups) - - -@CONV_LAYERS.register_module('DCNv2') -class ModulatedDeformConv2dPack(ModulatedDeformConv2d): - """A ModulatedDeformable Conv Encapsulation that acts as normal Conv - layers. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int): Same as nn.Conv2d, while tuple is not supported. - padding (int): Same as nn.Conv2d, while tuple is not supported. - dilation (int): Same as nn.Conv2d, while tuple is not supported. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(ModulatedDeformConv2dPack, self).__init__(*args, **kwargs) - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deform_groups * 3 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=self.stride, - padding=self.padding, - dilation=self.dilation, - bias=True) - self.init_weights() - - def init_weights(self): - super(ModulatedDeformConv2dPack, self).init_weights() - if hasattr(self, 'conv_offset'): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - out = self.conv_offset(x) - o1, o2, mask = torch.chunk(out, 3, dim=1) - offset = torch.cat((o1, o2), dim=1) - mask = torch.sigmoid(mask) - return modulated_deform_conv2d(x, offset, mask, self.weight, self.bias, - self.stride, self.padding, - self.dilation, self.groups, - self.deform_groups) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - version = local_metadata.get('version', None) - - if version is None or version < 2: - # the key is different in early versions - # In version < 2, ModulatedDeformConvPack - # loads previous benchmark models. 
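
The comment above describes a checkpoint-compatibility shim, and the deleted lines that follow implement it: checkpoints saved before version 2 stored the offset branch under `<name>_offset.*` rather than `<name>.conv_offset.*`, so old keys are popped and re-inserted under the new name before normal loading continues. A minimal sketch of that key migration; `dcn.` and the values are illustrative only:

```python
# Migrate legacy state_dict keys: "<name>_offset.*" -> "<name>.conv_offset.*"
state_dict = {"dcn_offset.weight": "w", "dcn_offset.bias": "b"}
prefix = "dcn."  # module prefix as passed to _load_from_state_dict

for suffix in ("weight", "bias"):
    new_key = prefix + "conv_offset." + suffix
    old_key = prefix[:-1] + "_offset." + suffix  # "dcn_offset.weight"
    if new_key not in state_dict and old_key in state_dict:
        state_dict[new_key] = state_dict.pop(old_key)

assert sorted(state_dict) == ["dcn.conv_offset.bias", "dcn.conv_offset.weight"]
```
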
- if (prefix + 'conv_offset.weight' not in state_dict - and prefix[:-1] + '_offset.weight' in state_dict): - state_dict[prefix + 'conv_offset.weight'] = state_dict.pop( - prefix[:-1] + '_offset.weight') - if (prefix + 'conv_offset.bias' not in state_dict - and prefix[:-1] + '_offset.bias' in state_dict): - state_dict[prefix + - 'conv_offset.bias'] = state_dict.pop(prefix[:-1] + - '_offset.bias') - - if version is not None and version > 1: - print_log( - f'ModulatedDeformConvPack {prefix.rstrip(".")} is upgraded to ' - 'version 2.', - logger='root') - - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) diff --git a/spaces/Roboflow/webcamGPT/README.md b/spaces/Roboflow/webcamGPT/README.md deleted file mode 100644 index 8c8efe9f823bfbd49a5dc730cc65ed69591ad5b8..0000000000000000000000000000000000000000 --- a/spaces/Roboflow/webcamGPT/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: WebcamGPT -emoji: 📸 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/datasets/dataset_512.py b/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/datasets/dataset_512.py deleted file mode 100644 index 27fc1ce862f1b00e427670d393d70bec56d063da..0000000000000000000000000000000000000000 --- a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/datasets/dataset_512.py +++ /dev/null @@ -1,286 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import cv2 -import os -import numpy as np -import zipfile -import PIL.Image -import json -import torch -import dnnlib -import random - -try: - import pyspng -except ImportError: - pyspng = None - -from datasets.mask_generator_512 import RandomMask - -#---------------------------------------------------------------------------- - -class Dataset(torch.utils.data.Dataset): - def __init__(self, - name, # Name of the dataset. - raw_shape, # Shape of the raw image data (NCHW). - max_size = None, # Artificially limit the size of the dataset. None = no limit. Applied before xflip. - use_labels = False, # Enable conditioning labels? False = label dimension is zero. - xflip = False, # Artificially double the size of the dataset via x-flips. Applied after max_size. - random_seed = 0, # Random seed to use when applying max_size. - ): - self._name = name - self._raw_shape = list(raw_shape) - self._use_labels = use_labels - self._raw_labels = None - self._label_shape = None - - # Apply max_size. - self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64) - if (max_size is not None) and (self._raw_idx.size > max_size): - np.random.RandomState(random_seed).shuffle(self._raw_idx) - self._raw_idx = np.sort(self._raw_idx[:max_size]) - - # Apply xflip. 
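
The x-flip lines that follow implement a cheap augmentation trick: the index array is tiled so every raw image appears twice, and a parallel flag array marks the second copy as "flip at load time", so no pixel data is duplicated. A standalone sketch of the same bookkeeping:

```python
# Doubling a dataset via flip flags instead of duplicated images.
import numpy as np

raw_idx = np.arange(4, dtype=np.int64)          # 4 real images
xflip = np.zeros(raw_idx.size, dtype=np.uint8)

raw_idx = np.tile(raw_idx, 2)                   # [0 1 2 3 0 1 2 3]
xflip = np.concatenate([xflip, np.ones_like(xflip)])

assert raw_idx.size == 8
assert xflip.tolist() == [0, 0, 0, 0, 1, 1, 1, 1]
# item 5 -> raw image 1, flipped horizontally when __getitem__ runs
```
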
- self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8) - if xflip: - self._raw_idx = np.tile(self._raw_idx, 2) - self._xflip = np.concatenate([self._xflip, np.ones_like(self._xflip)]) - - def _get_raw_labels(self): - if self._raw_labels is None: - self._raw_labels = self._load_raw_labels() if self._use_labels else None - if self._raw_labels is None: - self._raw_labels = np.zeros([self._raw_shape[0], 0], dtype=np.float32) - assert isinstance(self._raw_labels, np.ndarray) - assert self._raw_labels.shape[0] == self._raw_shape[0] - assert self._raw_labels.dtype in [np.float32, np.int64] - if self._raw_labels.dtype == np.int64: - assert self._raw_labels.ndim == 1 - assert np.all(self._raw_labels >= 0) - return self._raw_labels - - def close(self): # to be overridden by subclass - pass - - def _load_raw_image(self, raw_idx): # to be overridden by subclass - raise NotImplementedError - - def _load_raw_labels(self): # to be overridden by subclass - raise NotImplementedError - - def __getstate__(self): - return dict(self.__dict__, _raw_labels=None) - - def __del__(self): - try: - self.close() - except: - pass - - def __len__(self): - return self._raw_idx.size - - def __getitem__(self, idx): - image = self._load_raw_image(self._raw_idx[idx]) - assert isinstance(image, np.ndarray) - assert list(image.shape) == self.image_shape - assert image.dtype == np.uint8 - if self._xflip[idx]: - assert image.ndim == 3 # CHW - image = image[:, :, ::-1] - return image.copy(), self.get_label(idx) - - def get_label(self, idx): - label = self._get_raw_labels()[self._raw_idx[idx]] - if label.dtype == np.int64: - onehot = np.zeros(self.label_shape, dtype=np.float32) - onehot[label] = 1 - label = onehot - return label.copy() - - def get_details(self, idx): - d = dnnlib.EasyDict() - d.raw_idx = int(self._raw_idx[idx]) - d.xflip = (int(self._xflip[idx]) != 0) - d.raw_label = self._get_raw_labels()[d.raw_idx].copy() - return d - - @property - def name(self): - return self._name - - @property - def image_shape(self): - return list(self._raw_shape[1:]) - - @property - def num_channels(self): - assert len(self.image_shape) == 3 # CHW - return self.image_shape[0] - - @property - def resolution(self): - assert len(self.image_shape) == 3 # CHW - assert self.image_shape[1] == self.image_shape[2] - return self.image_shape[1] - - @property - def label_shape(self): - if self._label_shape is None: - raw_labels = self._get_raw_labels() - if raw_labels.dtype == np.int64: - self._label_shape = [int(np.max(raw_labels)) + 1] - else: - self._label_shape = raw_labels.shape[1:] - return list(self._label_shape) - - @property - def label_dim(self): - assert len(self.label_shape) == 1 - return self.label_shape[0] - - @property - def has_labels(self): - return any(x != 0 for x in self.label_shape) - - @property - def has_onehot_labels(self): - return self._get_raw_labels().dtype == np.int64 - - -#---------------------------------------------------------------------------- - - -class ImageFolderMaskDataset(Dataset): - def __init__(self, - path, # Path to directory or zip. - resolution = None, # Ensure specific resolution, None = highest available. - hole_range=[0,1], - **super_kwargs, # Additional arguments for the Dataset base class. 
- ): - self._path = path - self._zipfile = None - self._hole_range = hole_range - - if os.path.isdir(self._path): - self._type = 'dir' - self._all_fnames = {os.path.relpath(os.path.join(root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files} - elif self._file_ext(self._path) == '.zip': - self._type = 'zip' - self._all_fnames = set(self._get_zipfile().namelist()) - else: - raise IOError('Path must point to a directory or zip') - - PIL.Image.init() - self._image_fnames = sorted(fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION) - if len(self._image_fnames) == 0: - raise IOError('No image files found in the specified path') - - name = os.path.splitext(os.path.basename(self._path))[0] - raw_shape = [len(self._image_fnames)] + list(self._load_raw_image(0).shape) - if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution): - raise IOError('Image files do not match the specified resolution') - super().__init__(name=name, raw_shape=raw_shape, **super_kwargs) - - @staticmethod - def _file_ext(fname): - return os.path.splitext(fname)[1].lower() - - def _get_zipfile(self): - assert self._type == 'zip' - if self._zipfile is None: - self._zipfile = zipfile.ZipFile(self._path) - return self._zipfile - - def _open_file(self, fname): - if self._type == 'dir': - return open(os.path.join(self._path, fname), 'rb') - if self._type == 'zip': - return self._get_zipfile().open(fname, 'r') - return None - - def close(self): - try: - if self._zipfile is not None: - self._zipfile.close() - finally: - self._zipfile = None - - def __getstate__(self): - return dict(super().__getstate__(), _zipfile=None) - - def _load_raw_image(self, raw_idx): - fname = self._image_fnames[raw_idx] - with self._open_file(fname) as f: - if pyspng is not None and self._file_ext(fname) == '.png': - image = pyspng.load(f.read()) - else: - image = np.array(PIL.Image.open(f)) - if image.ndim == 2: - image = image[:, :, np.newaxis] # HW => HWC - - # for grayscale image - if image.shape[2] == 1: - image = np.repeat(image, 3, axis=2) - - # restricted to 512x512 - res = 512 - H, W, C = image.shape - if H < res or W < res: - top = 0 - bottom = max(0, res - H) - left = 0 - right = max(0, res - W) - image = cv2.copyMakeBorder(image, top, bottom, left, right, cv2.BORDER_REFLECT) - H, W, C = image.shape - h = random.randint(0, H - res) - w = random.randint(0, W - res) - image = image[h:h+res, w:w+res, :] - - image = np.ascontiguousarray(image.transpose(2, 0, 1)) # HWC => CHW - - return image - - def _load_raw_labels(self): - fname = 'labels.json' - if fname not in self._all_fnames: - return None - with self._open_file(fname) as f: - labels = json.load(f)['labels'] - if labels is None: - return None - labels = dict(labels) - labels = [labels[fname.replace('\\', '/')] for fname in self._image_fnames] - labels = np.array(labels) - labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim]) - return labels - - def __getitem__(self, idx): - image = self._load_raw_image(self._raw_idx[idx]) - - assert isinstance(image, np.ndarray) - assert list(image.shape) == self.image_shape - assert image.dtype == np.uint8 - if self._xflip[idx]: - assert image.ndim == 3 # CHW - image = image[:, :, ::-1] - mask = RandomMask(image.shape[-1], hole_range=self._hole_range) # hole as 0, reserved as 1 - return image.copy(), mask, self.get_label(idx) - - -if __name__ == '__main__': - res = 512 - dpath = '/data/liwenbo/datasets/Places365/standard/val_large' - D = 
ImageFolderMaskDataset(path=dpath) - print(D.__len__()) - for i in range(D.__len__()): - print(i) - a, b, c = D.__getitem__(i) - if a.shape != (3, 512, 512): - print(i, a.shape) diff --git a/spaces/Ryzal/rvc-models-new/lib/infer_pack/models_onnx.py b/spaces/Ryzal/rvc-models-new/lib/infer_pack/models_onnx.py deleted file mode 100644 index 963e67b29f828e9fdd096397952054fe77cf3d10..0000000000000000000000000000000000000000 --- a/spaces/Ryzal/rvc-models-new/lib/infer_pack/models_onnx.py +++ /dev/null @@ -1,819 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = 
self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = 
nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
-    def forward(self, x, g=None):
-        x = self.conv_pre(x)
-        if g is not None:
-            x = x + self.cond(g)
-
-        for i in range(self.num_upsamples):
-            x = F.leaky_relu(x, modules.LRELU_SLOPE)
-            x = self.ups[i](x)
-            xs = None
-            for j in range(self.num_kernels):
-                if xs is None:
-                    xs = self.resblocks[i * self.num_kernels + j](x)
-                else:
-                    xs += self.resblocks[i * self.num_kernels + j](x)
-            x = xs / self.num_kernels
-        x = F.leaky_relu(x)
-        x = self.conv_post(x)
-        x = torch.tanh(x)
-
-        return x
-
-    def remove_weight_norm(self):
-        for l in self.ups:
-            remove_weight_norm(l)
-        for l in self.resblocks:
-            l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
-    """Definition of sine generator
-    SineGen(samp_rate, harmonic_num = 0,
-            sine_amp = 0.1, noise_std = 0.003,
-            voiced_threshold = 0,
-            flag_for_pulse=False)
-    samp_rate: sampling rate in Hz
-    harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
-    Note: when flag_for_pulse is True, the first time step of a voiced
-    segment is always sin(np.pi) or cos(0)
-    """
-
-    def __init__(
-        self,
-        samp_rate,
-        harmonic_num=0,
-        sine_amp=0.1,
-        noise_std=0.003,
-        voiced_threshold=0,
-        flag_for_pulse=False,
-    ):
-        super(SineGen, self).__init__()
-        self.sine_amp = sine_amp
-        self.noise_std = noise_std
-        self.harmonic_num = harmonic_num
-        self.dim = self.harmonic_num + 1
-        self.sampling_rate = samp_rate
-        self.voiced_threshold = voiced_threshold
-
-    def _f02uv(self, f0):
-        # generate uv signal
-        uv = torch.ones_like(f0)
-        uv = uv * (f0 > self.voiced_threshold)
-        return uv
-
-    def forward(self, f0, upp):
-        """sine_tensor, uv = forward(f0)
-        input F0: tensor(batchsize=1, length, dim=1)
-        f0 for unvoiced steps should be 0
-        output sine_tensor: tensor(batchsize=1, length, dim)
-        output uv: tensor(batchsize=1, length, 1)
-        """
-        with torch.no_grad():
-            f0 = f0[:, None].transpose(1, 2)
-            f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
-            # fundamental component
-            f0_buf[:, :, 0] = f0[:, :, 0]
-            for idx in np.arange(self.harmonic_num):
-                f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
-                    idx + 2
-                )  # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            # taking `% 1` here means the per-harmonic products cannot be
-            # folded into later post-processing
-            rad_values = (f0_buf / self.sampling_rate) % 1
-            rand_ini = torch.rand(
-                f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
-            )
-            rand_ini[:, 0] = 0
-            rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            # a `% 1` here would keep the cumsum below from being optimized
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1
-            tmp_over_one *= upp
-            tmp_over_one = F.interpolate(
-                tmp_over_one.transpose(2, 1),
-                scale_factor=upp,
-                mode="linear",
-                align_corners=True,
-            ).transpose(2, 1)
-            rad_values = F.interpolate(
-                rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
-            ).transpose(2, 1)
-            tmp_over_one %= 1
-            tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
-            cumsum_shift = torch.zeros_like(rad_values)
-            cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
-            sine_waves = torch.sin(
-                torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
-            )
-            sine_waves = sine_waves * self.sine_amp
-            uv = self._f02uv(f0)
-            uv = F.interpolate(
-                uv.transpose(2, 1), scale_factor=upp, mode="nearest"
-            ).transpose(2, 1)
-            noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
-            noise =
noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv 
= self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - version, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if version == "v1": - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g 
* self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - 
padding=(get_padding(kernel_size, 1), 0),
-                    )
-                ),
-                norm_f(
-                    Conv2d(
-                        128,
-                        512,
-                        (kernel_size, 1),
-                        (stride, 1),
-                        padding=(get_padding(kernel_size, 1), 0),
-                    )
-                ),
-                norm_f(
-                    Conv2d(
-                        512,
-                        1024,
-                        (kernel_size, 1),
-                        (stride, 1),
-                        padding=(get_padding(kernel_size, 1), 0),
-                    )
-                ),
-                norm_f(
-                    Conv2d(
-                        1024,
-                        1024,
-                        (kernel_size, 1),
-                        1,
-                        padding=(get_padding(kernel_size, 1), 0),
-                    )
-                ),
-            ]
-        )
-        self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
-    def forward(self, x):
-        fmap = []
-
-        # 1d to 2d
-        b, c, t = x.shape
-        if t % self.period != 0:  # pad first
-            n_pad = self.period - (t % self.period)
-            x = F.pad(x, (0, n_pad), "reflect")
-            t = t + n_pad
-        x = x.view(b, c, t // self.period, self.period)
-
-        for l in self.convs:
-            x = l(x)
-            x = F.leaky_relu(x, modules.LRELU_SLOPE)
-            fmap.append(x)
-        x = self.conv_post(x)
-        fmap.append(x)
-        x = torch.flatten(x, 1, -1)
-
-        return x, fmap
diff --git a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/Libtorch C++ Infer/VITS-LibTorch.cpp b/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/Libtorch C++ Infer/VITS-LibTorch.cpp
deleted file mode 100644
index afdd98e45af2fbeb2ba63961f45167dd3ecd4685..0000000000000000000000000000000000000000
--- a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/Libtorch C++ Infer/VITS-LibTorch.cpp
+++ /dev/null
@@ -1,121 +0,0 @@
-#include <torch/script.h>
-#include <iostream>
-#include <fstream>
-#include <string>
-#include <vector>
-#include <array>
-#include <codecvt>
-#include <locale>
-#include <cstdlib>
-#include <stdexcept>
-typedef int64_t int64;
-namespace Shirakana {
-
-    struct WavHead {
-        char RIFF[4];
-        long int size0;
-        char WAVE[4];
-        char FMT[4];
-        long int size1;
-        short int fmttag;
-        short int channel;
-        long int samplespersec;
-        long int bytepersec;
-        short int blockalign;
-        short int bitpersamples;
-        char DATA[4];
-        long int size2;
-    };
-
-    int conArr2Wav(int64 size, int16_t* input, const char* filename) {
-        WavHead head = { {'R','I','F','F'},0,{'W','A','V','E'},{'f','m','t',' '},16,
-                         1,1,22050,22050 * 2,2,16,{'d','a','t','a'},
-                         0 };
-        head.size0 = size * 2 + 36;
-        head.size2 = size * 2;
-        std::ofstream ocout;
-        char* outputData = (char*)input;
-        ocout.open(filename, std::ios::out | std::ios::binary);
-        ocout.write((char*)&head, 44);
-        ocout.write(outputData, (int32_t)(size * 2));
-        ocout.close();
-        return 0;
-    }
-
-    inline std::wstring to_wide_string(const std::string& input)
-    {
-        std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> converter;
-        return converter.from_bytes(input);
-    }
-
-    inline std::string to_byte_string(const std::wstring& input)
-    {
-        std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> converter;
-        return converter.to_bytes(input);
-    }
-}
-
-#define val const auto
-int main()
-{
-    torch::jit::Module Vits;
-    std::string buffer;
-    std::vector<int64> text;
-    std::vector<int16_t> data;
-    while(true)
-    {
-        while (true)
-        {
-            std::cin >> buffer;
-            if (buffer == "end")
-                return 0;
-            if(buffer == "model")
-            {
-                std::cin >> buffer;
-                Vits = torch::jit::load(buffer);
-                continue;
-            }
-            if (buffer == "endinfer")
-            {
-                Shirakana::conArr2Wav(data.size(), data.data(), "temp\\tmp.wav");
-                data.clear();
-                std::cout << "endofinfe";
-                continue;
-            }
-            if (buffer == "line")
-            {
-                std::cin >> buffer;
-                while (buffer.find("endline")==std::string::npos)
-                {
-                    text.push_back(std::atoi(buffer.c_str()));
-                    std::cin >> buffer;
-                }
-                val InputTensor = torch::from_blob(text.data(), { 1,static_cast<int64>(text.size()) }, torch::kInt64);
-                std::array<int64, 1> TextLength{ static_cast<int64>(text.size()) };
-                val InputTensor_length = torch::from_blob(TextLength.data(), { 1 }, torch::kInt64);
-                std::vector<torch::jit::IValue> inputs;
-                inputs.push_back(InputTensor);
-                inputs.push_back(InputTensor_length);
-                if (buffer.length() > 7)
-                {
-                    std::array<int64, 1> speakerIndex{ (int64)atoi(buffer.substr(7).c_str()) };
-                    inputs.push_back(torch::from_blob(speakerIndex.data(), { 1 }, torch::kLong));
-                }
-                val output = Vits.forward(inputs).toTuple()->elements()[0].toTensor().multiply(32276.0F);
-                val outputSize = output.sizes().at(2);
-                val floatOutput = output.data_ptr<float>();
-                int16_t* outputTmp = (int16_t*)malloc(sizeof(int16_t) * outputSize);
-                if (outputTmp == nullptr) {
-                    throw std::runtime_error("out of memory");
-                }
-                for (int i = 0; i < outputSize; i++) {
-                    *(outputTmp + i) = (int16_t) * (floatOutput + i);
-                }
-                data.insert(data.end(), outputTmp, outputTmp+outputSize);
-                free(outputTmp);
-                text.clear();
-                std::cout << "endofline";
-            }
-        }
-    }
-    //model S:\VSGIT\ShirakanaTTSUI\build\x64\Release\Mods\AtriVITS\AtriVITS_LJS.pt
-}
\ No newline at end of file
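
In the deleted C++ file above, the include list and template arguments were eaten by extraction and have been restored from what the code actually uses. The `WavHead`/`conArr2Wav` pair in that file hand-builds the canonical 44-byte RIFF header for 16-bit mono 22050 Hz PCM. The Python sketch below shows the same header layout with `struct` so each field is explicit; it is an illustration, not a port, and `tmp.wav` plus the four-sample payload are made up.

```python
# Build a 44-byte RIFF/WAVE header (PCM, mono, 16-bit) in front of raw
# int16 samples, mirroring what the deleted WavHead struct encodes.
import struct


def write_wav(samples: bytes, filename: str, rate: int = 22050) -> None:
    n = len(samples)  # payload size in bytes
    header = struct.pack(
        "<4sI4s4sIHHIIHH4sI",
        b"RIFF", 36 + n, b"WAVE",
        b"fmt ", 16,          # fmt chunk is 16 bytes for plain PCM
        1, 1,                 # format tag = PCM, channels = 1
        rate, rate * 2,       # sample rate, byte rate (16-bit mono)
        2, 16,                # block align, bits per sample
        b"data", n,
    )
    with open(filename, "wb") as f:
        f.write(header + samples)


write_wav(struct.pack("<4h", 0, 1000, -1000, 0), "tmp.wav")
```
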
diff --git a/spaces/SankarSrin/image-matting-app/ppmatting/metrics/__init__.py b/spaces/SankarSrin/image-matting-app/ppmatting/metrics/__init__.py
deleted file mode 100644
index 836f0a973bf4331d36982252d47f7279e7c24752..0000000000000000000000000000000000000000
--- a/spaces/SankarSrin/image-matting-app/ppmatting/metrics/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .metric import MSE, SAD, Grad, Conn
-
-metrics_class_dict = {'sad': SAD, 'mse': MSE, 'grad': Grad, 'conn': Conn}
diff --git a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/ONNXVITS_to_onnx.py b/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/ONNXVITS_to_onnx.py
deleted file mode 100644
index 846e39849535ed08accb10d7001f2431a851d372..0000000000000000000000000000000000000000
--- a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/ONNXVITS_to_onnx.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import ONNXVITS_models
-import utils
-from text import text_to_sequence
-import torch
-import commons
-
-def get_text(text, hps):
-    text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners)
-    if hps.data.add_blank:
-        text_norm = commons.intersperse(text_norm, 0)
-    text_norm = torch.LongTensor(text_norm)
-    return text_norm
-
-hps = utils.get_hparams_from_file("../vits/pretrained_models/uma87.json")
-symbols = hps.symbols
-net_g = ONNXVITS_models.SynthesizerTrn(
-    len(symbols),
-    hps.data.filter_length // 2 + 1,
-    hps.train.segment_size // hps.data.hop_length,
-    n_speakers=hps.data.n_speakers,
-    **hps.model)
-_ = net_g.eval()
-_ = utils.load_checkpoint("../vits/pretrained_models/uma_1153000.pth", net_g)
-
-text1 = get_text("ありがとうございます。", hps)  # Japanese test phrase: "Thank you."
-stn_tst = text1
-with torch.no_grad():
-    x_tst = stn_tst.unsqueeze(0)
-    x_tst_lengths = torch.LongTensor([stn_tst.size(0)])
-    sid = torch.tensor([0])
-    o = net_g(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1)
\ No newline at end of file
diff --git a/spaces/SuYuanS/AudioCraft_Plus/tests/models/test_audiogen.py b/spaces/SuYuanS/AudioCraft_Plus/tests/models/test_audiogen.py
deleted file mode 100644
index 3850af066cedd5ea38bd9aead9634d6aaf938218..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/tests/models/test_audiogen.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import pytest
-import torch
-
-from audiocraft.models import AudioGen
-
-
-class TestAudioGenModel:
-    def get_audiogen(self):
-        ag = AudioGen.get_pretrained(name='debug', device='cpu')
-        ag.set_generation_params(duration=2.0, extend_stride=2.)
- return ag - - def test_base(self): - ag = self.get_audiogen() - assert ag.frame_rate == 25 - assert ag.sample_rate == 16000 - assert ag.audio_channels == 1 - - def test_generate_continuation(self): - ag = self.get_audiogen() - prompt = torch.randn(3, 1, 16000) - wav = ag.generate_continuation(prompt, 16000) - assert list(wav.shape) == [3, 1, 32000] - - prompt = torch.randn(2, 1, 16000) - wav = ag.generate_continuation( - prompt, 16000, ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 32000] - - prompt = torch.randn(2, 1, 16000) - with pytest.raises(AssertionError): - wav = ag.generate_continuation( - prompt, 16000, ['youpi', 'lapin dort', 'one too many']) - - def test_generate(self): - ag = self.get_audiogen() - wav = ag.generate( - ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 32000] - - def test_generate_long(self): - ag = self.get_audiogen() - ag.max_duration = 3. - ag.set_generation_params(duration=4., extend_stride=2.) - wav = ag.generate( - ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 16000 * 4] diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/docs.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/docs.py deleted file mode 100644 index 6a97815cdc7c8ba6c2b5c74a8c097b882717cee8..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/docs.py +++ /dev/null @@ -1,3 +0,0 @@ -import os - -GENERATING_DOCUMENTATION = os.environ.get("IN_SPHINX_RUN", None) == "True" diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/server/fastapi/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/server/fastapi/__init__.py deleted file mode 100644 index 1de6e1b07d9a7e66bc12362a157b715dfc4a7665..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/server/fastapi/__init__.py +++ /dev/null @@ -1,272 +0,0 @@ -from typing import Any, Callable, Dict, List, Sequence -import fastapi -from fastapi import FastAPI as _FastAPI, Response -from fastapi.responses import JSONResponse - -from fastapi.middleware.cors import CORSMiddleware -from fastapi.routing import APIRoute -from fastapi import HTTPException, status -from uuid import UUID - -import pandas as pd - -import chromadb -from chromadb.api.models.Collection import Collection -from chromadb.api.types import GetResult, QueryResult -from chromadb.config import Settings -import chromadb.server -import chromadb.api -from chromadb.errors import ( - ChromaError, - InvalidUUIDError, - InvalidDimensionException, -) -from chromadb.server.fastapi.types import ( - AddEmbedding, - DeleteEmbedding, - GetEmbedding, - QueryEmbedding, - RawSql, # Results, - CreateCollection, - UpdateCollection, - UpdateEmbedding, -) -from starlette.requests import Request - -import logging -from chromadb.telemetry import ServerContext, Telemetry - -logger = logging.getLogger(__name__) - - -def use_route_names_as_operation_ids(app: _FastAPI) -> None: - """ - Simplify operation IDs so that generated API clients have simpler function - names. - Should be called only after all routes have been added. 
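-
-    For example (illustrative, based on the routes registered below): a route
-    added as add_api_route("/api/v1/heartbeat", self.heartbeat, methods=["GET"])
-    ends up with operation_id "heartbeat" rather than an auto-generated name.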
- """ - for route in app.routes: - if isinstance(route, APIRoute): - route.operation_id = route.name - - -async def catch_exceptions_middleware( - request: Request, call_next: Callable[[Request], Any] -) -> Response: - try: - return await call_next(request) - except ChromaError as e: - return JSONResponse( - content={"error": e.name(), "message": e.message()}, status_code=e.code() - ) - except Exception as e: - logger.exception(e) - return JSONResponse(content={"error": repr(e)}, status_code=500) - - -def _uuid(uuid_str: str) -> UUID: - try: - return UUID(uuid_str) - except ValueError: - raise InvalidUUIDError(f"Could not parse {uuid_str} as a UUID") - - -class FastAPI(chromadb.server.Server): - def __init__(self, settings: Settings): - super().__init__(settings) - Telemetry.SERVER_CONTEXT = ServerContext.FASTAPI - self._app = fastapi.FastAPI(debug=True) - self._api: chromadb.api.API = chromadb.Client(settings) - - self._app.middleware("http")(catch_exceptions_middleware) - self._app.add_middleware( - CORSMiddleware, - allow_headers=["*"], - allow_origins=settings.chroma_server_cors_allow_origins, - allow_methods=["*"], - ) - - self.router = fastapi.APIRouter() - - self.router.add_api_route("/api/v1", self.root, methods=["GET"]) - self.router.add_api_route("/api/v1/reset", self.reset, methods=["POST"]) - self.router.add_api_route("/api/v1/version", self.version, methods=["GET"]) - self.router.add_api_route("/api/v1/heartbeat", self.heartbeat, methods=["GET"]) - self.router.add_api_route("/api/v1/persist", self.persist, methods=["POST"]) - self.router.add_api_route("/api/v1/raw_sql", self.raw_sql, methods=["POST"]) - - self.router.add_api_route( - "/api/v1/collections", self.list_collections, methods=["GET"] - ) - self.router.add_api_route( - "/api/v1/collections", self.create_collection, methods=["POST"] - ) - - self.router.add_api_route( - "/api/v1/collections/{collection_id}/add", - self.add, - methods=["POST"], - status_code=status.HTTP_201_CREATED, - ) - self.router.add_api_route( - "/api/v1/collections/{collection_id}/update", self.update, methods=["POST"] - ) - self.router.add_api_route( - "/api/v1/collections/{collection_id}/upsert", self.upsert, methods=["POST"] - ) - self.router.add_api_route( - "/api/v1/collections/{collection_id}/get", self.get, methods=["POST"] - ) - self.router.add_api_route( - "/api/v1/collections/{collection_id}/delete", self.delete, methods=["POST"] - ) - self.router.add_api_route( - "/api/v1/collections/{collection_id}/count", self.count, methods=["GET"] - ) - self.router.add_api_route( - "/api/v1/collections/{collection_id}/query", - self.get_nearest_neighbors, - methods=["POST"], - ) - self.router.add_api_route( - "/api/v1/collections/{collection_name}/create_index", - self.create_index, - methods=["POST"], - ) - self.router.add_api_route( - "/api/v1/collections/{collection_name}", - self.get_collection, - methods=["GET"], - ) - self.router.add_api_route( - "/api/v1/collections/{collection_id}", - self.update_collection, - methods=["PUT"], - ) - self.router.add_api_route( - "/api/v1/collections/{collection_name}", - self.delete_collection, - methods=["DELETE"], - ) - - self._app.include_router(self.router) - - use_route_names_as_operation_ids(self._app) - - def app(self) -> fastapi.FastAPI: - return self._app - - def root(self) -> Dict[str, int]: - return {"nanosecond heartbeat": self._api.heartbeat()} - - def heartbeat(self) -> Dict[str, int]: - return self.root() - - def persist(self) -> None: - self._api.persist() - - def version(self) -> str: - 
return self._api.get_version() - - def list_collections(self) -> Sequence[Collection]: - return self._api.list_collections() - - def create_collection(self, collection: CreateCollection) -> Collection: - return self._api.create_collection( - name=collection.name, - metadata=collection.metadata, - get_or_create=collection.get_or_create, - ) - - def get_collection(self, collection_name: str) -> Collection: - return self._api.get_collection(collection_name) - - def update_collection( - self, collection_id: str, collection: UpdateCollection - ) -> None: - return self._api._modify( - id=_uuid(collection_id), - new_name=collection.new_name, - new_metadata=collection.new_metadata, - ) - - def delete_collection(self, collection_name: str) -> None: - return self._api.delete_collection(collection_name) - - def add(self, collection_id: str, add: AddEmbedding) -> None: - try: - result = self._api._add( - collection_id=_uuid(collection_id), - embeddings=add.embeddings, - metadatas=add.metadatas, - documents=add.documents, - ids=add.ids, - increment_index=add.increment_index, - ) - except InvalidDimensionException as e: - raise HTTPException(status_code=500, detail=str(e)) - return result - - def update(self, collection_id: str, add: UpdateEmbedding) -> None: - return self._api._update( - ids=add.ids, - collection_id=_uuid(collection_id), - embeddings=add.embeddings, - documents=add.documents, - metadatas=add.metadatas, - ) - - def upsert(self, collection_id: str, upsert: AddEmbedding) -> None: - return self._api._upsert( - collection_id=_uuid(collection_id), - ids=upsert.ids, - embeddings=upsert.embeddings, - documents=upsert.documents, - metadatas=upsert.metadatas, - increment_index=upsert.increment_index, - ) - - def get(self, collection_id: str, get: GetEmbedding) -> GetResult: - return self._api._get( - collection_id=_uuid(collection_id), - ids=get.ids, - where=get.where, - where_document=get.where_document, - sort=get.sort, - limit=get.limit, - offset=get.offset, - include=get.include, - ) - - def delete(self, collection_id: str, delete: DeleteEmbedding) -> List[UUID]: - return self._api._delete( - where=delete.where, - ids=delete.ids, - collection_id=_uuid(collection_id), - where_document=delete.where_document, - ) - - def count(self, collection_id: str) -> int: - return self._api._count(_uuid(collection_id)) - - def reset(self) -> bool: - return self._api.reset() - - def get_nearest_neighbors( - self, collection_id: str, query: QueryEmbedding - ) -> QueryResult: - nnresult = self._api._query( - collection_id=_uuid(collection_id), - where=query.where, # type: ignore - where_document=query.where_document, # type: ignore - query_embeddings=query.query_embeddings, - n_results=query.n_results, - include=query.include, - ) - return nnresult - - def raw_sql(self, raw_sql: RawSql) -> pd.DataFrame: - return self._api.raw_sql(raw_sql.raw_sql) - - def create_index(self, collection_name: str) -> bool: - return self._api.create_index(collection_name) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_comm.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_comm.py deleted file mode 100644 index b9ac9053ea885d47c51f51facf35b8bc34471e01..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_comm.py +++ /dev/null @@ -1,1847 +0,0 @@ -''' pydevd - a debugging daemon -This is the daemon you launch for 
python remote debugging.
-
-Protocol:
-each command has a format:
-        id\tsequence-num\ttext
-        id: protocol command number
-        sequence-num: each request has a sequence number. Sequence numbers
-        originating at the debugger are odd, sequence numbers originating
-        at the daemon are even. Every response uses the same sequence number
-        as the request.
-        payload: it is protocol dependent. When response is a complex structure, it
-        is returned as XML. Each attribute value is urlencoded, and then the whole
-        payload is urlencoded again to prevent stray characters corrupting protocol/xml encodings
-
-        Commands:
-
-        NUMBER   NAME                     FROM*     ARGUMENTS                     RESPONSE      NOTE
-100 series: program execution
-        101      RUN                      JAVA      -                             -
-        102      LIST_THREADS             JAVA                                    RETURN with XML listing of all threads
-        103      THREAD_CREATE            PYDB      -                             XML with thread information
-        104      THREAD_KILL              JAVA      id (or * to exit)             kills the thread
-                                          PYDB      id                            notifies JAVA that thread was killed
-        105      THREAD_SUSPEND           JAVA      XML of the stack,             suspends the thread
-                                                    reason for suspension
-                                          PYDB      id                            notifies JAVA that thread was suspended
-
-        106      CMD_THREAD_RUN           JAVA      id                            resume the thread
-                                          PYDB      id \t reason                  notifies JAVA that thread was resumed
-
-        107      STEP_INTO                JAVA      thread_id
-        108      STEP_OVER                JAVA      thread_id
-        109      STEP_RETURN              JAVA      thread_id
-
-        110      GET_VARIABLE             JAVA      thread_id \t frame_id \t      GET_VARIABLE with XML of var content
-                                                    FRAME|GLOBAL \t attributes*
-
-        111      SET_BREAK                JAVA      file/line of the breakpoint
-        112      REMOVE_BREAK             JAVA      file/line of the return
-        113      CMD_EVALUATE_EXPRESSION  JAVA      expression                    result of evaluating the expression
-        114      CMD_GET_FRAME            JAVA                                    request for frame contents
-        115      CMD_EXEC_EXPRESSION      JAVA
-        116      CMD_WRITE_TO_CONSOLE     PYDB
-        117      CMD_CHANGE_VARIABLE
-        118      CMD_RUN_TO_LINE
-        119      CMD_RELOAD_CODE
-        120      CMD_GET_COMPLETIONS      JAVA
-
-        200      CMD_REDIRECT_OUTPUT      JAVA      streams to redirect as string -
-                                                    'STDOUT' (redirect only STDOUT)
-                                                    'STDERR' (redirect only STDERR)
-                                                    'STDOUT STDERR' (redirect both streams)
-
-500 series diagnostics/ok
-        501      VERSION                  either    Version string (1.0)          Currently just used at startup
-        502      RETURN                   either    Depends on caller             -
-
-
-900 series: errors
-        901      ERROR                    either    -                             This is reserved for unexpected errors.
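-
-Example exchange (illustrative values, consistent with the table above):
-        JAVA -> PYDB:   102\t3\t            (LIST_THREADS; IDE-originated sequence numbers are odd)
-        PYDB -> JAVA:   502\t3\t<xml...>    (RETURN carrying the thread list, same sequence number)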
- - * JAVA - remote debugger, the java end - * PYDB - pydevd, the python end -''' - -import linecache -import os - -from _pydev_bundle.pydev_imports import _queue -from _pydev_bundle._pydev_saved_modules import time -from _pydev_bundle._pydev_saved_modules import threading -from _pydev_bundle._pydev_saved_modules import socket as socket_module -from _pydevd_bundle.pydevd_constants import (DebugInfoHolder, IS_WINDOWS, IS_JYTHON, IS_WASM, - IS_PY36_OR_GREATER, STATE_RUN, ASYNC_EVAL_TIMEOUT_SEC, - get_global_debugger, GetGlobalDebugger, set_global_debugger, # Keep for backward compatibility @UnusedImport - silence_warnings_decorator, filter_all_warnings, IS_PY311_OR_GREATER) -from _pydev_bundle.pydev_override import overrides -import weakref -from _pydev_bundle._pydev_completer import extract_token_and_qualifier -from _pydevd_bundle._debug_adapter.pydevd_schema import VariablesResponseBody, \ - SetVariableResponseBody, StepInTarget, StepInTargetsResponseBody -from _pydevd_bundle._debug_adapter import pydevd_base_schema, pydevd_schema -from _pydevd_bundle.pydevd_net_command import NetCommand -from _pydevd_bundle.pydevd_xml import ExceptionOnEvaluate -from _pydevd_bundle.pydevd_constants import ForkSafeLock, NULL -from _pydevd_bundle.pydevd_daemon_thread import PyDBDaemonThread -from _pydevd_bundle.pydevd_thread_lifecycle import pydevd_find_thread_by_id, resume_threads -from _pydevd_bundle.pydevd_dont_trace_files import PYDEV_FILE -import dis -import pydevd_file_utils -import itertools -from urllib.parse import quote_plus, unquote_plus -import pydevconsole -from _pydevd_bundle import pydevd_vars, pydevd_io, pydevd_reload -from _pydevd_bundle import pydevd_bytecode_utils -from _pydevd_bundle import pydevd_xml -from _pydevd_bundle import pydevd_vm_type -import sys -import traceback -from _pydevd_bundle.pydevd_utils import quote_smart as quote, compare_object_attrs_key, \ - notify_about_gevent_if_needed, isinstance_checked, ScopeRequest, getattr_checked, Timer -from _pydev_bundle import pydev_log, fsnotify -from _pydev_bundle.pydev_log import exception as pydev_log_exception -from _pydev_bundle import _pydev_completer - -from pydevd_tracing import get_exception_traceback_str -from _pydevd_bundle import pydevd_console -from _pydev_bundle.pydev_monkey import disable_trace_thread_modules, enable_trace_thread_modules -from io import StringIO - -# CMD_XXX constants imported for backward compatibility -from _pydevd_bundle.pydevd_comm_constants import * # @UnusedWildImport - -# Socket import aliases: -AF_INET, SOCK_STREAM, SHUT_WR, SOL_SOCKET, IPPROTO_TCP, socket = ( - socket_module.AF_INET, - socket_module.SOCK_STREAM, - socket_module.SHUT_WR, - socket_module.SOL_SOCKET, - socket_module.IPPROTO_TCP, - socket_module.socket, -) - -if IS_WINDOWS and not IS_JYTHON: - SO_EXCLUSIVEADDRUSE = socket_module.SO_EXCLUSIVEADDRUSE -if not IS_WASM: - SO_REUSEADDR = socket_module.SO_REUSEADDR - - -class ReaderThread(PyDBDaemonThread): - ''' reader thread reads and dispatches commands in an infinite loop ''' - - def __init__(self, sock, py_db, PyDevJsonCommandProcessor, process_net_command, terminate_on_socket_close=True): - assert sock is not None - PyDBDaemonThread.__init__(self, py_db) - self.__terminate_on_socket_close = terminate_on_socket_close - - self.sock = sock - self._buffer = b'' - self.name = "pydevd.Reader" - self.process_net_command = process_net_command - self.process_net_command_json = PyDevJsonCommandProcessor(self._from_json).process_net_command_json - - def _from_json(self, json_msg, 
update_ids_from_dap=False): - return pydevd_base_schema.from_json(json_msg, update_ids_from_dap, on_dict_loaded=self._on_dict_loaded) - - def _on_dict_loaded(self, dct): - for listener in self.py_db.dap_messages_listeners: - listener.after_receive(dct) - - @overrides(PyDBDaemonThread.do_kill_pydev_thread) - def do_kill_pydev_thread(self): - PyDBDaemonThread.do_kill_pydev_thread(self) - # Note that we no longer shutdown the reader, just the writer. The idea is that we shutdown - # the writer to send that the communication has finished, then, the client will shutdown its - # own writer when it receives an empty read, at which point this reader will also shutdown. - - # That way, we can *almost* guarantee that all messages have been properly sent -- it's not - # completely guaranteed because it's possible that the process exits before the whole - # message was sent as having this thread alive won't stop the process from exiting -- we - # have a timeout when exiting the process waiting for this thread to finish -- see: - # PyDB.dispose_and_kill_all_pydevd_threads()). - - # try: - # self.sock.shutdown(SHUT_RD) - # except: - # pass - # try: - # self.sock.close() - # except: - # pass - - def _read(self, size): - while True: - buffer_len = len(self._buffer) - if buffer_len == size: - ret = self._buffer - self._buffer = b'' - return ret - - if buffer_len > size: - ret = self._buffer[:size] - self._buffer = self._buffer[size:] - return ret - - try: - r = self.sock.recv(max(size - buffer_len, 1024)) - except OSError: - return b'' - if not r: - return b'' - self._buffer += r - - def _read_line(self): - while True: - i = self._buffer.find(b'\n') - if i != -1: - i += 1 # Add the newline to the return - ret = self._buffer[:i] - self._buffer = self._buffer[i:] - return ret - else: - try: - r = self.sock.recv(1024) - except OSError: - return b'' - if not r: - return b'' - self._buffer += r - - @overrides(PyDBDaemonThread._on_run) - def _on_run(self): - try: - content_len = -1 - - while True: - # i.e.: even if we received a kill, we should only exit the ReaderThread when the - # client itself closes the connection (although on kill received we stop actually - # processing anything read). - try: - notify_about_gevent_if_needed() - line = self._read_line() - - if len(line) == 0: - pydev_log.debug('ReaderThread: empty contents received (len(line) == 0).') - self._terminate_on_socket_close() - return # Finished communication. - - if self._kill_received: - continue - - if line.startswith(b'Content-Length:'): - content_len = int(line.strip().split(b':', 1)[1]) - continue - - if content_len != -1: - # If we previously received a content length, read until a '\r\n'. - if line == b'\r\n': - json_contents = self._read(content_len) - - content_len = -1 - - if len(json_contents) == 0: - pydev_log.debug('ReaderThread: empty contents received (len(json_contents) == 0).') - self._terminate_on_socket_close() - return # Finished communication. - - if self._kill_received: - continue - - # We just received a json message, let's process it. - self.process_net_command_json(self.py_db, json_contents) - - continue - else: - # No content len, regular line-based protocol message (remove trailing new-line). - if line.endswith(b'\n\n'): - line = line[:-2] - - elif line.endswith(b'\n'): - line = line[:-1] - - elif line.endswith(b'\r'): - line = line[:-1] - except: - if not self._kill_received: - pydev_log_exception() - self._terminate_on_socket_close() - return # Finished communication. 
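-
-                # Illustrative sketch, not part of the original source: the two
-                # wire formats handled above, with assumed example values.
-                # JSON (DAP) framing: header line, blank line, then payload:
-                example_json_frame = b'Content-Length: 2\r\n' + b'\r\n' + b'{}'
-                # Legacy framing: one tab-separated line (CMD_RUN, sequence 1):
-                example_line_frame = b'101\t1\t\n'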
- - # Note: the java backend is always expected to pass utf-8 encoded strings. We now work with str - # internally and thus, we may need to convert to the actual encoding where needed (i.e.: filenames - # on python 2 may need to be converted to the filesystem encoding). - if hasattr(line, 'decode'): - line = line.decode('utf-8') - - if DebugInfoHolder.DEBUG_TRACE_LEVEL >= 3: - pydev_log.debug('debugger: received >>%s<<\n', line) - - args = line.split('\t', 2) - try: - cmd_id = int(args[0]) - if DebugInfoHolder.DEBUG_TRACE_LEVEL >= 3: - pydev_log.debug('Received command: %s %s\n', ID_TO_MEANING.get(str(cmd_id), '???'), line) - self.process_command(cmd_id, int(args[1]), args[2]) - except: - if sys is not None and pydev_log_exception is not None: # Could happen at interpreter shutdown - pydev_log_exception("Can't process net command: %s.", line) - - except: - if not self._kill_received: - if sys is not None and pydev_log_exception is not None: # Could happen at interpreter shutdown - pydev_log_exception() - - self._terminate_on_socket_close() - finally: - pydev_log.debug('ReaderThread: exit') - - def _terminate_on_socket_close(self): - if self.__terminate_on_socket_close: - self.py_db.dispose_and_kill_all_pydevd_threads() - - def process_command(self, cmd_id, seq, text): - self.process_net_command(self.py_db, cmd_id, seq, text) - - -class FSNotifyThread(PyDBDaemonThread): - - def __init__(self, py_db, api, watch_dirs): - PyDBDaemonThread.__init__(self, py_db) - self.api = api - self.name = "pydevd.FSNotifyThread" - self.watcher = fsnotify.Watcher() - self.watch_dirs = watch_dirs - - @overrides(PyDBDaemonThread._on_run) - def _on_run(self): - try: - pydev_log.info('Watching directories for code reload:\n---\n%s\n---' % ('\n'.join(sorted(self.watch_dirs)))) - - # i.e.: The first call to set_tracked_paths will do a full scan, so, do it in the thread - # too (after everything is configured). 
- self.watcher.set_tracked_paths(self.watch_dirs) - while not self._kill_received: - for change_enum, change_path in self.watcher.iter_changes(): - # We're only interested in modified events - if change_enum == fsnotify.Change.modified: - pydev_log.info('Modified: %s', change_path) - self.api.request_reload_code(self.py_db, -1, None, change_path) - else: - pydev_log.info('Ignored (add or remove) change in: %s', change_path) - except: - pydev_log.exception('Error when waiting for filesystem changes in FSNotifyThread.') - - @overrides(PyDBDaemonThread.do_kill_pydev_thread) - def do_kill_pydev_thread(self): - self.watcher.dispose() - PyDBDaemonThread.do_kill_pydev_thread(self) - - -class WriterThread(PyDBDaemonThread): - ''' writer thread writes out the commands in an infinite loop ''' - - def __init__(self, sock, py_db, terminate_on_socket_close=True): - PyDBDaemonThread.__init__(self, py_db) - self.sock = sock - self.__terminate_on_socket_close = terminate_on_socket_close - self.name = "pydevd.Writer" - self._cmd_queue = _queue.Queue() - if pydevd_vm_type.get_vm_type() == 'python': - self.timeout = 0 - else: - self.timeout = 0.1 - - def add_command(self, cmd): - ''' cmd is NetCommand ''' - if not self._kill_received: # we don't take new data after everybody die - self._cmd_queue.put(cmd, False) - - @overrides(PyDBDaemonThread._on_run) - def _on_run(self): - ''' just loop and write responses ''' - - try: - while True: - try: - try: - cmd = self._cmd_queue.get(True, 0.1) - except _queue.Empty: - if self._kill_received: - pydev_log.debug('WriterThread: kill_received (sock.shutdown(SHUT_WR))') - try: - self.sock.shutdown(SHUT_WR) - except: - pass - # Note: don't close the socket, just send the shutdown, - # then, when no data is received on the reader, it can close - # the socket. - # See: https://blog.netherlabs.nl/articles/2009/01/18/the-ultimate-so_linger-page-or-why-is-my-tcp-not-reliable - - # try: - # self.sock.close() - # except: - # pass - - return # break if queue is empty and _kill_received - else: - continue - except: - # pydev_log.info('Finishing debug communication...(1)') - # when liberating the thread here, we could have errors because we were shutting down - # but the thread was still not liberated - return - - if cmd.as_dict is not None: - for listener in self.py_db.dap_messages_listeners: - listener.before_send(cmd.as_dict) - - notify_about_gevent_if_needed() - cmd.send(self.sock) - - if cmd.id == CMD_EXIT: - pydev_log.debug('WriterThread: CMD_EXIT received') - break - if time is None: - break # interpreter shutdown - time.sleep(self.timeout) - except Exception: - if self.__terminate_on_socket_close: - self.py_db.dispose_and_kill_all_pydevd_threads() - if DebugInfoHolder.DEBUG_TRACE_LEVEL > 0: - pydev_log_exception() - finally: - pydev_log.debug('WriterThread: exit') - - def empty(self): - return self._cmd_queue.empty() - - @overrides(PyDBDaemonThread.do_kill_pydev_thread) - def do_kill_pydev_thread(self): - if not self._kill_received: - # Add command before setting the kill flag (otherwise the command may not be added). 
- exit_cmd = self.py_db.cmd_factory.make_exit_command(self.py_db) - self.add_command(exit_cmd) - - PyDBDaemonThread.do_kill_pydev_thread(self) - - -def create_server_socket(host, port): - try: - server = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) - if IS_WINDOWS and not IS_JYTHON: - server.setsockopt(SOL_SOCKET, SO_EXCLUSIVEADDRUSE, 1) - elif not IS_WASM: - server.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1) - - server.bind((host, port)) - server.settimeout(None) - except Exception: - server.close() - raise - - return server - - -def start_server(port): - ''' binds to a port, waits for the debugger to connect ''' - s = create_server_socket(host='', port=port) - - try: - s.listen(1) - new_socket, _addr = s.accept() - pydev_log.info("Connection accepted") - # closing server socket is not necessary but we don't need it - s.close() - return new_socket - except: - pydev_log.exception("Could not bind to port: %s\n", port) - raise - - -def start_client(host, port): - ''' connects to a host/port ''' - pydev_log.info("Connecting to %s:%s", host, port) - - s = socket(AF_INET, SOCK_STREAM) - - # Set TCP keepalive on an open socket. - # It activates after 1 second (TCP_KEEPIDLE,) of idleness, - # then sends a keepalive ping once every 3 seconds (TCP_KEEPINTVL), - # and closes the connection after 5 failed ping (TCP_KEEPCNT), or 15 seconds - try: - s.setsockopt(SOL_SOCKET, socket_module.SO_KEEPALIVE, 1) - except (AttributeError, OSError): - pass # May not be available everywhere. - try: - s.setsockopt(socket_module.IPPROTO_TCP, socket_module.TCP_KEEPIDLE, 1) - except (AttributeError, OSError): - pass # May not be available everywhere. - try: - s.setsockopt(socket_module.IPPROTO_TCP, socket_module.TCP_KEEPINTVL, 3) - except (AttributeError, OSError): - pass # May not be available everywhere. - try: - s.setsockopt(socket_module.IPPROTO_TCP, socket_module.TCP_KEEPCNT, 5) - except (AttributeError, OSError): - pass # May not be available everywhere. - - try: - # 10 seconds default timeout - timeout = int(os.environ.get('PYDEVD_CONNECT_TIMEOUT', 10)) - s.settimeout(timeout) - s.connect((host, port)) - s.settimeout(None) # no timeout after connected - pydev_log.info("Connected.") - return s - except: - pydev_log.exception("Could not connect to %s: %s", host, port) - raise - - -INTERNAL_TERMINATE_THREAD = 1 -INTERNAL_SUSPEND_THREAD = 2 - - -class InternalThreadCommand(object): - ''' internal commands are generated/executed by the debugger. - - The reason for their existence is that some commands have to be executed - on specific threads. These are the InternalThreadCommands that get - get posted to PyDB. 
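-
-    For example (illustrative id): a command created with thread_id
-    'pid_1_id_2' is only executed by that thread (see can_be_executed_by),
-    while InternalThreadCommandForAnyThread uses thread_id '*' and may be
-    executed by any thread.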
- ''' - - def __init__(self, thread_id, method=None, *args, **kwargs): - self.thread_id = thread_id - self.method = method - self.args = args - self.kwargs = kwargs - - def can_be_executed_by(self, thread_id): - '''By default, it must be in the same thread to be executed - ''' - return self.thread_id == thread_id or self.thread_id.endswith('|' + thread_id) - - def do_it(self, dbg): - try: - if self.method is not None: - self.method(dbg, *self.args, **self.kwargs) - else: - raise NotImplementedError("you have to override do_it") - finally: - self.args = None - self.kwargs = None - - def __str__(self): - return 'InternalThreadCommands(%s, %s, %s)' % (self.method, self.args, self.kwargs) - - __repr__ = __str__ - - -class InternalThreadCommandForAnyThread(InternalThreadCommand): - - def __init__(self, thread_id, method=None, *args, **kwargs): - assert thread_id == '*' - - InternalThreadCommand.__init__(self, thread_id, method, *args, **kwargs) - - self.executed = False - self.lock = ForkSafeLock() - - def can_be_executed_by(self, thread_id): - return True # Can be executed by any thread. - - def do_it(self, dbg): - with self.lock: - if self.executed: - return - self.executed = True - - InternalThreadCommand.do_it(self, dbg) - - -def _send_io_message(py_db, s): - cmd = py_db.cmd_factory.make_io_message(s, 2) - if py_db.writer is not None: - py_db.writer.add_command(cmd) - - -def internal_reload_code(dbg, seq, module_name, filename): - try: - found_module_to_reload = False - if module_name is not None: - module_name = module_name - if module_name not in sys.modules: - if '.' in module_name: - new_module_name = module_name.split('.')[-1] - if new_module_name in sys.modules: - module_name = new_module_name - - modules_to_reload = {} - module = sys.modules.get(module_name) - if module is not None: - modules_to_reload[id(module)] = (module, module_name) - - if filename: - filename = pydevd_file_utils.normcase(filename) - for module_name, module in sys.modules.copy().items(): - f = getattr_checked(module, '__file__') - if f is not None: - if f.endswith(('.pyc', '.pyo')): - f = f[:-1] - - if pydevd_file_utils.normcase(f) == filename: - modules_to_reload[id(module)] = (module, module_name) - - if not modules_to_reload: - if filename and module_name: - _send_io_message(dbg, 'code reload: Unable to find module %s to reload for path: %s\n' % (module_name, filename)) - elif filename: - _send_io_message(dbg, 'code reload: Unable to find module to reload for path: %s\n' % (filename,)) - elif module_name: - _send_io_message(dbg, 'code reload: Unable to find module to reload: %s\n' % (module_name,)) - - else: - # Too much info... - # _send_io_message(dbg, 'code reload: This usually means you are trying to reload the __main__ module (which cannot be reloaded).\n') - for module, module_name in modules_to_reload.values(): - _send_io_message(dbg, 'code reload: Start reloading module: "' + module_name + '" ... 
\n') - found_module_to_reload = True - - if pydevd_reload.xreload(module): - _send_io_message(dbg, 'code reload: reload finished\n') - else: - _send_io_message(dbg, 'code reload: reload finished without applying any change\n') - - cmd = dbg.cmd_factory.make_reloaded_code_message(seq, found_module_to_reload) - dbg.writer.add_command(cmd) - except: - pydev_log.exception('Error reloading code') - - -class InternalGetThreadStack(InternalThreadCommand): - ''' - This command will either wait for a given thread to be paused to get its stack or will provide - it anyways after a timeout (in which case the stack will be gotten but local variables won't - be available and it'll not be possible to interact with the frame as it's not actually - stopped in a breakpoint). - ''' - - def __init__(self, seq, thread_id, py_db, set_additional_thread_info, fmt, timeout=.5, start_frame=0, levels=0): - InternalThreadCommand.__init__(self, thread_id) - self._py_db = weakref.ref(py_db) - self._timeout = time.time() + timeout - self.seq = seq - self._cmd = None - self._fmt = fmt - self._start_frame = start_frame - self._levels = levels - - # Note: receives set_additional_thread_info to avoid a circular import - # in this module. - self._set_additional_thread_info = set_additional_thread_info - - @overrides(InternalThreadCommand.can_be_executed_by) - def can_be_executed_by(self, _thread_id): - timed_out = time.time() >= self._timeout - - py_db = self._py_db() - t = pydevd_find_thread_by_id(self.thread_id) - frame = None - if t and not getattr(t, 'pydev_do_not_trace', None): - additional_info = self._set_additional_thread_info(t) - frame = additional_info.get_topmost_frame(t) - try: - self._cmd = py_db.cmd_factory.make_get_thread_stack_message( - py_db, self.seq, self.thread_id, frame, self._fmt, must_be_suspended=not timed_out, start_frame=self._start_frame, levels=self._levels) - finally: - frame = None - t = None - - return self._cmd is not None or timed_out - - @overrides(InternalThreadCommand.do_it) - def do_it(self, dbg): - if self._cmd is not None: - dbg.writer.add_command(self._cmd) - self._cmd = None - - -def internal_step_in_thread(py_db, thread_id, cmd_id, set_additional_thread_info): - thread_to_step = pydevd_find_thread_by_id(thread_id) - if thread_to_step is not None: - info = set_additional_thread_info(thread_to_step) - info.pydev_original_step_cmd = cmd_id - info.pydev_step_cmd = cmd_id - info.pydev_step_stop = None - info.pydev_state = STATE_RUN - - if py_db.stepping_resumes_all_threads: - resume_threads('*', except_thread=thread_to_step) - - -def internal_smart_step_into(py_db, thread_id, offset, child_offset, set_additional_thread_info): - thread_to_step = pydevd_find_thread_by_id(thread_id) - if thread_to_step is not None: - info = set_additional_thread_info(thread_to_step) - info.pydev_original_step_cmd = CMD_SMART_STEP_INTO - info.pydev_step_cmd = CMD_SMART_STEP_INTO - info.pydev_step_stop = None - info.pydev_smart_parent_offset = int(offset) - info.pydev_smart_child_offset = int(child_offset) - info.pydev_state = STATE_RUN - - if py_db.stepping_resumes_all_threads: - resume_threads('*', except_thread=thread_to_step) - - -class InternalSetNextStatementThread(InternalThreadCommand): - - def __init__(self, thread_id, cmd_id, line, func_name, seq=0): - ''' - cmd_id may actually be one of: - - CMD_RUN_TO_LINE - CMD_SET_NEXT_STATEMENT - CMD_SMART_STEP_INTO - ''' - self.thread_id = thread_id - self.cmd_id = cmd_id - self.line = line - self.seq = seq - - self.func_name = func_name - - def do_it(self, 
dbg): - t = pydevd_find_thread_by_id(self.thread_id) - if t is not None: - info = t.additional_info - info.pydev_original_step_cmd = self.cmd_id - info.pydev_step_cmd = self.cmd_id - info.pydev_step_stop = None - info.pydev_next_line = int(self.line) - info.pydev_func_name = self.func_name - info.pydev_message = str(self.seq) - info.pydev_smart_parent_offset = -1 - info.pydev_smart_child_offset = -1 - info.pydev_state = STATE_RUN - - -@silence_warnings_decorator -def internal_get_variable_json(py_db, request): - ''' - :param VariablesRequest request: - ''' - arguments = request.arguments # : :type arguments: VariablesArguments - variables_reference = arguments.variablesReference - scope = None - if isinstance_checked(variables_reference, ScopeRequest): - scope = variables_reference - variables_reference = variables_reference.variable_reference - - fmt = arguments.format - if hasattr(fmt, 'to_dict'): - fmt = fmt.to_dict() - - variables = [] - try: - try: - variable = py_db.suspended_frames_manager.get_variable(variables_reference) - except KeyError: - pass - else: - for child_var in variable.get_children_variables(fmt=fmt, scope=scope): - variables.append(child_var.get_var_data(fmt=fmt)) - except: - try: - exc, exc_type, tb = sys.exc_info() - err = ''.join(traceback.format_exception(exc, exc_type, tb)) - variables = [{ - 'name': '', - 'value': err, - 'type': '', - 'variablesReference': 0 - }] - except: - err = '' - pydev_log.exception(err) - variables = [] - - body = VariablesResponseBody(variables) - variables_response = pydevd_base_schema.build_response(request, kwargs={'body':body}) - py_db.writer.add_command(NetCommand(CMD_RETURN, 0, variables_response, is_json=True)) - - -class InternalGetVariable(InternalThreadCommand): - ''' gets the value of a variable ''' - - def __init__(self, seq, thread_id, frame_id, scope, attrs): - self.sequence = seq - self.thread_id = thread_id - self.frame_id = frame_id - self.scope = scope - self.attributes = attrs - - @silence_warnings_decorator - def do_it(self, dbg): - ''' Converts request into python variable ''' - try: - xml = StringIO() - xml.write("") - type_name, val_dict = pydevd_vars.resolve_compound_variable_fields( - dbg, self.thread_id, self.frame_id, self.scope, self.attributes) - if val_dict is None: - val_dict = {} - - # assume properly ordered if resolver returns 'OrderedDict' - # check type as string to support OrderedDict backport for older Python - keys = list(val_dict) - if not (type_name == "OrderedDict" or val_dict.__class__.__name__ == "OrderedDict" or IS_PY36_OR_GREATER): - keys = sorted(keys, key=compare_object_attrs_key) - - timer = Timer() - for k in keys: - val = val_dict[k] - evaluate_full_value = pydevd_xml.should_evaluate_full_value(val) - xml.write(pydevd_xml.var_to_xml(val, k, evaluate_full_value=evaluate_full_value)) - timer.report_if_compute_repr_attr_slow(self.attributes, k, type(val)) - - xml.write("") - cmd = dbg.cmd_factory.make_get_variable_message(self.sequence, xml.getvalue()) - xml.close() - dbg.writer.add_command(cmd) - except Exception: - cmd = dbg.cmd_factory.make_error_message( - self.sequence, "Error resolving variables %s" % (get_exception_traceback_str(),)) - dbg.writer.add_command(cmd) - - -class InternalGetArray(InternalThreadCommand): - - def __init__(self, seq, roffset, coffset, rows, cols, format, thread_id, frame_id, scope, attrs): - self.sequence = seq - self.thread_id = thread_id - self.frame_id = frame_id - self.scope = scope - self.name = attrs.split("\t")[-1] - self.attrs = attrs - self.roffset 
= int(roffset) - self.coffset = int(coffset) - self.rows = int(rows) - self.cols = int(cols) - self.format = format - - def do_it(self, dbg): - try: - frame = dbg.find_frame(self.thread_id, self.frame_id) - var = pydevd_vars.eval_in_context(self.name, frame.f_globals, frame.f_locals, py_db=dbg) - xml = pydevd_vars.table_like_struct_to_xml(var, self.name, self.roffset, self.coffset, self.rows, self.cols, self.format) - cmd = dbg.cmd_factory.make_get_array_message(self.sequence, xml) - dbg.writer.add_command(cmd) - except: - cmd = dbg.cmd_factory.make_error_message(self.sequence, "Error resolving array: " + get_exception_traceback_str()) - dbg.writer.add_command(cmd) - - -def internal_change_variable(dbg, seq, thread_id, frame_id, scope, attr, value): - ''' Changes the value of a variable ''' - try: - frame = dbg.find_frame(thread_id, frame_id) - if frame is not None: - result = pydevd_vars.change_attr_expression(frame, attr, value, dbg) - else: - result = None - xml = "" - xml += pydevd_xml.var_to_xml(result, "") - xml += "" - cmd = dbg.cmd_factory.make_variable_changed_message(seq, xml) - dbg.writer.add_command(cmd) - except Exception: - cmd = dbg.cmd_factory.make_error_message(seq, "Error changing variable attr:%s expression:%s traceback:%s" % (attr, value, get_exception_traceback_str())) - dbg.writer.add_command(cmd) - - -def internal_change_variable_json(py_db, request): - ''' - The pydevd_vars.change_attr_expression(thread_id, frame_id, attr, value, dbg) can only - deal with changing at a frame level, so, currently changing the contents of something - in a different scope is currently not supported. - - :param SetVariableRequest request: - ''' - # : :type arguments: SetVariableArguments - arguments = request.arguments - variables_reference = arguments.variablesReference - scope = None - if isinstance_checked(variables_reference, ScopeRequest): - scope = variables_reference - variables_reference = variables_reference.variable_reference - - fmt = arguments.format - if hasattr(fmt, 'to_dict'): - fmt = fmt.to_dict() - - try: - variable = py_db.suspended_frames_manager.get_variable(variables_reference) - except KeyError: - variable = None - - if variable is None: - _write_variable_response( - py_db, request, value='', success=False, message='Unable to find variable container to change: %s.' % (variables_reference,)) - return - - child_var = variable.change_variable(arguments.name, arguments.value, py_db, fmt=fmt) - - if child_var is None: - _write_variable_response( - py_db, request, value='', success=False, message='Unable to change: %s.' 
% (arguments.name,))
-        return
-
-    var_data = child_var.get_var_data(fmt=fmt)
-    body = SetVariableResponseBody(
-        value=var_data['value'],
-        type=var_data['type'],
-        variablesReference=var_data.get('variablesReference'),
-        namedVariables=var_data.get('namedVariables'),
-        indexedVariables=var_data.get('indexedVariables'),
-    )
-    variables_response = pydevd_base_schema.build_response(request, kwargs={'body':body})
-    py_db.writer.add_command(NetCommand(CMD_RETURN, 0, variables_response, is_json=True))
-
-
-def _write_variable_response(py_db, request, value, success, message):
-    body = SetVariableResponseBody('')
-    variables_response = pydevd_base_schema.build_response(
-        request,
-        kwargs={
-            'body':body,
-            'success': False,
-            'message': message
-        })
-    cmd = NetCommand(CMD_RETURN, 0, variables_response, is_json=True)
-    py_db.writer.add_command(cmd)
-
-
-@silence_warnings_decorator
-def internal_get_frame(dbg, seq, thread_id, frame_id):
-    ''' Converts request into python variable '''
-    try:
-        frame = dbg.find_frame(thread_id, frame_id)
-        if frame is not None:
-            hidden_ns = pydevconsole.get_ipython_hidden_vars()
-            xml = "<xml>"
-            xml += pydevd_xml.frame_vars_to_xml(frame.f_locals, hidden_ns)
-            del frame
-            xml += "</xml>"
-            cmd = dbg.cmd_factory.make_get_frame_message(seq, xml)
-            dbg.writer.add_command(cmd)
-        else:
-            # pydevd_vars.dump_frames(thread_id)
-            # don't print this error: frame not found: means that the client is not synchronized (but that's ok)
-            cmd = dbg.cmd_factory.make_error_message(seq, "Frame not found: %s from thread: %s" % (frame_id, thread_id))
-            dbg.writer.add_command(cmd)
-    except:
-        cmd = dbg.cmd_factory.make_error_message(seq, "Error resolving frame: %s from thread: %s" % (frame_id, thread_id))
-        dbg.writer.add_command(cmd)
-
-
-def internal_get_smart_step_into_variants(dbg, seq, thread_id, frame_id, start_line, end_line, set_additional_thread_info):
-    try:
-        thread = pydevd_find_thread_by_id(thread_id)
-        frame = dbg.find_frame(thread_id, frame_id)
-
-        if thread is None or frame is None:
-            cmd = dbg.cmd_factory.make_error_message(seq, "Frame not found: %s from thread: %s" % (frame_id, thread_id))
-            dbg.writer.add_command(cmd)
-            return
-
-        if pydevd_bytecode_utils is None:
-            variants = []
-        else:
-            variants = pydevd_bytecode_utils.calculate_smart_step_into_variants(frame, int(start_line), int(end_line))
-
-        info = set_additional_thread_info(thread)
-
-        # Store the last request (may be used afterwards when stepping).
-        info.pydev_smart_step_into_variants = tuple(variants)
-        xml = "<xml>"
-
-        for variant in variants:
-            if variant.children_variants:
-                for child_variant in variant.children_variants:
-                    # If there are child variants, the current one is just an intermediary, so,
-                    # just create variants for the child (notifying properly about the parent too).
-                    xml += '<variant name="%s" isVisited="%s" line="%s" offset="%s" childOffset="%s" callOrder="%s"/>' % (
-                        quote(child_variant.name),
-                        str(child_variant.is_visited).lower(),
-                        child_variant.line,
-                        variant.offset,
-                        child_variant.offset,
-                        child_variant.call_order,
-                    )
-            else:
-                xml += '<variant name="%s" isVisited="%s" line="%s" offset="%s" callOrder="%s"/>' % (
-                    quote(variant.name),
-                    str(variant.is_visited).lower(),
-                    variant.line,
-                    variant.offset,
-                    variant.call_order,
-                )
-
-        xml += "</xml>"
-        cmd = NetCommand(CMD_GET_SMART_STEP_INTO_VARIANTS, seq, xml)
-        dbg.writer.add_command(cmd)
-    except:
-        # Error is expected (if `dis` module cannot be used -- i.e.: Jython).
- pydev_log.exception('Error calculating Smart Step Into Variants.') - cmd = dbg.cmd_factory.make_error_message( - seq, "Error getting smart step into variants for frame: %s from thread: %s" - % (frame_id, thread_id)) - dbg.writer.add_command(cmd) - - -def internal_get_step_in_targets_json(dbg, seq, thread_id, frame_id, request, set_additional_thread_info): - try: - thread = pydevd_find_thread_by_id(thread_id) - frame = dbg.find_frame(thread_id, frame_id) - - if thread is None or frame is None: - body = StepInTargetsResponseBody([]) - variables_response = pydevd_base_schema.build_response( - request, - kwargs={ - 'body': body, - 'success': False, - 'message': 'Thread to get step in targets seems to have resumed already.' - }) - cmd = NetCommand(CMD_RETURN, 0, variables_response, is_json=True) - dbg.writer.add_command(cmd) - return - - start_line = 0 - end_line = 99999999 - if pydevd_bytecode_utils is None: - variants = [] - else: - variants = pydevd_bytecode_utils.calculate_smart_step_into_variants(frame, start_line, end_line) - - info = set_additional_thread_info(thread) - targets = [] - counter = itertools.count(0) - target_id_to_variant = {} - for variant in variants: - if not variant.is_visited: - if variant.children_variants: - for child_variant in variant.children_variants: - target_id = next(counter) - - if child_variant.call_order > 1: - targets.append(StepInTarget(id=target_id, label='%s (call %s)' % (child_variant.name, child_variant.call_order),)) - else: - targets.append(StepInTarget(id=target_id, label=child_variant.name)) - target_id_to_variant[target_id] = child_variant - - if len(targets) >= 15: # Show at most 15 targets. - break - else: - target_id = next(counter) - if variant.call_order > 1: - targets.append(StepInTarget(id=target_id, label='%s (call %s)' % (variant.name, variant.call_order),)) - else: - targets.append(StepInTarget(id=target_id, label=variant.name)) - target_id_to_variant[target_id] = variant - - if len(targets) >= 15: # Show at most 15 targets. - break - - # Store the last request (may be used afterwards when stepping). - info.pydev_smart_step_into_variants = tuple(variants) - info.target_id_to_smart_step_into_variant = target_id_to_variant - - body = StepInTargetsResponseBody(targets=targets) - response = pydevd_base_schema.build_response(request, kwargs={'body': body}) - cmd = NetCommand(CMD_RETURN, 0, response, is_json=True) - dbg.writer.add_command(cmd) - except Exception as e: - # Error is expected (if `dis` module cannot be used -- i.e.: Jython). - pydev_log.exception('Error calculating Smart Step Into Variants.') - body = StepInTargetsResponseBody([]) - variables_response = pydevd_base_schema.build_response( - request, - kwargs={ - 'body': body, - 'success': False, - 'message': str(e) - }) - cmd = NetCommand(CMD_RETURN, 0, variables_response, is_json=True) - dbg.writer.add_command(cmd) - - -def internal_get_next_statement_targets(dbg, seq, thread_id, frame_id): - ''' gets the valid line numbers for use with set next statement ''' - try: - frame = dbg.find_frame(thread_id, frame_id) - if frame is not None: - code = frame.f_code - xml = "" - try: - linestarts = dis.findlinestarts(code) - except: - # i.e.: jython doesn't provide co_lnotab, so, we can only keep at the current line. 
- xml += "%d" % (frame.f_lineno,) - else: - for _, line in linestarts: - xml += "%d" % (line,) - del frame - xml += "" - cmd = dbg.cmd_factory.make_get_next_statement_targets_message(seq, xml) - dbg.writer.add_command(cmd) - else: - cmd = dbg.cmd_factory.make_error_message(seq, "Frame not found: %s from thread: %s" % (frame_id, thread_id)) - dbg.writer.add_command(cmd) - except: - cmd = dbg.cmd_factory.make_error_message(seq, "Error resolving frame: %s from thread: %s" % (frame_id, thread_id)) - dbg.writer.add_command(cmd) - - -def _evaluate_response(py_db, request, result, error_message=''): - is_error = isinstance(result, ExceptionOnEvaluate) - if is_error: - result = result.result - if not error_message: - body = pydevd_schema.EvaluateResponseBody(result=result, variablesReference=0) - variables_response = pydevd_base_schema.build_response(request, kwargs={'body':body}) - py_db.writer.add_command(NetCommand(CMD_RETURN, 0, variables_response, is_json=True)) - else: - body = pydevd_schema.EvaluateResponseBody(result=result, variablesReference=0) - variables_response = pydevd_base_schema.build_response(request, kwargs={ - 'body':body, 'success':False, 'message': error_message}) - py_db.writer.add_command(NetCommand(CMD_RETURN, 0, variables_response, is_json=True)) - - -_global_frame = None - - -def internal_evaluate_expression_json(py_db, request, thread_id): - ''' - :param EvaluateRequest request: - ''' - global _global_frame - # : :type arguments: EvaluateArguments - - arguments = request.arguments - expression = arguments.expression - frame_id = arguments.frameId - context = arguments.context - fmt = arguments.format - if hasattr(fmt, 'to_dict'): - fmt = fmt.to_dict() - - ctx = NULL - if context == 'repl': - if not py_db.is_output_redirected: - ctx = pydevd_io.redirect_stream_to_pydb_io_messages_context() - else: - # If we're not in a repl (watch, hover, ...) don't show warnings. - ctx = filter_all_warnings() - - with ctx: - try_exec = False - if frame_id is None: - if _global_frame is None: - # Lazily create a frame to be used for evaluation with no frame id. - - def __create_frame(): - yield sys._getframe() - - _global_frame = next(__create_frame()) - - frame = _global_frame - try_exec = True # Always exec in this case - eval_result = None - else: - frame = py_db.find_frame(thread_id, frame_id) - - eval_result = pydevd_vars.evaluate_expression(py_db, frame, expression, is_exec=False) - is_error = isinstance_checked(eval_result, ExceptionOnEvaluate) - if is_error: - if context == 'hover': # In a hover it doesn't make sense to do an exec. - _evaluate_response(py_db, request, result='', error_message='Exception occurred during evaluation.') - return - elif context == 'watch': - # If it's a watch, don't show it as an exception object, rather, format - # it and show it as a string (with success=False). - msg = '%s: %s' % ( - eval_result.result.__class__.__name__, eval_result.result,) - _evaluate_response(py_db, request, result=msg, error_message=msg) - return - else: - # We only try the exec if the failure we had was due to not being able - # to evaluate the expression. - try: - pydevd_vars.compile_as_eval(expression) - except Exception: - try_exec = context == 'repl' - else: - try_exec = False - if context == 'repl': - # In the repl we should show the exception to the user. 
- _evaluate_response_return_exception(py_db, request, eval_result.etype, eval_result.result, eval_result.tb) - return - - if try_exec: - try: - pydevd_vars.evaluate_expression(py_db, frame, expression, is_exec=True) - except (Exception, KeyboardInterrupt): - _evaluate_response_return_exception(py_db, request, *sys.exc_info()) - return - # No result on exec. - _evaluate_response(py_db, request, result='') - return - - # Ok, we have the result (could be an error), let's put it into the saved variables. - frame_tracker = py_db.suspended_frames_manager.get_frame_tracker(thread_id) - if frame_tracker is None: - # This is not really expected. - _evaluate_response(py_db, request, result='', error_message='Thread id: %s is not current thread id.' % (thread_id,)) - return - - safe_repr_custom_attrs = {} - if context == 'clipboard': - safe_repr_custom_attrs = dict( - maxstring_outer=2 ** 64, - maxstring_inner=2 ** 64, - maxother_outer=2 ** 64, - maxother_inner=2 ** 64, - ) - - if context == 'repl' and eval_result is None: - # We don't want "None" to appear when typing in the repl. - body = pydevd_schema.EvaluateResponseBody( - result='', - variablesReference=0, - ) - - else: - variable = frame_tracker.obtain_as_variable(expression, eval_result, frame=frame) - var_data = variable.get_var_data(fmt=fmt, context=context, **safe_repr_custom_attrs) - - body = pydevd_schema.EvaluateResponseBody( - result=var_data['value'], - variablesReference=var_data.get('variablesReference', 0), - type=var_data.get('type'), - presentationHint=var_data.get('presentationHint'), - namedVariables=var_data.get('namedVariables'), - indexedVariables=var_data.get('indexedVariables'), - ) - variables_response = pydevd_base_schema.build_response(request, kwargs={'body':body}) - py_db.writer.add_command(NetCommand(CMD_RETURN, 0, variables_response, is_json=True)) - - -def _evaluate_response_return_exception(py_db, request, exc_type, exc, initial_tb): - try: - tb = initial_tb - - # Show the traceback without pydevd frames. - temp_tb = tb - while temp_tb: - if py_db.get_file_type(temp_tb.tb_frame) == PYDEV_FILE: - tb = temp_tb.tb_next - temp_tb = temp_tb.tb_next - - if tb is None: - tb = initial_tb - err = ''.join(traceback.format_exception(exc_type, exc, tb)) - - # Make sure we don't keep references to them. - exc = None - exc_type = None - tb = None - temp_tb = None - initial_tb = None - except: - err = '' - pydev_log.exception(err) - - # Currently there is an issue in VSC where returning success=false for an - # eval request, in repl context, VSC does not show the error response in - # the debug console. So return the error message in result as well. 
- _evaluate_response(py_db, request, result=err, error_message=err) - - -@silence_warnings_decorator -def internal_evaluate_expression(dbg, seq, thread_id, frame_id, expression, is_exec, trim_if_too_big, attr_to_set_result): - ''' gets the value of a variable ''' - try: - frame = dbg.find_frame(thread_id, frame_id) - if frame is not None: - result = pydevd_vars.evaluate_expression(dbg, frame, expression, is_exec) - if attr_to_set_result != "": - pydevd_vars.change_attr_expression(frame, attr_to_set_result, expression, dbg, result) - else: - result = None - - xml = "" - xml += pydevd_xml.var_to_xml(result, expression, trim_if_too_big) - xml += "" - cmd = dbg.cmd_factory.make_evaluate_expression_message(seq, xml) - dbg.writer.add_command(cmd) - except: - exc = get_exception_traceback_str() - cmd = dbg.cmd_factory.make_error_message(seq, "Error evaluating expression " + exc) - dbg.writer.add_command(cmd) - - -def _set_expression_response(py_db, request, result, error_message): - body = pydevd_schema.SetExpressionResponseBody(result='', variablesReference=0) - variables_response = pydevd_base_schema.build_response(request, kwargs={ - 'body':body, 'success':False, 'message': error_message}) - py_db.writer.add_command(NetCommand(CMD_RETURN, 0, variables_response, is_json=True)) - - -def internal_set_expression_json(py_db, request, thread_id): - # : :type arguments: SetExpressionArguments - - arguments = request.arguments - expression = arguments.expression - frame_id = arguments.frameId - value = arguments.value - fmt = arguments.format - if hasattr(fmt, 'to_dict'): - fmt = fmt.to_dict() - - frame = py_db.find_frame(thread_id, frame_id) - exec_code = '%s = (%s)' % (expression, value) - result = pydevd_vars.evaluate_expression(py_db, frame, exec_code, is_exec=True) - is_error = isinstance(result, ExceptionOnEvaluate) - - if is_error: - _set_expression_response(py_db, request, result, error_message='Error executing: %s' % (exec_code,)) - return - - # Ok, we have the result (could be an error), let's put it into the saved variables. - frame_tracker = py_db.suspended_frames_manager.get_frame_tracker(thread_id) - if frame_tracker is None: - # This is not really expected. - _set_expression_response(py_db, request, result, error_message='Thread id: %s is not current thread id.' % (thread_id,)) - return - - # Now that the exec is done, get the actual value changed to return. - result = pydevd_vars.evaluate_expression(py_db, frame, expression, is_exec=False) - variable = frame_tracker.obtain_as_variable(expression, result, frame=frame) - var_data = variable.get_var_data(fmt=fmt) - - body = pydevd_schema.SetExpressionResponseBody( - value=var_data['value'], - variablesReference=var_data.get('variablesReference', 0), - type=var_data.get('type'), - presentationHint=var_data.get('presentationHint'), - namedVariables=var_data.get('namedVariables'), - indexedVariables=var_data.get('indexedVariables'), - ) - variables_response = pydevd_base_schema.build_response(request, kwargs={'body':body}) - py_db.writer.add_command(NetCommand(CMD_RETURN, 0, variables_response, is_json=True)) - - -def internal_get_completions(dbg, seq, thread_id, frame_id, act_tok, line=-1, column=-1): - ''' - Note that if the column is >= 0, the act_tok is considered text and the actual - activation token/qualifier is computed in this command. 
- ''' - try: - remove_path = None - try: - qualifier = '' - if column >= 0: - token_and_qualifier = extract_token_and_qualifier(act_tok, line, column) - act_tok = token_and_qualifier[0] - if act_tok: - act_tok += '.' - qualifier = token_and_qualifier[1] - - frame = dbg.find_frame(thread_id, frame_id) - if frame is not None: - completions = _pydev_completer.generate_completions(frame, act_tok) - - # Note that qualifier and start are only actually valid for the - # Debug Adapter Protocol (for the line-based protocol, the IDE - # is required to filter the completions returned). - cmd = dbg.cmd_factory.make_get_completions_message( - seq, completions, qualifier, start=column - len(qualifier)) - dbg.writer.add_command(cmd) - else: - cmd = dbg.cmd_factory.make_error_message(seq, "internal_get_completions: Frame not found: %s from thread: %s" % (frame_id, thread_id)) - dbg.writer.add_command(cmd) - - finally: - if remove_path is not None: - sys.path.remove(remove_path) - - except: - exc = get_exception_traceback_str() - sys.stderr.write('%s\n' % (exc,)) - cmd = dbg.cmd_factory.make_error_message(seq, "Error evaluating expression " + exc) - dbg.writer.add_command(cmd) - - -def internal_get_description(dbg, seq, thread_id, frame_id, expression): - ''' Fetch the variable description stub from the debug console - ''' - try: - frame = dbg.find_frame(thread_id, frame_id) - description = pydevd_console.get_description(frame, thread_id, frame_id, expression) - description = pydevd_xml.make_valid_xml_value(quote(description, '/>_= \t')) - description_xml = '' % description - cmd = dbg.cmd_factory.make_get_description_message(seq, description_xml) - dbg.writer.add_command(cmd) - except: - exc = get_exception_traceback_str() - cmd = dbg.cmd_factory.make_error_message(seq, "Error in fetching description" + exc) - dbg.writer.add_command(cmd) - - -def build_exception_info_response(dbg, thread_id, thread, request_seq, set_additional_thread_info, iter_visible_frames_info, max_frames): - ''' - :return ExceptionInfoResponse - ''' - additional_info = set_additional_thread_info(thread) - topmost_frame = additional_info.get_topmost_frame(thread) - - current_paused_frame_name = '' - - source_path = '' # This is an extra bit of data used by Visual Studio - stack_str_lst = [] - name = None - description = None - - if topmost_frame is not None: - try: - try: - frames_list = dbg.suspended_frames_manager.get_frames_list(thread_id) - while frames_list is not None and len(frames_list): - frames = [] - - frame = None - - if not name: - exc_type = frames_list.exc_type - if exc_type is not None: - try: - name = exc_type.__qualname__ - except: - try: - name = exc_type.__name__ - except: - try: - name = str(exc_type) - except: - pass - - if not description: - exc_desc = frames_list.exc_desc - if exc_desc is not None: - try: - description = str(exc_desc) - except: - pass - - for frame_id, frame, method_name, original_filename, filename_in_utf8, lineno, _applied_mapping, show_as_current_frame, line_col_info in \ - iter_visible_frames_info(dbg, frames_list): - - line_text = linecache.getline(original_filename, lineno) - - # Never filter out plugin frames! 
-                        if not getattr(frame, 'IS_PLUGIN_FRAME', False):
-                            if dbg.is_files_filter_enabled and dbg.apply_files_filter(frame, original_filename, False):
-                                continue
-
-                        if show_as_current_frame:
-                            current_paused_frame_name = method_name
-                            method_name += ' (Current frame)'
-                        frames.append((filename_in_utf8, lineno, method_name, line_text, line_col_info))
-
-                    if not source_path and frames:
-                        source_path = frames[0][0]
-
-                    if IS_PY311_OR_GREATER:
-                        stack_summary = traceback.StackSummary()
-                        for filename_in_utf8, lineno, method_name, line_text, line_col_info in frames[-max_frames:]:
-                            frame_summary = traceback.FrameSummary(filename_in_utf8, lineno, method_name, line=line_text)
-                            if line_col_info is not None:
-                                frame_summary.end_lineno = line_col_info.end_lineno
-                                frame_summary.colno = line_col_info.colno
-                                frame_summary.end_colno = line_col_info.end_colno
-                            stack_summary.append(frame_summary)
-
-                        stack_str = ''.join(stack_summary.format())
-
-                    else:
-                        # Note: remove col info (just used in 3.11).
-                        stack_str = ''.join(traceback.format_list((x[:-1] for x in frames[-max_frames:])))
-
-                    try:
-                        stype = frames_list.exc_type.__qualname__
-                        smod = frames_list.exc_type.__module__
-                        if smod not in ("__main__", "builtins"):
-                            if not isinstance(smod, str):
-                                smod = "<unknown>"
-                            stype = smod + '.' + stype
-                    except Exception:
-                        stype = '<unable to get exception type>'
-                        pydev_log.exception('Error getting exception type.')
-
-                    stack_str += '%s: %s\n' % (stype, frames_list.exc_desc)
-                    stack_str += frames_list.exc_context_msg
-                    stack_str_lst.append(stack_str)
-
-                    frames_list = frames_list.chained_frames_list
-                    if frames_list is None or not frames_list:
-                        break
-
-            except:
-                pydev_log.exception('Error on build_exception_info_response.')
-        finally:
-            topmost_frame = None
-    full_stack_str = ''.join(reversed(stack_str_lst))
-
-    if not name:
-        name = 'exception: type unknown'
-    if not description:
-        description = 'exception: no description'
-
-    if current_paused_frame_name:
-        name += ' (note: full exception trace is shown but execution is paused at: %s)' % (current_paused_frame_name,)
-
-    if thread.stop_reason == CMD_STEP_CAUGHT_EXCEPTION:
-        break_mode = pydevd_schema.ExceptionBreakMode.ALWAYS
-    else:
-        break_mode = pydevd_schema.ExceptionBreakMode.UNHANDLED
-
-    response = pydevd_schema.ExceptionInfoResponse(
-        request_seq=request_seq,
-        success=True,
-        command='exceptionInfo',
-        body=pydevd_schema.ExceptionInfoResponseBody(
-            exceptionId=name,
-            description=description,
-            breakMode=break_mode,
-            details=pydevd_schema.ExceptionDetails(
-                message=description,
-                typeName=name,
-                stackTrace=full_stack_str,
-                source=source_path,
-                # Note: ExceptionDetails actually accepts an 'innerException', but
-                # when passing it, VSCode is not showing the stack trace at all.
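-                # Because of that, the chained exception information is folded
-                # into the 'stackTrace' string built above instead.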
-            )
-        )
-    )
-    return response
-
-
-def internal_get_exception_details_json(dbg, request, thread_id, thread, max_frames, set_additional_thread_info=None, iter_visible_frames_info=None):
-    ''' Fetch exception details
-    '''
-    try:
-        response = build_exception_info_response(dbg, thread_id, thread, request.seq, set_additional_thread_info, iter_visible_frames_info, max_frames)
-    except:
-        exc = get_exception_traceback_str()
-        response = pydevd_base_schema.build_response(request, kwargs={
-            'success': False,
-            'message': exc,
-            'body': {}
-        })
-    dbg.writer.add_command(NetCommand(CMD_RETURN, 0, response, is_json=True))
-
-
-class InternalGetBreakpointException(InternalThreadCommand):
-    ''' Send details of exception raised while evaluating conditional breakpoint '''
-
-    def __init__(self, thread_id, exc_type, stacktrace):
-        self.sequence = 0
-        self.thread_id = thread_id
-        self.stacktrace = stacktrace
-        self.exc_type = exc_type
-
-    def do_it(self, dbg):
-        try:
-            callstack = "<xml>"
-
-            makeValid = pydevd_xml.make_valid_xml_value
-
-            for filename, line, methodname, methodobj in self.stacktrace:
-                if not filesystem_encoding_is_utf8 and hasattr(filename, "decode"):
-                    # filename is a byte string encoded using the file system encoding
-                    # convert it to utf8
-                    filename = filename.decode(file_system_encoding).encode("utf-8")
-
-                callstack += '<frame thread_id="%s" file="%s" line="%s" name="%s" obj="%s" />' \
-                             % (self.thread_id, makeValid(filename), line, makeValid(methodname), makeValid(methodobj))
-            callstack += "</xml>"
-
-            cmd = dbg.cmd_factory.make_send_breakpoint_exception_message(self.sequence, self.exc_type + "\t" + callstack)
-            dbg.writer.add_command(cmd)
-        except:
-            exc = get_exception_traceback_str()
-            sys.stderr.write('%s\n' % (exc,))
-            cmd = dbg.cmd_factory.make_error_message(self.sequence, "Error Sending Exception: " + exc)
-            dbg.writer.add_command(cmd)
-
-
-class InternalSendCurrExceptionTrace(InternalThreadCommand):
-    ''' Send details of the exception that was caught and where we've broken in.
-    '''
-
-    def __init__(self, thread_id, arg, curr_frame_id):
-        '''
-        :param arg: exception type, description, traceback object
-        '''
-        self.sequence = 0
-        self.thread_id = thread_id
-        self.curr_frame_id = curr_frame_id
-        self.arg = arg
-
-    def do_it(self, dbg):
-        try:
-            cmd = dbg.cmd_factory.make_send_curr_exception_trace_message(dbg, self.sequence, self.thread_id, self.curr_frame_id, *self.arg)
-            del self.arg
-            dbg.writer.add_command(cmd)
-        except:
-            exc = get_exception_traceback_str()
-            sys.stderr.write('%s\n' % (exc,))
-            cmd = dbg.cmd_factory.make_error_message(self.sequence, "Error Sending Current Exception Trace: " + exc)
-            dbg.writer.add_command(cmd)
-
-
-class InternalSendCurrExceptionTraceProceeded(InternalThreadCommand):
-    ''' Notify the client that execution proceeded after a caught exception was reported.
-    '''
-
-    def __init__(self, thread_id):
-        self.sequence = 0
-        self.thread_id = thread_id
-
-    def do_it(self, dbg):
-        try:
-            cmd = dbg.cmd_factory.make_send_curr_exception_trace_proceeded_message(self.sequence, self.thread_id)
-            dbg.writer.add_command(cmd)
-        except:
-            exc = get_exception_traceback_str()
-            sys.stderr.write('%s\n' % (exc,))
-            cmd = dbg.cmd_factory.make_error_message(self.sequence, "Error Sending Current Exception Trace Proceeded: " + exc)
-            dbg.writer.add_command(cmd)
-
-
-class InternalEvaluateConsoleExpression(InternalThreadCommand):
-    ''' Execute the given command in the debug console '''
-
-    def __init__(self, seq, thread_id, frame_id, line, buffer_output=True):
-        self.sequence = seq
-        self.thread_id = thread_id
-        self.frame_id = frame_id
-        self.line = line
-        self.buffer_output = buffer_output
-
-    def do_it(self, dbg):
-        ''' Create an XML for console output, error and more (true/false)
-        <xml>
-            <output message=output_message></output>
-            <error message=error_message></error>
-            <more>true/false</more>
-        </xml>
-        '''
-        try:
-            frame = dbg.find_frame(self.thread_id, self.frame_id)
-            if frame is not None:
-                console_message = pydevd_console.execute_console_command(
-                    frame, self.thread_id, self.frame_id, self.line, self.buffer_output)
-
-                cmd = dbg.cmd_factory.make_send_console_message(self.sequence, console_message.to_xml())
-            else:
-                from _pydevd_bundle.pydevd_console import ConsoleMessage
-                console_message = ConsoleMessage()
-                console_message.add_console_message(
-                    pydevd_console.CONSOLE_ERROR,
-                    "Select the valid frame in the debug view (thread: %s, frame: %s invalid)" % (self.thread_id, self.frame_id),
-                )
-                cmd = dbg.cmd_factory.make_error_message(self.sequence, console_message.to_xml())
-        except:
-            exc = get_exception_traceback_str()
-            cmd = dbg.cmd_factory.make_error_message(self.sequence, "Error evaluating expression " + exc)
-        dbg.writer.add_command(cmd)
-
-
-class InternalRunCustomOperation(InternalThreadCommand):
-    ''' Run a custom command on an expression
-    '''
-
-    def __init__(self, seq, thread_id, frame_id, scope, attrs, style, encoded_code_or_file, fnname):
-        self.sequence = seq
-        self.thread_id = thread_id
-        self.frame_id = frame_id
-        self.scope = scope
-        self.attrs = attrs
-        self.style = style
-        self.code_or_file = unquote_plus(encoded_code_or_file)
-        self.fnname = fnname
-
-    def do_it(self, dbg):
-        try:
-            res = pydevd_vars.custom_operation(dbg, self.thread_id, self.frame_id, self.scope, self.attrs,
-                                               self.style, self.code_or_file, self.fnname)
-            resEncoded = quote_plus(res)
-            cmd = dbg.cmd_factory.make_custom_operation_message(self.sequence, resEncoded)
-            dbg.writer.add_command(cmd)
-        except:
-            exc = get_exception_traceback_str()
-            cmd = dbg.cmd_factory.make_error_message(self.sequence, "Error in running custom operation" + exc)
-            dbg.writer.add_command(cmd)
-
-
-class InternalConsoleGetCompletions(InternalThreadCommand):
-    ''' Fetch the completions in the debug console
-    '''
-
-    def __init__(self, seq, thread_id, frame_id, act_tok):
-        self.sequence = seq
-        self.thread_id = thread_id
-        self.frame_id = frame_id
-        self.act_tok = act_tok
-
-    def do_it(self, dbg):
-        ''' Get completions and write back to the client
-        '''
-        try:
-            frame = dbg.find_frame(self.thread_id, self.frame_id)
-            completions_xml = pydevd_console.get_completions(frame, self.act_tok)
-            cmd = dbg.cmd_factory.make_send_console_message(self.sequence, completions_xml)
-            dbg.writer.add_command(cmd)
-        except:
-            exc = get_exception_traceback_str()
-            cmd = dbg.cmd_factory.make_error_message(self.sequence, "Error in fetching completions" + exc)
-            dbg.writer.add_command(cmd)
-
-
-class InternalConsoleExec(InternalThreadCommand):
-    ''' Executes an expression in the debug console. '''
-
-    def __init__(self, seq, thread_id, frame_id, expression):
-        self.sequence = seq
-        self.thread_id = thread_id
-        self.frame_id = frame_id
-        self.expression = expression
-
-    def do_it(self, dbg):
-        ''' Executes the console expression and writes the result back as XML. '''
-        try:
-            try:
-                # don't trace new threads created by console command
-                disable_trace_thread_modules()
-
-                result = pydevconsole.console_exec(self.thread_id, self.frame_id, self.expression, dbg)
-                xml = "<xml>"
-                xml += pydevd_xml.var_to_xml(result, "")
-                xml += "</xml>"
-                cmd = dbg.cmd_factory.make_evaluate_expression_message(self.sequence, xml)
-                dbg.writer.add_command(cmd)
-            except:
-                exc = get_exception_traceback_str()
-                sys.stderr.write('%s\n' % (exc,))
-                cmd = dbg.cmd_factory.make_error_message(self.sequence, "Error evaluating console expression " + exc)
-                dbg.writer.add_command(cmd)
-        finally:
-            enable_trace_thread_modules()
-
-            sys.stderr.flush()
-            sys.stdout.flush()
-
-
-class InternalLoadFullValue(InternalThreadCommand):
-    '''
-    Loads values asynchronously
-    '''
-
-    def __init__(self, seq, thread_id, frame_id, vars):
-        self.sequence = seq
-        self.thread_id = thread_id
-        self.frame_id = frame_id
-        self.vars = vars
-
-    @silence_warnings_decorator
-    def do_it(self, dbg):
-        '''Starts a thread that will load values asynchronously'''
-        try:
-            var_objects = []
-            for variable in self.vars:
-                variable = variable.strip()
-                if len(variable) > 0:
-                    if '\t' in variable:  # there are attributes beyond scope
-                        scope, attrs = variable.split('\t', 1)
-                        name = attrs[0]
-                    else:
-                        scope, attrs = (variable, None)
-                        name = scope
-                    var_obj = pydevd_vars.getVariable(dbg, self.thread_id, self.frame_id, scope, attrs)
-                    var_objects.append((var_obj, name))
-
-            t = GetValueAsyncThreadDebug(dbg, dbg, self.sequence, var_objects)
-            t.start()
-        except:
-            exc = get_exception_traceback_str()
-            sys.stderr.write('%s\n' % (exc,))
-            cmd = dbg.cmd_factory.make_error_message(self.sequence, "Error evaluating variable %s " % exc)
-            dbg.writer.add_command(cmd)
-
-
-class AbstractGetValueAsyncThread(PyDBDaemonThread):
-    '''
-    Abstract class for a thread that evaluates values for async variables.
-    '''
-
-    def __init__(self, py_db, frame_accessor, seq, var_objects):
-        PyDBDaemonThread.__init__(self, py_db)
-        self.frame_accessor = frame_accessor
-        self.seq = seq
-        self.var_objs = var_objects
-        self.cancel_event = threading.Event()
-
-    def send_result(self, xml):
-        raise NotImplementedError()
-
-    @overrides(PyDBDaemonThread._on_run)
-    def _on_run(self):
-        start = time.time()
-        xml = StringIO()
-        xml.write("<xml>")
-        for (var_obj, name) in self.var_objs:
-            current_time = time.time()
-            if current_time - start > ASYNC_EVAL_TIMEOUT_SEC or self.cancel_event.is_set():
-                break
-            xml.write(pydevd_xml.var_to_xml(var_obj, name, evaluate_full_value=True))
-        xml.write("</xml>")
-        self.send_result(xml)
-        xml.close()
-
-
-class GetValueAsyncThreadDebug(AbstractGetValueAsyncThread):
-    '''
-    A thread for evaluating async values which returns the result to the debugger:
-    creates the message and sends it via the writer thread.
-    '''
-
-    def send_result(self, xml):
-        if self.frame_accessor is not None:
-            cmd = self.frame_accessor.cmd_factory.make_load_full_value_message(self.seq, xml.getvalue())
-            self.frame_accessor.writer.add_command(cmd)
-
-
-class GetValueAsyncThreadConsole(AbstractGetValueAsyncThread):
-    '''
-    A thread for evaluating async values which returns the result to the console:
-    sends the result directly to the console's server.
-    '''
-
-    def send_result(self, xml):
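-        # Unlike the debug variant above, this pushes the XML straight to the
-        # interactive console's server rather than going through the writer thread.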
- if self.frame_accessor is not None: - self.frame_accessor.ReturnFullValue(self.seq, xml.getvalue()) - diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/datasets/lvis_v1_category_image_count.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/datasets/lvis_v1_category_image_count.py deleted file mode 100644 index 31bf0cfcd5096ab87835db86a28671d474514c40..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/datasets/lvis_v1_category_image_count.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Autogen with -# with open("lvis_v1_train.json", "r") as f: -# a = json.load(f) -# c = a["categories"] -# for x in c: -# del x["name"] -# del x["instance_count"] -# del x["def"] -# del x["synonyms"] -# del x["frequency"] -# del x["synset"] -# LVIS_CATEGORY_IMAGE_COUNT = repr(c) + " # noqa" -# with open("/tmp/lvis_category_image_count.py", "wt") as f: -# f.write(f"LVIS_CATEGORY_IMAGE_COUNT = {LVIS_CATEGORY_IMAGE_COUNT}") -# Then paste the contents of that file below - -# fmt: off -LVIS_CATEGORY_IMAGE_COUNT = [{'id': 1, 'image_count': 64}, {'id': 2, 'image_count': 364}, {'id': 3, 'image_count': 1911}, {'id': 4, 'image_count': 149}, {'id': 5, 'image_count': 29}, {'id': 6, 'image_count': 26}, {'id': 7, 'image_count': 59}, {'id': 8, 'image_count': 22}, {'id': 9, 'image_count': 12}, {'id': 10, 'image_count': 28}, {'id': 11, 'image_count': 505}, {'id': 12, 'image_count': 1207}, {'id': 13, 'image_count': 4}, {'id': 14, 'image_count': 10}, {'id': 15, 'image_count': 500}, {'id': 16, 'image_count': 33}, {'id': 17, 'image_count': 3}, {'id': 18, 'image_count': 44}, {'id': 19, 'image_count': 561}, {'id': 20, 'image_count': 8}, {'id': 21, 'image_count': 9}, {'id': 22, 'image_count': 33}, {'id': 23, 'image_count': 1883}, {'id': 24, 'image_count': 98}, {'id': 25, 'image_count': 70}, {'id': 26, 'image_count': 46}, {'id': 27, 'image_count': 117}, {'id': 28, 'image_count': 41}, {'id': 29, 'image_count': 1395}, {'id': 30, 'image_count': 7}, {'id': 31, 'image_count': 1}, {'id': 32, 'image_count': 314}, {'id': 33, 'image_count': 31}, {'id': 34, 'image_count': 1905}, {'id': 35, 'image_count': 1859}, {'id': 36, 'image_count': 1623}, {'id': 37, 'image_count': 47}, {'id': 38, 'image_count': 3}, {'id': 39, 'image_count': 3}, {'id': 40, 'image_count': 1}, {'id': 41, 'image_count': 305}, {'id': 42, 'image_count': 6}, {'id': 43, 'image_count': 210}, {'id': 44, 'image_count': 36}, {'id': 45, 'image_count': 1787}, {'id': 46, 'image_count': 17}, {'id': 47, 'image_count': 51}, {'id': 48, 'image_count': 138}, {'id': 49, 'image_count': 3}, {'id': 50, 'image_count': 1470}, {'id': 51, 'image_count': 3}, {'id': 52, 'image_count': 2}, {'id': 53, 'image_count': 186}, {'id': 54, 'image_count': 76}, {'id': 55, 'image_count': 26}, {'id': 56, 'image_count': 303}, {'id': 57, 'image_count': 738}, {'id': 58, 'image_count': 1799}, {'id': 59, 'image_count': 1934}, {'id': 60, 'image_count': 1609}, {'id': 61, 'image_count': 1622}, {'id': 62, 'image_count': 41}, {'id': 63, 'image_count': 4}, {'id': 64, 'image_count': 11}, {'id': 65, 'image_count': 270}, {'id': 66, 'image_count': 349}, {'id': 67, 'image_count': 42}, {'id': 68, 'image_count': 823}, {'id': 69, 'image_count': 6}, {'id': 70, 'image_count': 48}, {'id': 71, 'image_count': 3}, {'id': 72, 'image_count': 42}, {'id': 73, 'image_count': 24}, {'id': 74, 'image_count': 16}, {'id': 75, 'image_count': 605}, {'id': 76, 'image_count': 646}, {'id': 77, 
'image_count': 1765}, {'id': 78, 'image_count': 2}, {'id': 79, 'image_count': 125}, {'id': 80, 'image_count': 1420}, {'id': 81, 'image_count': 140}, {'id': 82, 'image_count': 4}, {'id': 83, 'image_count': 322}, {'id': 84, 'image_count': 60}, {'id': 85, 'image_count': 2}, {'id': 86, 'image_count': 231}, {'id': 87, 'image_count': 333}, {'id': 88, 'image_count': 1941}, {'id': 89, 'image_count': 367}, {'id': 90, 'image_count': 1922}, {'id': 91, 'image_count': 18}, {'id': 92, 'image_count': 81}, {'id': 93, 'image_count': 1}, {'id': 94, 'image_count': 1852}, {'id': 95, 'image_count': 430}, {'id': 96, 'image_count': 247}, {'id': 97, 'image_count': 94}, {'id': 98, 'image_count': 21}, {'id': 99, 'image_count': 1821}, {'id': 100, 'image_count': 16}, {'id': 101, 'image_count': 12}, {'id': 102, 'image_count': 25}, {'id': 103, 'image_count': 41}, {'id': 104, 'image_count': 244}, {'id': 105, 'image_count': 7}, {'id': 106, 'image_count': 1}, {'id': 107, 'image_count': 40}, {'id': 108, 'image_count': 40}, {'id': 109, 'image_count': 104}, {'id': 110, 'image_count': 1671}, {'id': 111, 'image_count': 49}, {'id': 112, 'image_count': 243}, {'id': 113, 'image_count': 2}, {'id': 114, 'image_count': 242}, {'id': 115, 'image_count': 271}, {'id': 116, 'image_count': 104}, {'id': 117, 'image_count': 8}, {'id': 118, 'image_count': 1758}, {'id': 119, 'image_count': 1}, {'id': 120, 'image_count': 48}, {'id': 121, 'image_count': 14}, {'id': 122, 'image_count': 40}, {'id': 123, 'image_count': 1}, {'id': 124, 'image_count': 37}, {'id': 125, 'image_count': 1510}, {'id': 126, 'image_count': 6}, {'id': 127, 'image_count': 1903}, {'id': 128, 'image_count': 70}, {'id': 129, 'image_count': 86}, {'id': 130, 'image_count': 7}, {'id': 131, 'image_count': 5}, {'id': 132, 'image_count': 1406}, {'id': 133, 'image_count': 1901}, {'id': 134, 'image_count': 15}, {'id': 135, 'image_count': 28}, {'id': 136, 'image_count': 6}, {'id': 137, 'image_count': 494}, {'id': 138, 'image_count': 234}, {'id': 139, 'image_count': 1922}, {'id': 140, 'image_count': 1}, {'id': 141, 'image_count': 35}, {'id': 142, 'image_count': 5}, {'id': 143, 'image_count': 1828}, {'id': 144, 'image_count': 8}, {'id': 145, 'image_count': 63}, {'id': 146, 'image_count': 1668}, {'id': 147, 'image_count': 4}, {'id': 148, 'image_count': 95}, {'id': 149, 'image_count': 17}, {'id': 150, 'image_count': 1567}, {'id': 151, 'image_count': 2}, {'id': 152, 'image_count': 103}, {'id': 153, 'image_count': 50}, {'id': 154, 'image_count': 1309}, {'id': 155, 'image_count': 6}, {'id': 156, 'image_count': 92}, {'id': 157, 'image_count': 19}, {'id': 158, 'image_count': 37}, {'id': 159, 'image_count': 4}, {'id': 160, 'image_count': 709}, {'id': 161, 'image_count': 9}, {'id': 162, 'image_count': 82}, {'id': 163, 'image_count': 15}, {'id': 164, 'image_count': 3}, {'id': 165, 'image_count': 61}, {'id': 166, 'image_count': 51}, {'id': 167, 'image_count': 5}, {'id': 168, 'image_count': 13}, {'id': 169, 'image_count': 642}, {'id': 170, 'image_count': 24}, {'id': 171, 'image_count': 255}, {'id': 172, 'image_count': 9}, {'id': 173, 'image_count': 1808}, {'id': 174, 'image_count': 31}, {'id': 175, 'image_count': 158}, {'id': 176, 'image_count': 80}, {'id': 177, 'image_count': 1884}, {'id': 178, 'image_count': 158}, {'id': 179, 'image_count': 2}, {'id': 180, 'image_count': 12}, {'id': 181, 'image_count': 1659}, {'id': 182, 'image_count': 7}, {'id': 183, 'image_count': 834}, {'id': 184, 'image_count': 57}, {'id': 185, 'image_count': 174}, {'id': 186, 'image_count': 95}, {'id': 187, 'image_count': 27}, 
{'id': 188, 'image_count': 22}, {'id': 189, 'image_count': 1391}, {'id': 190, 'image_count': 90}, {'id': 191, 'image_count': 40}, {'id': 192, 'image_count': 445}, {'id': 193, 'image_count': 21}, {'id': 194, 'image_count': 1132}, {'id': 195, 'image_count': 177}, {'id': 196, 'image_count': 4}, {'id': 197, 'image_count': 17}, {'id': 198, 'image_count': 84}, {'id': 199, 'image_count': 55}, {'id': 200, 'image_count': 30}, {'id': 201, 'image_count': 25}, {'id': 202, 'image_count': 2}, {'id': 203, 'image_count': 125}, {'id': 204, 'image_count': 1135}, {'id': 205, 'image_count': 19}, {'id': 206, 'image_count': 72}, {'id': 207, 'image_count': 1926}, {'id': 208, 'image_count': 159}, {'id': 209, 'image_count': 7}, {'id': 210, 'image_count': 1}, {'id': 211, 'image_count': 13}, {'id': 212, 'image_count': 35}, {'id': 213, 'image_count': 18}, {'id': 214, 'image_count': 8}, {'id': 215, 'image_count': 6}, {'id': 216, 'image_count': 35}, {'id': 217, 'image_count': 1222}, {'id': 218, 'image_count': 103}, {'id': 219, 'image_count': 28}, {'id': 220, 'image_count': 63}, {'id': 221, 'image_count': 28}, {'id': 222, 'image_count': 5}, {'id': 223, 'image_count': 7}, {'id': 224, 'image_count': 14}, {'id': 225, 'image_count': 1918}, {'id': 226, 'image_count': 133}, {'id': 227, 'image_count': 16}, {'id': 228, 'image_count': 27}, {'id': 229, 'image_count': 110}, {'id': 230, 'image_count': 1895}, {'id': 231, 'image_count': 4}, {'id': 232, 'image_count': 1927}, {'id': 233, 'image_count': 8}, {'id': 234, 'image_count': 1}, {'id': 235, 'image_count': 263}, {'id': 236, 'image_count': 10}, {'id': 237, 'image_count': 2}, {'id': 238, 'image_count': 3}, {'id': 239, 'image_count': 87}, {'id': 240, 'image_count': 9}, {'id': 241, 'image_count': 71}, {'id': 242, 'image_count': 13}, {'id': 243, 'image_count': 18}, {'id': 244, 'image_count': 2}, {'id': 245, 'image_count': 5}, {'id': 246, 'image_count': 45}, {'id': 247, 'image_count': 1}, {'id': 248, 'image_count': 23}, {'id': 249, 'image_count': 32}, {'id': 250, 'image_count': 4}, {'id': 251, 'image_count': 1}, {'id': 252, 'image_count': 858}, {'id': 253, 'image_count': 661}, {'id': 254, 'image_count': 168}, {'id': 255, 'image_count': 210}, {'id': 256, 'image_count': 65}, {'id': 257, 'image_count': 4}, {'id': 258, 'image_count': 2}, {'id': 259, 'image_count': 159}, {'id': 260, 'image_count': 31}, {'id': 261, 'image_count': 811}, {'id': 262, 'image_count': 1}, {'id': 263, 'image_count': 42}, {'id': 264, 'image_count': 27}, {'id': 265, 'image_count': 2}, {'id': 266, 'image_count': 5}, {'id': 267, 'image_count': 95}, {'id': 268, 'image_count': 32}, {'id': 269, 'image_count': 1}, {'id': 270, 'image_count': 1}, {'id': 271, 'image_count': 1844}, {'id': 272, 'image_count': 897}, {'id': 273, 'image_count': 31}, {'id': 274, 'image_count': 23}, {'id': 275, 'image_count': 1}, {'id': 276, 'image_count': 202}, {'id': 277, 'image_count': 746}, {'id': 278, 'image_count': 44}, {'id': 279, 'image_count': 14}, {'id': 280, 'image_count': 26}, {'id': 281, 'image_count': 1}, {'id': 282, 'image_count': 2}, {'id': 283, 'image_count': 25}, {'id': 284, 'image_count': 238}, {'id': 285, 'image_count': 592}, {'id': 286, 'image_count': 26}, {'id': 287, 'image_count': 5}, {'id': 288, 'image_count': 42}, {'id': 289, 'image_count': 13}, {'id': 290, 'image_count': 46}, {'id': 291, 'image_count': 1}, {'id': 292, 'image_count': 8}, {'id': 293, 'image_count': 34}, {'id': 294, 'image_count': 5}, {'id': 295, 'image_count': 1}, {'id': 296, 'image_count': 1871}, {'id': 297, 'image_count': 717}, {'id': 298, 'image_count': 
1010}, {'id': 299, 'image_count': 679}, {'id': 300, 'image_count': 3}, {'id': 301, 'image_count': 4}, {'id': 302, 'image_count': 1}, {'id': 303, 'image_count': 166}, {'id': 304, 'image_count': 2}, {'id': 305, 'image_count': 266}, {'id': 306, 'image_count': 101}, {'id': 307, 'image_count': 6}, {'id': 308, 'image_count': 14}, {'id': 309, 'image_count': 133}, {'id': 310, 'image_count': 2}, {'id': 311, 'image_count': 38}, {'id': 312, 'image_count': 95}, {'id': 313, 'image_count': 1}, {'id': 314, 'image_count': 12}, {'id': 315, 'image_count': 49}, {'id': 316, 'image_count': 5}, {'id': 317, 'image_count': 5}, {'id': 318, 'image_count': 16}, {'id': 319, 'image_count': 216}, {'id': 320, 'image_count': 12}, {'id': 321, 'image_count': 1}, {'id': 322, 'image_count': 54}, {'id': 323, 'image_count': 5}, {'id': 324, 'image_count': 245}, {'id': 325, 'image_count': 12}, {'id': 326, 'image_count': 7}, {'id': 327, 'image_count': 35}, {'id': 328, 'image_count': 36}, {'id': 329, 'image_count': 32}, {'id': 330, 'image_count': 1027}, {'id': 331, 'image_count': 10}, {'id': 332, 'image_count': 12}, {'id': 333, 'image_count': 1}, {'id': 334, 'image_count': 67}, {'id': 335, 'image_count': 71}, {'id': 336, 'image_count': 30}, {'id': 337, 'image_count': 48}, {'id': 338, 'image_count': 249}, {'id': 339, 'image_count': 13}, {'id': 340, 'image_count': 29}, {'id': 341, 'image_count': 14}, {'id': 342, 'image_count': 236}, {'id': 343, 'image_count': 15}, {'id': 344, 'image_count': 1521}, {'id': 345, 'image_count': 25}, {'id': 346, 'image_count': 249}, {'id': 347, 'image_count': 139}, {'id': 348, 'image_count': 2}, {'id': 349, 'image_count': 2}, {'id': 350, 'image_count': 1890}, {'id': 351, 'image_count': 1240}, {'id': 352, 'image_count': 1}, {'id': 353, 'image_count': 9}, {'id': 354, 'image_count': 1}, {'id': 355, 'image_count': 3}, {'id': 356, 'image_count': 11}, {'id': 357, 'image_count': 4}, {'id': 358, 'image_count': 236}, {'id': 359, 'image_count': 44}, {'id': 360, 'image_count': 19}, {'id': 361, 'image_count': 1100}, {'id': 362, 'image_count': 7}, {'id': 363, 'image_count': 69}, {'id': 364, 'image_count': 2}, {'id': 365, 'image_count': 8}, {'id': 366, 'image_count': 5}, {'id': 367, 'image_count': 227}, {'id': 368, 'image_count': 6}, {'id': 369, 'image_count': 106}, {'id': 370, 'image_count': 81}, {'id': 371, 'image_count': 17}, {'id': 372, 'image_count': 134}, {'id': 373, 'image_count': 312}, {'id': 374, 'image_count': 8}, {'id': 375, 'image_count': 271}, {'id': 376, 'image_count': 2}, {'id': 377, 'image_count': 103}, {'id': 378, 'image_count': 1938}, {'id': 379, 'image_count': 574}, {'id': 380, 'image_count': 120}, {'id': 381, 'image_count': 2}, {'id': 382, 'image_count': 2}, {'id': 383, 'image_count': 13}, {'id': 384, 'image_count': 29}, {'id': 385, 'image_count': 1710}, {'id': 386, 'image_count': 66}, {'id': 387, 'image_count': 1008}, {'id': 388, 'image_count': 1}, {'id': 389, 'image_count': 3}, {'id': 390, 'image_count': 1942}, {'id': 391, 'image_count': 19}, {'id': 392, 'image_count': 1488}, {'id': 393, 'image_count': 46}, {'id': 394, 'image_count': 106}, {'id': 395, 'image_count': 115}, {'id': 396, 'image_count': 19}, {'id': 397, 'image_count': 2}, {'id': 398, 'image_count': 1}, {'id': 399, 'image_count': 28}, {'id': 400, 'image_count': 9}, {'id': 401, 'image_count': 192}, {'id': 402, 'image_count': 12}, {'id': 403, 'image_count': 21}, {'id': 404, 'image_count': 247}, {'id': 405, 'image_count': 6}, {'id': 406, 'image_count': 64}, {'id': 407, 'image_count': 7}, {'id': 408, 'image_count': 40}, {'id': 409, 
'image_count': 542}, {'id': 410, 'image_count': 2}, {'id': 411, 'image_count': 1898}, {'id': 412, 'image_count': 36}, {'id': 413, 'image_count': 4}, {'id': 414, 'image_count': 1}, {'id': 415, 'image_count': 191}, {'id': 416, 'image_count': 6}, {'id': 417, 'image_count': 41}, {'id': 418, 'image_count': 39}, {'id': 419, 'image_count': 46}, {'id': 420, 'image_count': 1}, {'id': 421, 'image_count': 1451}, {'id': 422, 'image_count': 1878}, {'id': 423, 'image_count': 11}, {'id': 424, 'image_count': 82}, {'id': 425, 'image_count': 18}, {'id': 426, 'image_count': 1}, {'id': 427, 'image_count': 7}, {'id': 428, 'image_count': 3}, {'id': 429, 'image_count': 575}, {'id': 430, 'image_count': 1907}, {'id': 431, 'image_count': 8}, {'id': 432, 'image_count': 4}, {'id': 433, 'image_count': 32}, {'id': 434, 'image_count': 11}, {'id': 435, 'image_count': 4}, {'id': 436, 'image_count': 54}, {'id': 437, 'image_count': 202}, {'id': 438, 'image_count': 32}, {'id': 439, 'image_count': 3}, {'id': 440, 'image_count': 130}, {'id': 441, 'image_count': 119}, {'id': 442, 'image_count': 141}, {'id': 443, 'image_count': 29}, {'id': 444, 'image_count': 525}, {'id': 445, 'image_count': 1323}, {'id': 446, 'image_count': 2}, {'id': 447, 'image_count': 113}, {'id': 448, 'image_count': 16}, {'id': 449, 'image_count': 7}, {'id': 450, 'image_count': 35}, {'id': 451, 'image_count': 1908}, {'id': 452, 'image_count': 353}, {'id': 453, 'image_count': 18}, {'id': 454, 'image_count': 14}, {'id': 455, 'image_count': 77}, {'id': 456, 'image_count': 8}, {'id': 457, 'image_count': 37}, {'id': 458, 'image_count': 1}, {'id': 459, 'image_count': 346}, {'id': 460, 'image_count': 19}, {'id': 461, 'image_count': 1779}, {'id': 462, 'image_count': 23}, {'id': 463, 'image_count': 25}, {'id': 464, 'image_count': 67}, {'id': 465, 'image_count': 19}, {'id': 466, 'image_count': 28}, {'id': 467, 'image_count': 4}, {'id': 468, 'image_count': 27}, {'id': 469, 'image_count': 1861}, {'id': 470, 'image_count': 11}, {'id': 471, 'image_count': 13}, {'id': 472, 'image_count': 13}, {'id': 473, 'image_count': 32}, {'id': 474, 'image_count': 1767}, {'id': 475, 'image_count': 42}, {'id': 476, 'image_count': 17}, {'id': 477, 'image_count': 128}, {'id': 478, 'image_count': 1}, {'id': 479, 'image_count': 9}, {'id': 480, 'image_count': 10}, {'id': 481, 'image_count': 4}, {'id': 482, 'image_count': 9}, {'id': 483, 'image_count': 18}, {'id': 484, 'image_count': 41}, {'id': 485, 'image_count': 28}, {'id': 486, 'image_count': 3}, {'id': 487, 'image_count': 65}, {'id': 488, 'image_count': 9}, {'id': 489, 'image_count': 23}, {'id': 490, 'image_count': 24}, {'id': 491, 'image_count': 1}, {'id': 492, 'image_count': 2}, {'id': 493, 'image_count': 59}, {'id': 494, 'image_count': 48}, {'id': 495, 'image_count': 17}, {'id': 496, 'image_count': 1877}, {'id': 497, 'image_count': 18}, {'id': 498, 'image_count': 1920}, {'id': 499, 'image_count': 50}, {'id': 500, 'image_count': 1890}, {'id': 501, 'image_count': 99}, {'id': 502, 'image_count': 1530}, {'id': 503, 'image_count': 3}, {'id': 504, 'image_count': 11}, {'id': 505, 'image_count': 19}, {'id': 506, 'image_count': 3}, {'id': 507, 'image_count': 63}, {'id': 508, 'image_count': 5}, {'id': 509, 'image_count': 6}, {'id': 510, 'image_count': 233}, {'id': 511, 'image_count': 54}, {'id': 512, 'image_count': 36}, {'id': 513, 'image_count': 10}, {'id': 514, 'image_count': 124}, {'id': 515, 'image_count': 101}, {'id': 516, 'image_count': 3}, {'id': 517, 'image_count': 363}, {'id': 518, 'image_count': 3}, {'id': 519, 'image_count': 30}, 
{'id': 520, 'image_count': 18}, {'id': 521, 'image_count': 199}, {'id': 522, 'image_count': 97}, {'id': 523, 'image_count': 32}, {'id': 524, 'image_count': 121}, {'id': 525, 'image_count': 16}, {'id': 526, 'image_count': 12}, {'id': 527, 'image_count': 2}, {'id': 528, 'image_count': 214}, {'id': 529, 'image_count': 48}, {'id': 530, 'image_count': 26}, {'id': 531, 'image_count': 13}, {'id': 532, 'image_count': 4}, {'id': 533, 'image_count': 11}, {'id': 534, 'image_count': 123}, {'id': 535, 'image_count': 7}, {'id': 536, 'image_count': 200}, {'id': 537, 'image_count': 91}, {'id': 538, 'image_count': 9}, {'id': 539, 'image_count': 72}, {'id': 540, 'image_count': 1886}, {'id': 541, 'image_count': 4}, {'id': 542, 'image_count': 1}, {'id': 543, 'image_count': 1}, {'id': 544, 'image_count': 1932}, {'id': 545, 'image_count': 4}, {'id': 546, 'image_count': 56}, {'id': 547, 'image_count': 854}, {'id': 548, 'image_count': 755}, {'id': 549, 'image_count': 1843}, {'id': 550, 'image_count': 96}, {'id': 551, 'image_count': 7}, {'id': 552, 'image_count': 74}, {'id': 553, 'image_count': 66}, {'id': 554, 'image_count': 57}, {'id': 555, 'image_count': 44}, {'id': 556, 'image_count': 1905}, {'id': 557, 'image_count': 4}, {'id': 558, 'image_count': 90}, {'id': 559, 'image_count': 1635}, {'id': 560, 'image_count': 8}, {'id': 561, 'image_count': 5}, {'id': 562, 'image_count': 50}, {'id': 563, 'image_count': 545}, {'id': 564, 'image_count': 20}, {'id': 565, 'image_count': 193}, {'id': 566, 'image_count': 285}, {'id': 567, 'image_count': 3}, {'id': 568, 'image_count': 1}, {'id': 569, 'image_count': 1904}, {'id': 570, 'image_count': 294}, {'id': 571, 'image_count': 3}, {'id': 572, 'image_count': 5}, {'id': 573, 'image_count': 24}, {'id': 574, 'image_count': 2}, {'id': 575, 'image_count': 2}, {'id': 576, 'image_count': 16}, {'id': 577, 'image_count': 8}, {'id': 578, 'image_count': 154}, {'id': 579, 'image_count': 66}, {'id': 580, 'image_count': 1}, {'id': 581, 'image_count': 24}, {'id': 582, 'image_count': 1}, {'id': 583, 'image_count': 4}, {'id': 584, 'image_count': 75}, {'id': 585, 'image_count': 6}, {'id': 586, 'image_count': 126}, {'id': 587, 'image_count': 24}, {'id': 588, 'image_count': 22}, {'id': 589, 'image_count': 1872}, {'id': 590, 'image_count': 16}, {'id': 591, 'image_count': 423}, {'id': 592, 'image_count': 1927}, {'id': 593, 'image_count': 38}, {'id': 594, 'image_count': 3}, {'id': 595, 'image_count': 1945}, {'id': 596, 'image_count': 35}, {'id': 597, 'image_count': 1}, {'id': 598, 'image_count': 13}, {'id': 599, 'image_count': 9}, {'id': 600, 'image_count': 14}, {'id': 601, 'image_count': 37}, {'id': 602, 'image_count': 3}, {'id': 603, 'image_count': 4}, {'id': 604, 'image_count': 100}, {'id': 605, 'image_count': 195}, {'id': 606, 'image_count': 1}, {'id': 607, 'image_count': 12}, {'id': 608, 'image_count': 24}, {'id': 609, 'image_count': 489}, {'id': 610, 'image_count': 10}, {'id': 611, 'image_count': 1689}, {'id': 612, 'image_count': 42}, {'id': 613, 'image_count': 81}, {'id': 614, 'image_count': 894}, {'id': 615, 'image_count': 1868}, {'id': 616, 'image_count': 7}, {'id': 617, 'image_count': 1567}, {'id': 618, 'image_count': 10}, {'id': 619, 'image_count': 8}, {'id': 620, 'image_count': 7}, {'id': 621, 'image_count': 629}, {'id': 622, 'image_count': 89}, {'id': 623, 'image_count': 15}, {'id': 624, 'image_count': 134}, {'id': 625, 'image_count': 4}, {'id': 626, 'image_count': 1802}, {'id': 627, 'image_count': 595}, {'id': 628, 'image_count': 1210}, {'id': 629, 'image_count': 48}, {'id': 630, 
'image_count': 418}, {'id': 631, 'image_count': 1846}, {'id': 632, 'image_count': 5}, {'id': 633, 'image_count': 221}, {'id': 634, 'image_count': 10}, {'id': 635, 'image_count': 7}, {'id': 636, 'image_count': 76}, {'id': 637, 'image_count': 22}, {'id': 638, 'image_count': 10}, {'id': 639, 'image_count': 341}, {'id': 640, 'image_count': 1}, {'id': 641, 'image_count': 705}, {'id': 642, 'image_count': 1900}, {'id': 643, 'image_count': 188}, {'id': 644, 'image_count': 227}, {'id': 645, 'image_count': 861}, {'id': 646, 'image_count': 6}, {'id': 647, 'image_count': 115}, {'id': 648, 'image_count': 5}, {'id': 649, 'image_count': 43}, {'id': 650, 'image_count': 14}, {'id': 651, 'image_count': 6}, {'id': 652, 'image_count': 15}, {'id': 653, 'image_count': 1167}, {'id': 654, 'image_count': 15}, {'id': 655, 'image_count': 994}, {'id': 656, 'image_count': 28}, {'id': 657, 'image_count': 2}, {'id': 658, 'image_count': 338}, {'id': 659, 'image_count': 334}, {'id': 660, 'image_count': 15}, {'id': 661, 'image_count': 102}, {'id': 662, 'image_count': 1}, {'id': 663, 'image_count': 8}, {'id': 664, 'image_count': 1}, {'id': 665, 'image_count': 1}, {'id': 666, 'image_count': 28}, {'id': 667, 'image_count': 91}, {'id': 668, 'image_count': 260}, {'id': 669, 'image_count': 131}, {'id': 670, 'image_count': 128}, {'id': 671, 'image_count': 3}, {'id': 672, 'image_count': 10}, {'id': 673, 'image_count': 39}, {'id': 674, 'image_count': 2}, {'id': 675, 'image_count': 925}, {'id': 676, 'image_count': 354}, {'id': 677, 'image_count': 31}, {'id': 678, 'image_count': 10}, {'id': 679, 'image_count': 215}, {'id': 680, 'image_count': 71}, {'id': 681, 'image_count': 43}, {'id': 682, 'image_count': 28}, {'id': 683, 'image_count': 34}, {'id': 684, 'image_count': 16}, {'id': 685, 'image_count': 273}, {'id': 686, 'image_count': 2}, {'id': 687, 'image_count': 999}, {'id': 688, 'image_count': 4}, {'id': 689, 'image_count': 107}, {'id': 690, 'image_count': 2}, {'id': 691, 'image_count': 1}, {'id': 692, 'image_count': 454}, {'id': 693, 'image_count': 9}, {'id': 694, 'image_count': 1901}, {'id': 695, 'image_count': 61}, {'id': 696, 'image_count': 91}, {'id': 697, 'image_count': 46}, {'id': 698, 'image_count': 1402}, {'id': 699, 'image_count': 74}, {'id': 700, 'image_count': 421}, {'id': 701, 'image_count': 226}, {'id': 702, 'image_count': 10}, {'id': 703, 'image_count': 1720}, {'id': 704, 'image_count': 261}, {'id': 705, 'image_count': 1337}, {'id': 706, 'image_count': 293}, {'id': 707, 'image_count': 62}, {'id': 708, 'image_count': 814}, {'id': 709, 'image_count': 407}, {'id': 710, 'image_count': 6}, {'id': 711, 'image_count': 16}, {'id': 712, 'image_count': 7}, {'id': 713, 'image_count': 1791}, {'id': 714, 'image_count': 2}, {'id': 715, 'image_count': 1915}, {'id': 716, 'image_count': 1940}, {'id': 717, 'image_count': 13}, {'id': 718, 'image_count': 16}, {'id': 719, 'image_count': 448}, {'id': 720, 'image_count': 12}, {'id': 721, 'image_count': 18}, {'id': 722, 'image_count': 4}, {'id': 723, 'image_count': 71}, {'id': 724, 'image_count': 189}, {'id': 725, 'image_count': 74}, {'id': 726, 'image_count': 103}, {'id': 727, 'image_count': 3}, {'id': 728, 'image_count': 110}, {'id': 729, 'image_count': 5}, {'id': 730, 'image_count': 9}, {'id': 731, 'image_count': 15}, {'id': 732, 'image_count': 25}, {'id': 733, 'image_count': 7}, {'id': 734, 'image_count': 647}, {'id': 735, 'image_count': 824}, {'id': 736, 'image_count': 100}, {'id': 737, 'image_count': 47}, {'id': 738, 'image_count': 121}, {'id': 739, 'image_count': 731}, {'id': 740, 
'image_count': 73}, {'id': 741, 'image_count': 49}, {'id': 742, 'image_count': 23}, {'id': 743, 'image_count': 4}, {'id': 744, 'image_count': 62}, {'id': 745, 'image_count': 118}, {'id': 746, 'image_count': 99}, {'id': 747, 'image_count': 40}, {'id': 748, 'image_count': 1036}, {'id': 749, 'image_count': 105}, {'id': 750, 'image_count': 21}, {'id': 751, 'image_count': 229}, {'id': 752, 'image_count': 7}, {'id': 753, 'image_count': 72}, {'id': 754, 'image_count': 9}, {'id': 755, 'image_count': 10}, {'id': 756, 'image_count': 328}, {'id': 757, 'image_count': 468}, {'id': 758, 'image_count': 1}, {'id': 759, 'image_count': 2}, {'id': 760, 'image_count': 24}, {'id': 761, 'image_count': 11}, {'id': 762, 'image_count': 72}, {'id': 763, 'image_count': 17}, {'id': 764, 'image_count': 10}, {'id': 765, 'image_count': 17}, {'id': 766, 'image_count': 489}, {'id': 767, 'image_count': 47}, {'id': 768, 'image_count': 93}, {'id': 769, 'image_count': 1}, {'id': 770, 'image_count': 12}, {'id': 771, 'image_count': 228}, {'id': 772, 'image_count': 5}, {'id': 773, 'image_count': 76}, {'id': 774, 'image_count': 71}, {'id': 775, 'image_count': 30}, {'id': 776, 'image_count': 109}, {'id': 777, 'image_count': 14}, {'id': 778, 'image_count': 1}, {'id': 779, 'image_count': 8}, {'id': 780, 'image_count': 26}, {'id': 781, 'image_count': 339}, {'id': 782, 'image_count': 153}, {'id': 783, 'image_count': 2}, {'id': 784, 'image_count': 3}, {'id': 785, 'image_count': 8}, {'id': 786, 'image_count': 47}, {'id': 787, 'image_count': 8}, {'id': 788, 'image_count': 6}, {'id': 789, 'image_count': 116}, {'id': 790, 'image_count': 69}, {'id': 791, 'image_count': 13}, {'id': 792, 'image_count': 6}, {'id': 793, 'image_count': 1928}, {'id': 794, 'image_count': 79}, {'id': 795, 'image_count': 14}, {'id': 796, 'image_count': 7}, {'id': 797, 'image_count': 20}, {'id': 798, 'image_count': 114}, {'id': 799, 'image_count': 221}, {'id': 800, 'image_count': 502}, {'id': 801, 'image_count': 62}, {'id': 802, 'image_count': 87}, {'id': 803, 'image_count': 4}, {'id': 804, 'image_count': 1912}, {'id': 805, 'image_count': 7}, {'id': 806, 'image_count': 186}, {'id': 807, 'image_count': 18}, {'id': 808, 'image_count': 4}, {'id': 809, 'image_count': 3}, {'id': 810, 'image_count': 7}, {'id': 811, 'image_count': 1413}, {'id': 812, 'image_count': 7}, {'id': 813, 'image_count': 12}, {'id': 814, 'image_count': 248}, {'id': 815, 'image_count': 4}, {'id': 816, 'image_count': 1881}, {'id': 817, 'image_count': 529}, {'id': 818, 'image_count': 1932}, {'id': 819, 'image_count': 50}, {'id': 820, 'image_count': 3}, {'id': 821, 'image_count': 28}, {'id': 822, 'image_count': 10}, {'id': 823, 'image_count': 5}, {'id': 824, 'image_count': 5}, {'id': 825, 'image_count': 18}, {'id': 826, 'image_count': 14}, {'id': 827, 'image_count': 1890}, {'id': 828, 'image_count': 660}, {'id': 829, 'image_count': 8}, {'id': 830, 'image_count': 25}, {'id': 831, 'image_count': 10}, {'id': 832, 'image_count': 218}, {'id': 833, 'image_count': 36}, {'id': 834, 'image_count': 16}, {'id': 835, 'image_count': 808}, {'id': 836, 'image_count': 479}, {'id': 837, 'image_count': 1404}, {'id': 838, 'image_count': 307}, {'id': 839, 'image_count': 57}, {'id': 840, 'image_count': 28}, {'id': 841, 'image_count': 80}, {'id': 842, 'image_count': 11}, {'id': 843, 'image_count': 92}, {'id': 844, 'image_count': 20}, {'id': 845, 'image_count': 194}, {'id': 846, 'image_count': 23}, {'id': 847, 'image_count': 52}, {'id': 848, 'image_count': 673}, {'id': 849, 'image_count': 2}, {'id': 850, 'image_count': 2}, 
{'id': 851, 'image_count': 1}, {'id': 852, 'image_count': 2}, {'id': 853, 'image_count': 8}, {'id': 854, 'image_count': 80}, {'id': 855, 'image_count': 3}, {'id': 856, 'image_count': 3}, {'id': 857, 'image_count': 15}, {'id': 858, 'image_count': 2}, {'id': 859, 'image_count': 10}, {'id': 860, 'image_count': 386}, {'id': 861, 'image_count': 65}, {'id': 862, 'image_count': 3}, {'id': 863, 'image_count': 35}, {'id': 864, 'image_count': 5}, {'id': 865, 'image_count': 180}, {'id': 866, 'image_count': 99}, {'id': 867, 'image_count': 49}, {'id': 868, 'image_count': 28}, {'id': 869, 'image_count': 1}, {'id': 870, 'image_count': 52}, {'id': 871, 'image_count': 36}, {'id': 872, 'image_count': 70}, {'id': 873, 'image_count': 6}, {'id': 874, 'image_count': 29}, {'id': 875, 'image_count': 24}, {'id': 876, 'image_count': 1115}, {'id': 877, 'image_count': 61}, {'id': 878, 'image_count': 18}, {'id': 879, 'image_count': 18}, {'id': 880, 'image_count': 665}, {'id': 881, 'image_count': 1096}, {'id': 882, 'image_count': 29}, {'id': 883, 'image_count': 8}, {'id': 884, 'image_count': 14}, {'id': 885, 'image_count': 1622}, {'id': 886, 'image_count': 2}, {'id': 887, 'image_count': 3}, {'id': 888, 'image_count': 32}, {'id': 889, 'image_count': 55}, {'id': 890, 'image_count': 1}, {'id': 891, 'image_count': 10}, {'id': 892, 'image_count': 10}, {'id': 893, 'image_count': 47}, {'id': 894, 'image_count': 3}, {'id': 895, 'image_count': 29}, {'id': 896, 'image_count': 342}, {'id': 897, 'image_count': 25}, {'id': 898, 'image_count': 1469}, {'id': 899, 'image_count': 521}, {'id': 900, 'image_count': 347}, {'id': 901, 'image_count': 35}, {'id': 902, 'image_count': 7}, {'id': 903, 'image_count': 207}, {'id': 904, 'image_count': 108}, {'id': 905, 'image_count': 2}, {'id': 906, 'image_count': 34}, {'id': 907, 'image_count': 12}, {'id': 908, 'image_count': 10}, {'id': 909, 'image_count': 13}, {'id': 910, 'image_count': 361}, {'id': 911, 'image_count': 1023}, {'id': 912, 'image_count': 782}, {'id': 913, 'image_count': 2}, {'id': 914, 'image_count': 5}, {'id': 915, 'image_count': 247}, {'id': 916, 'image_count': 221}, {'id': 917, 'image_count': 4}, {'id': 918, 'image_count': 8}, {'id': 919, 'image_count': 158}, {'id': 920, 'image_count': 3}, {'id': 921, 'image_count': 752}, {'id': 922, 'image_count': 64}, {'id': 923, 'image_count': 707}, {'id': 924, 'image_count': 143}, {'id': 925, 'image_count': 1}, {'id': 926, 'image_count': 49}, {'id': 927, 'image_count': 126}, {'id': 928, 'image_count': 76}, {'id': 929, 'image_count': 11}, {'id': 930, 'image_count': 11}, {'id': 931, 'image_count': 4}, {'id': 932, 'image_count': 39}, {'id': 933, 'image_count': 11}, {'id': 934, 'image_count': 13}, {'id': 935, 'image_count': 91}, {'id': 936, 'image_count': 14}, {'id': 937, 'image_count': 5}, {'id': 938, 'image_count': 3}, {'id': 939, 'image_count': 10}, {'id': 940, 'image_count': 18}, {'id': 941, 'image_count': 9}, {'id': 942, 'image_count': 6}, {'id': 943, 'image_count': 951}, {'id': 944, 'image_count': 2}, {'id': 945, 'image_count': 1}, {'id': 946, 'image_count': 19}, {'id': 947, 'image_count': 1942}, {'id': 948, 'image_count': 1916}, {'id': 949, 'image_count': 139}, {'id': 950, 'image_count': 43}, {'id': 951, 'image_count': 1969}, {'id': 952, 'image_count': 5}, {'id': 953, 'image_count': 134}, {'id': 954, 'image_count': 74}, {'id': 955, 'image_count': 381}, {'id': 956, 'image_count': 1}, {'id': 957, 'image_count': 381}, {'id': 958, 'image_count': 6}, {'id': 959, 'image_count': 1826}, {'id': 960, 'image_count': 28}, {'id': 961, 'image_count': 
1635}, {'id': 962, 'image_count': 1967}, {'id': 963, 'image_count': 16}, {'id': 964, 'image_count': 1926}, {'id': 965, 'image_count': 1789}, {'id': 966, 'image_count': 401}, {'id': 967, 'image_count': 1968}, {'id': 968, 'image_count': 1167}, {'id': 969, 'image_count': 1}, {'id': 970, 'image_count': 56}, {'id': 971, 'image_count': 17}, {'id': 972, 'image_count': 1}, {'id': 973, 'image_count': 58}, {'id': 974, 'image_count': 9}, {'id': 975, 'image_count': 8}, {'id': 976, 'image_count': 1124}, {'id': 977, 'image_count': 31}, {'id': 978, 'image_count': 16}, {'id': 979, 'image_count': 491}, {'id': 980, 'image_count': 432}, {'id': 981, 'image_count': 1945}, {'id': 982, 'image_count': 1899}, {'id': 983, 'image_count': 5}, {'id': 984, 'image_count': 28}, {'id': 985, 'image_count': 7}, {'id': 986, 'image_count': 146}, {'id': 987, 'image_count': 1}, {'id': 988, 'image_count': 25}, {'id': 989, 'image_count': 22}, {'id': 990, 'image_count': 1}, {'id': 991, 'image_count': 10}, {'id': 992, 'image_count': 9}, {'id': 993, 'image_count': 308}, {'id': 994, 'image_count': 4}, {'id': 995, 'image_count': 1969}, {'id': 996, 'image_count': 45}, {'id': 997, 'image_count': 12}, {'id': 998, 'image_count': 1}, {'id': 999, 'image_count': 85}, {'id': 1000, 'image_count': 1127}, {'id': 1001, 'image_count': 11}, {'id': 1002, 'image_count': 60}, {'id': 1003, 'image_count': 1}, {'id': 1004, 'image_count': 16}, {'id': 1005, 'image_count': 1}, {'id': 1006, 'image_count': 65}, {'id': 1007, 'image_count': 13}, {'id': 1008, 'image_count': 655}, {'id': 1009, 'image_count': 51}, {'id': 1010, 'image_count': 1}, {'id': 1011, 'image_count': 673}, {'id': 1012, 'image_count': 5}, {'id': 1013, 'image_count': 36}, {'id': 1014, 'image_count': 54}, {'id': 1015, 'image_count': 5}, {'id': 1016, 'image_count': 8}, {'id': 1017, 'image_count': 305}, {'id': 1018, 'image_count': 297}, {'id': 1019, 'image_count': 1053}, {'id': 1020, 'image_count': 223}, {'id': 1021, 'image_count': 1037}, {'id': 1022, 'image_count': 63}, {'id': 1023, 'image_count': 1881}, {'id': 1024, 'image_count': 507}, {'id': 1025, 'image_count': 333}, {'id': 1026, 'image_count': 1911}, {'id': 1027, 'image_count': 1765}, {'id': 1028, 'image_count': 1}, {'id': 1029, 'image_count': 5}, {'id': 1030, 'image_count': 1}, {'id': 1031, 'image_count': 9}, {'id': 1032, 'image_count': 2}, {'id': 1033, 'image_count': 151}, {'id': 1034, 'image_count': 82}, {'id': 1035, 'image_count': 1931}, {'id': 1036, 'image_count': 41}, {'id': 1037, 'image_count': 1895}, {'id': 1038, 'image_count': 24}, {'id': 1039, 'image_count': 22}, {'id': 1040, 'image_count': 35}, {'id': 1041, 'image_count': 69}, {'id': 1042, 'image_count': 962}, {'id': 1043, 'image_count': 588}, {'id': 1044, 'image_count': 21}, {'id': 1045, 'image_count': 825}, {'id': 1046, 'image_count': 52}, {'id': 1047, 'image_count': 5}, {'id': 1048, 'image_count': 5}, {'id': 1049, 'image_count': 5}, {'id': 1050, 'image_count': 1860}, {'id': 1051, 'image_count': 56}, {'id': 1052, 'image_count': 1582}, {'id': 1053, 'image_count': 7}, {'id': 1054, 'image_count': 2}, {'id': 1055, 'image_count': 1562}, {'id': 1056, 'image_count': 1885}, {'id': 1057, 'image_count': 1}, {'id': 1058, 'image_count': 5}, {'id': 1059, 'image_count': 137}, {'id': 1060, 'image_count': 1094}, {'id': 1061, 'image_count': 134}, {'id': 1062, 'image_count': 29}, {'id': 1063, 'image_count': 22}, {'id': 1064, 'image_count': 522}, {'id': 1065, 'image_count': 50}, {'id': 1066, 'image_count': 68}, {'id': 1067, 'image_count': 16}, {'id': 1068, 'image_count': 40}, {'id': 1069, 
'image_count': 35}, {'id': 1070, 'image_count': 135}, {'id': 1071, 'image_count': 1413}, {'id': 1072, 'image_count': 772}, {'id': 1073, 'image_count': 50}, {'id': 1074, 'image_count': 1015}, {'id': 1075, 'image_count': 1}, {'id': 1076, 'image_count': 65}, {'id': 1077, 'image_count': 1900}, {'id': 1078, 'image_count': 1302}, {'id': 1079, 'image_count': 1977}, {'id': 1080, 'image_count': 2}, {'id': 1081, 'image_count': 29}, {'id': 1082, 'image_count': 36}, {'id': 1083, 'image_count': 138}, {'id': 1084, 'image_count': 4}, {'id': 1085, 'image_count': 67}, {'id': 1086, 'image_count': 26}, {'id': 1087, 'image_count': 25}, {'id': 1088, 'image_count': 33}, {'id': 1089, 'image_count': 37}, {'id': 1090, 'image_count': 50}, {'id': 1091, 'image_count': 270}, {'id': 1092, 'image_count': 12}, {'id': 1093, 'image_count': 316}, {'id': 1094, 'image_count': 41}, {'id': 1095, 'image_count': 224}, {'id': 1096, 'image_count': 105}, {'id': 1097, 'image_count': 1925}, {'id': 1098, 'image_count': 1021}, {'id': 1099, 'image_count': 1213}, {'id': 1100, 'image_count': 172}, {'id': 1101, 'image_count': 28}, {'id': 1102, 'image_count': 745}, {'id': 1103, 'image_count': 187}, {'id': 1104, 'image_count': 147}, {'id': 1105, 'image_count': 136}, {'id': 1106, 'image_count': 34}, {'id': 1107, 'image_count': 41}, {'id': 1108, 'image_count': 636}, {'id': 1109, 'image_count': 570}, {'id': 1110, 'image_count': 1149}, {'id': 1111, 'image_count': 61}, {'id': 1112, 'image_count': 1890}, {'id': 1113, 'image_count': 18}, {'id': 1114, 'image_count': 143}, {'id': 1115, 'image_count': 1517}, {'id': 1116, 'image_count': 7}, {'id': 1117, 'image_count': 943}, {'id': 1118, 'image_count': 6}, {'id': 1119, 'image_count': 1}, {'id': 1120, 'image_count': 11}, {'id': 1121, 'image_count': 101}, {'id': 1122, 'image_count': 1909}, {'id': 1123, 'image_count': 800}, {'id': 1124, 'image_count': 1}, {'id': 1125, 'image_count': 44}, {'id': 1126, 'image_count': 3}, {'id': 1127, 'image_count': 44}, {'id': 1128, 'image_count': 31}, {'id': 1129, 'image_count': 7}, {'id': 1130, 'image_count': 20}, {'id': 1131, 'image_count': 11}, {'id': 1132, 'image_count': 13}, {'id': 1133, 'image_count': 1924}, {'id': 1134, 'image_count': 113}, {'id': 1135, 'image_count': 2}, {'id': 1136, 'image_count': 139}, {'id': 1137, 'image_count': 12}, {'id': 1138, 'image_count': 37}, {'id': 1139, 'image_count': 1866}, {'id': 1140, 'image_count': 47}, {'id': 1141, 'image_count': 1468}, {'id': 1142, 'image_count': 729}, {'id': 1143, 'image_count': 24}, {'id': 1144, 'image_count': 1}, {'id': 1145, 'image_count': 10}, {'id': 1146, 'image_count': 3}, {'id': 1147, 'image_count': 14}, {'id': 1148, 'image_count': 4}, {'id': 1149, 'image_count': 29}, {'id': 1150, 'image_count': 4}, {'id': 1151, 'image_count': 70}, {'id': 1152, 'image_count': 46}, {'id': 1153, 'image_count': 14}, {'id': 1154, 'image_count': 48}, {'id': 1155, 'image_count': 1855}, {'id': 1156, 'image_count': 113}, {'id': 1157, 'image_count': 1}, {'id': 1158, 'image_count': 1}, {'id': 1159, 'image_count': 10}, {'id': 1160, 'image_count': 54}, {'id': 1161, 'image_count': 1923}, {'id': 1162, 'image_count': 630}, {'id': 1163, 'image_count': 31}, {'id': 1164, 'image_count': 69}, {'id': 1165, 'image_count': 7}, {'id': 1166, 'image_count': 11}, {'id': 1167, 'image_count': 1}, {'id': 1168, 'image_count': 30}, {'id': 1169, 'image_count': 50}, {'id': 1170, 'image_count': 45}, {'id': 1171, 'image_count': 28}, {'id': 1172, 'image_count': 114}, {'id': 1173, 'image_count': 193}, {'id': 1174, 'image_count': 21}, {'id': 1175, 'image_count': 
91}, {'id': 1176, 'image_count': 31}, {'id': 1177, 'image_count': 1469}, {'id': 1178, 'image_count': 1924}, {'id': 1179, 'image_count': 87}, {'id': 1180, 'image_count': 77}, {'id': 1181, 'image_count': 11}, {'id': 1182, 'image_count': 47}, {'id': 1183, 'image_count': 21}, {'id': 1184, 'image_count': 47}, {'id': 1185, 'image_count': 70}, {'id': 1186, 'image_count': 1838}, {'id': 1187, 'image_count': 19}, {'id': 1188, 'image_count': 531}, {'id': 1189, 'image_count': 11}, {'id': 1190, 'image_count': 941}, {'id': 1191, 'image_count': 113}, {'id': 1192, 'image_count': 26}, {'id': 1193, 'image_count': 5}, {'id': 1194, 'image_count': 56}, {'id': 1195, 'image_count': 73}, {'id': 1196, 'image_count': 32}, {'id': 1197, 'image_count': 128}, {'id': 1198, 'image_count': 623}, {'id': 1199, 'image_count': 12}, {'id': 1200, 'image_count': 52}, {'id': 1201, 'image_count': 11}, {'id': 1202, 'image_count': 1674}, {'id': 1203, 'image_count': 81}] # noqa -# fmt: on diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/datasets/register_ade20k_panoptic.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/datasets/register_ade20k_panoptic.py deleted file mode 100644 index 05094a617b0103b0f0250eb32e555df994e5331b..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/datasets/register_ade20k_panoptic.py +++ /dev/null @@ -1,394 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/data/datasets/register_ade20k_panoptic.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -import json -import os - -from annotator.oneformer.detectron2.data import DatasetCatalog, MetadataCatalog -from annotator.oneformer.detectron2.utils.file_io import PathManager - -ADE20K_150_CATEGORIES = [ - {"color": [120, 120, 120], "id": 0, "isthing": 0, "name": "wall"}, - {"color": [180, 120, 120], "id": 1, "isthing": 0, "name": "building"}, - {"color": [6, 230, 230], "id": 2, "isthing": 0, "name": "sky"}, - {"color": [80, 50, 50], "id": 3, "isthing": 0, "name": "floor"}, - {"color": [4, 200, 3], "id": 4, "isthing": 0, "name": "tree"}, - {"color": [120, 120, 80], "id": 5, "isthing": 0, "name": "ceiling"}, - {"color": [140, 140, 140], "id": 6, "isthing": 0, "name": "road, route"}, - {"color": [204, 5, 255], "id": 7, "isthing": 1, "name": "bed"}, - {"color": [230, 230, 230], "id": 8, "isthing": 1, "name": "window "}, - {"color": [4, 250, 7], "id": 9, "isthing": 0, "name": "grass"}, - {"color": [224, 5, 255], "id": 10, "isthing": 1, "name": "cabinet"}, - {"color": [235, 255, 7], "id": 11, "isthing": 0, "name": "sidewalk, pavement"}, - {"color": [150, 5, 61], "id": 12, "isthing": 1, "name": "person"}, - {"color": [120, 120, 70], "id": 13, "isthing": 0, "name": "earth, ground"}, - {"color": [8, 255, 51], "id": 14, "isthing": 1, "name": "door"}, - {"color": [255, 6, 82], "id": 15, "isthing": 1, "name": "table"}, - {"color": [143, 255, 140], "id": 16, "isthing": 0, "name": "mountain, mount"}, - {"color": [204, 255, 4], "id": 17, "isthing": 0, "name": "plant"}, - {"color": [255, 51, 7], "id": 18, "isthing": 1, "name": "curtain"}, - {"color": [204, 70, 3], "id": 19, "isthing": 1, "name": "chair"}, - {"color": [0, 102, 200], "id": 20, "isthing": 1, "name": "car"}, - {"color": [61, 230, 250], "id": 21, "isthing": 0, "name": "water"}, - 
{"color": [255, 6, 51], "id": 22, "isthing": 1, "name": "painting, picture"}, - {"color": [11, 102, 255], "id": 23, "isthing": 1, "name": "sofa"}, - {"color": [255, 7, 71], "id": 24, "isthing": 1, "name": "shelf"}, - {"color": [255, 9, 224], "id": 25, "isthing": 0, "name": "house"}, - {"color": [9, 7, 230], "id": 26, "isthing": 0, "name": "sea"}, - {"color": [220, 220, 220], "id": 27, "isthing": 1, "name": "mirror"}, - {"color": [255, 9, 92], "id": 28, "isthing": 0, "name": "rug"}, - {"color": [112, 9, 255], "id": 29, "isthing": 0, "name": "field"}, - {"color": [8, 255, 214], "id": 30, "isthing": 1, "name": "armchair"}, - {"color": [7, 255, 224], "id": 31, "isthing": 1, "name": "seat"}, - {"color": [255, 184, 6], "id": 32, "isthing": 1, "name": "fence"}, - {"color": [10, 255, 71], "id": 33, "isthing": 1, "name": "desk"}, - {"color": [255, 41, 10], "id": 34, "isthing": 0, "name": "rock, stone"}, - {"color": [7, 255, 255], "id": 35, "isthing": 1, "name": "wardrobe, closet, press"}, - {"color": [224, 255, 8], "id": 36, "isthing": 1, "name": "lamp"}, - {"color": [102, 8, 255], "id": 37, "isthing": 1, "name": "tub"}, - {"color": [255, 61, 6], "id": 38, "isthing": 1, "name": "rail"}, - {"color": [255, 194, 7], "id": 39, "isthing": 1, "name": "cushion"}, - {"color": [255, 122, 8], "id": 40, "isthing": 0, "name": "base, pedestal, stand"}, - {"color": [0, 255, 20], "id": 41, "isthing": 1, "name": "box"}, - {"color": [255, 8, 41], "id": 42, "isthing": 1, "name": "column, pillar"}, - {"color": [255, 5, 153], "id": 43, "isthing": 1, "name": "signboard, sign"}, - { - "color": [6, 51, 255], - "id": 44, - "isthing": 1, - "name": "chest of drawers, chest, bureau, dresser", - }, - {"color": [235, 12, 255], "id": 45, "isthing": 1, "name": "counter"}, - {"color": [160, 150, 20], "id": 46, "isthing": 0, "name": "sand"}, - {"color": [0, 163, 255], "id": 47, "isthing": 1, "name": "sink"}, - {"color": [140, 140, 140], "id": 48, "isthing": 0, "name": "skyscraper"}, - {"color": [250, 10, 15], "id": 49, "isthing": 1, "name": "fireplace"}, - {"color": [20, 255, 0], "id": 50, "isthing": 1, "name": "refrigerator, icebox"}, - {"color": [31, 255, 0], "id": 51, "isthing": 0, "name": "grandstand, covered stand"}, - {"color": [255, 31, 0], "id": 52, "isthing": 0, "name": "path"}, - {"color": [255, 224, 0], "id": 53, "isthing": 1, "name": "stairs"}, - {"color": [153, 255, 0], "id": 54, "isthing": 0, "name": "runway"}, - {"color": [0, 0, 255], "id": 55, "isthing": 1, "name": "case, display case, showcase, vitrine"}, - { - "color": [255, 71, 0], - "id": 56, - "isthing": 1, - "name": "pool table, billiard table, snooker table", - }, - {"color": [0, 235, 255], "id": 57, "isthing": 1, "name": "pillow"}, - {"color": [0, 173, 255], "id": 58, "isthing": 1, "name": "screen door, screen"}, - {"color": [31, 0, 255], "id": 59, "isthing": 0, "name": "stairway, staircase"}, - {"color": [11, 200, 200], "id": 60, "isthing": 0, "name": "river"}, - {"color": [255, 82, 0], "id": 61, "isthing": 0, "name": "bridge, span"}, - {"color": [0, 255, 245], "id": 62, "isthing": 1, "name": "bookcase"}, - {"color": [0, 61, 255], "id": 63, "isthing": 0, "name": "blind, screen"}, - {"color": [0, 255, 112], "id": 64, "isthing": 1, "name": "coffee table"}, - { - "color": [0, 255, 133], - "id": 65, - "isthing": 1, - "name": "toilet, can, commode, crapper, pot, potty, stool, throne", - }, - {"color": [255, 0, 0], "id": 66, "isthing": 1, "name": "flower"}, - {"color": [255, 163, 0], "id": 67, "isthing": 1, "name": "book"}, - {"color": [255, 102, 0], "id": 68, 
"isthing": 0, "name": "hill"}, - {"color": [194, 255, 0], "id": 69, "isthing": 1, "name": "bench"}, - {"color": [0, 143, 255], "id": 70, "isthing": 1, "name": "countertop"}, - {"color": [51, 255, 0], "id": 71, "isthing": 1, "name": "stove"}, - {"color": [0, 82, 255], "id": 72, "isthing": 1, "name": "palm, palm tree"}, - {"color": [0, 255, 41], "id": 73, "isthing": 1, "name": "kitchen island"}, - {"color": [0, 255, 173], "id": 74, "isthing": 1, "name": "computer"}, - {"color": [10, 0, 255], "id": 75, "isthing": 1, "name": "swivel chair"}, - {"color": [173, 255, 0], "id": 76, "isthing": 1, "name": "boat"}, - {"color": [0, 255, 153], "id": 77, "isthing": 0, "name": "bar"}, - {"color": [255, 92, 0], "id": 78, "isthing": 1, "name": "arcade machine"}, - {"color": [255, 0, 255], "id": 79, "isthing": 0, "name": "hovel, hut, hutch, shack, shanty"}, - {"color": [255, 0, 245], "id": 80, "isthing": 1, "name": "bus"}, - {"color": [255, 0, 102], "id": 81, "isthing": 1, "name": "towel"}, - {"color": [255, 173, 0], "id": 82, "isthing": 1, "name": "light"}, - {"color": [255, 0, 20], "id": 83, "isthing": 1, "name": "truck"}, - {"color": [255, 184, 184], "id": 84, "isthing": 0, "name": "tower"}, - {"color": [0, 31, 255], "id": 85, "isthing": 1, "name": "chandelier"}, - {"color": [0, 255, 61], "id": 86, "isthing": 1, "name": "awning, sunshade, sunblind"}, - {"color": [0, 71, 255], "id": 87, "isthing": 1, "name": "street lamp"}, - {"color": [255, 0, 204], "id": 88, "isthing": 1, "name": "booth"}, - {"color": [0, 255, 194], "id": 89, "isthing": 1, "name": "tv"}, - {"color": [0, 255, 82], "id": 90, "isthing": 1, "name": "plane"}, - {"color": [0, 10, 255], "id": 91, "isthing": 0, "name": "dirt track"}, - {"color": [0, 112, 255], "id": 92, "isthing": 1, "name": "clothes"}, - {"color": [51, 0, 255], "id": 93, "isthing": 1, "name": "pole"}, - {"color": [0, 194, 255], "id": 94, "isthing": 0, "name": "land, ground, soil"}, - { - "color": [0, 122, 255], - "id": 95, - "isthing": 1, - "name": "bannister, banister, balustrade, balusters, handrail", - }, - { - "color": [0, 255, 163], - "id": 96, - "isthing": 0, - "name": "escalator, moving staircase, moving stairway", - }, - { - "color": [255, 153, 0], - "id": 97, - "isthing": 1, - "name": "ottoman, pouf, pouffe, puff, hassock", - }, - {"color": [0, 255, 10], "id": 98, "isthing": 1, "name": "bottle"}, - {"color": [255, 112, 0], "id": 99, "isthing": 0, "name": "buffet, counter, sideboard"}, - { - "color": [143, 255, 0], - "id": 100, - "isthing": 0, - "name": "poster, posting, placard, notice, bill, card", - }, - {"color": [82, 0, 255], "id": 101, "isthing": 0, "name": "stage"}, - {"color": [163, 255, 0], "id": 102, "isthing": 1, "name": "van"}, - {"color": [255, 235, 0], "id": 103, "isthing": 1, "name": "ship"}, - {"color": [8, 184, 170], "id": 104, "isthing": 1, "name": "fountain"}, - { - "color": [133, 0, 255], - "id": 105, - "isthing": 0, - "name": "conveyer belt, conveyor belt, conveyer, conveyor, transporter", - }, - {"color": [0, 255, 92], "id": 106, "isthing": 0, "name": "canopy"}, - { - "color": [184, 0, 255], - "id": 107, - "isthing": 1, - "name": "washer, automatic washer, washing machine", - }, - {"color": [255, 0, 31], "id": 108, "isthing": 1, "name": "plaything, toy"}, - {"color": [0, 184, 255], "id": 109, "isthing": 0, "name": "pool"}, - {"color": [0, 214, 255], "id": 110, "isthing": 1, "name": "stool"}, - {"color": [255, 0, 112], "id": 111, "isthing": 1, "name": "barrel, cask"}, - {"color": [92, 255, 0], "id": 112, "isthing": 1, "name": "basket, handbasket"}, 
- {"color": [0, 224, 255], "id": 113, "isthing": 0, "name": "falls"}, - {"color": [112, 224, 255], "id": 114, "isthing": 0, "name": "tent"}, - {"color": [70, 184, 160], "id": 115, "isthing": 1, "name": "bag"}, - {"color": [163, 0, 255], "id": 116, "isthing": 1, "name": "minibike, motorbike"}, - {"color": [153, 0, 255], "id": 117, "isthing": 0, "name": "cradle"}, - {"color": [71, 255, 0], "id": 118, "isthing": 1, "name": "oven"}, - {"color": [255, 0, 163], "id": 119, "isthing": 1, "name": "ball"}, - {"color": [255, 204, 0], "id": 120, "isthing": 1, "name": "food, solid food"}, - {"color": [255, 0, 143], "id": 121, "isthing": 1, "name": "step, stair"}, - {"color": [0, 255, 235], "id": 122, "isthing": 0, "name": "tank, storage tank"}, - {"color": [133, 255, 0], "id": 123, "isthing": 1, "name": "trade name"}, - {"color": [255, 0, 235], "id": 124, "isthing": 1, "name": "microwave"}, - {"color": [245, 0, 255], "id": 125, "isthing": 1, "name": "pot"}, - {"color": [255, 0, 122], "id": 126, "isthing": 1, "name": "animal"}, - {"color": [255, 245, 0], "id": 127, "isthing": 1, "name": "bicycle"}, - {"color": [10, 190, 212], "id": 128, "isthing": 0, "name": "lake"}, - {"color": [214, 255, 0], "id": 129, "isthing": 1, "name": "dishwasher"}, - {"color": [0, 204, 255], "id": 130, "isthing": 1, "name": "screen"}, - {"color": [20, 0, 255], "id": 131, "isthing": 0, "name": "blanket, cover"}, - {"color": [255, 255, 0], "id": 132, "isthing": 1, "name": "sculpture"}, - {"color": [0, 153, 255], "id": 133, "isthing": 1, "name": "hood, exhaust hood"}, - {"color": [0, 41, 255], "id": 134, "isthing": 1, "name": "sconce"}, - {"color": [0, 255, 204], "id": 135, "isthing": 1, "name": "vase"}, - {"color": [41, 0, 255], "id": 136, "isthing": 1, "name": "traffic light"}, - {"color": [41, 255, 0], "id": 137, "isthing": 1, "name": "tray"}, - {"color": [173, 0, 255], "id": 138, "isthing": 1, "name": "trash can"}, - {"color": [0, 245, 255], "id": 139, "isthing": 1, "name": "fan"}, - {"color": [71, 0, 255], "id": 140, "isthing": 0, "name": "pier"}, - {"color": [122, 0, 255], "id": 141, "isthing": 0, "name": "crt screen"}, - {"color": [0, 255, 184], "id": 142, "isthing": 1, "name": "plate"}, - {"color": [0, 92, 255], "id": 143, "isthing": 1, "name": "monitor"}, - {"color": [184, 255, 0], "id": 144, "isthing": 1, "name": "bulletin board"}, - {"color": [0, 133, 255], "id": 145, "isthing": 0, "name": "shower"}, - {"color": [255, 214, 0], "id": 146, "isthing": 1, "name": "radiator"}, - {"color": [25, 194, 194], "id": 147, "isthing": 1, "name": "glass, drinking glass"}, - {"color": [102, 255, 0], "id": 148, "isthing": 1, "name": "clock"}, - {"color": [92, 0, 255], "id": 149, "isthing": 1, "name": "flag"}, -] - -ADE20k_COLORS = [k["color"] for k in ADE20K_150_CATEGORIES] - -MetadataCatalog.get("ade20k_sem_seg_train").set( - stuff_colors=ADE20k_COLORS[:], -) - -MetadataCatalog.get("ade20k_sem_seg_val").set( - stuff_colors=ADE20k_COLORS[:], -) - - -def load_ade20k_panoptic_json(json_file, image_dir, gt_dir, semseg_dir, meta): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/coco/train2017". - gt_dir (str): path to the raw annotations. e.g., "~/coco/panoptic_train2017". - json_file (str): path to the json file. e.g., "~/coco/annotations/panoptic_train2017.json". - Returns: - list[dict]: a list of dicts in Detectron2 standard format. 
(See - `Using Custom Datasets `_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = True - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = False - return segment_info - - with PathManager.open(json_file) as f: - json_info = json.load(f) - - ret = [] - for ann in json_info["annotations"]: - image_id = ann["image_id"] - # TODO: currently we assume image and label has the same filename but - # different extension, and images have extension ".jpg" for COCO. Need - # to make image extension a user-provided argument if we extend this - # function to support other COCO-like datasets. - image_file = os.path.join(image_dir, os.path.splitext(ann["file_name"])[0] + ".jpg") - label_file = os.path.join(gt_dir, ann["file_name"]) - sem_label_file = os.path.join(semseg_dir, ann["file_name"]) - segments_info = [_convert_category_id(x, meta) for x in ann["segments_info"]] - ret.append( - { - "file_name": image_file, - "image_id": image_id, - "pan_seg_file_name": label_file, - "sem_seg_file_name": sem_label_file, - "segments_info": segments_info, - } - ) - assert len(ret), f"No images found in {image_dir}!" - assert PathManager.isfile(ret[0]["file_name"]), ret[0]["file_name"] - assert PathManager.isfile(ret[0]["pan_seg_file_name"]), ret[0]["pan_seg_file_name"] - assert PathManager.isfile(ret[0]["sem_seg_file_name"]), ret[0]["sem_seg_file_name"] - return ret - - -def register_ade20k_panoptic( - name, metadata, image_root, panoptic_root, semantic_root, panoptic_json, instances_json=None, -): - """ - Register a "standard" version of ADE20k panoptic segmentation dataset named `name`. - The dictionaries in this registered dataset follows detectron2's standard format. - Hence it's called "standard". - Args: - name (str): the name that identifies a dataset, - e.g. "ade20k_panoptic_train" - metadata (dict): extra metadata associated with this dataset. - image_root (str): directory which contains all the images - panoptic_root (str): directory which contains panoptic annotation images in COCO format - panoptic_json (str): path to the json panoptic annotation file in COCO format - sem_seg_root (none): not used, to be consistent with - `register_coco_panoptic_separated`. 
- instances_json (str): path to the json instance annotation file - """ - panoptic_name = name - DatasetCatalog.register( - panoptic_name, - lambda: load_ade20k_panoptic_json( - panoptic_json, image_root, panoptic_root, semantic_root, metadata - ), - ) - MetadataCatalog.get(panoptic_name).set( - panoptic_root=panoptic_root, - image_root=image_root, - panoptic_json=panoptic_json, - json_file=instances_json, - evaluator_type="ade20k_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **metadata, - ) - - -_PREDEFINED_SPLITS_ADE20K_PANOPTIC = { - "ade20k_panoptic_train": ( - "ADEChallengeData2016/images/training", - "ADEChallengeData2016/ade20k_panoptic_train", - "ADEChallengeData2016/ade20k_panoptic_train.json", - "ADEChallengeData2016/annotations_detectron2/training", - "ADEChallengeData2016/ade20k_instance_train.json", - ), - "ade20k_panoptic_val": ( - "ADEChallengeData2016/images/validation", - "ADEChallengeData2016/ade20k_panoptic_val", - "ADEChallengeData2016/ade20k_panoptic_val.json", - "ADEChallengeData2016/annotations_detectron2/validation", - "ADEChallengeData2016/ade20k_instance_val.json", - ), -} - - -def get_metadata(): - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. - thing_classes = [k["name"] for k in ADE20K_150_CATEGORIES if k["isthing"] == 1] - thing_colors = [k["color"] for k in ADE20K_150_CATEGORIES if k["isthing"] == 1] - stuff_classes = [k["name"] for k in ADE20K_150_CATEGORIES] - stuff_colors = [k["color"] for k in ADE20K_150_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # Convert category id for training: - # category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. - # - contiguous category id: [0, #classes), in order to train the linear - # softmax classifier. - thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for i, cat in enumerate(ADE20K_150_CATEGORIES): - if cat["isthing"]: - thing_dataset_id_to_contiguous_id[cat["id"]] = i - # else: - # stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - # in order to use sem_seg evaluator - stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - return meta - - -def register_all_ade20k_panoptic(root): - metadata = get_metadata() - for ( - prefix, - (image_root, panoptic_root, panoptic_json, semantic_root, instance_json), - ) in _PREDEFINED_SPLITS_ADE20K_PANOPTIC.items(): - # The "standard" version of COCO panoptic segmentation dataset, - # e.g. 
used by Panoptic-DeepLab - register_ade20k_panoptic( - prefix, - metadata, - os.path.join(root, image_root), - os.path.join(root, panoptic_root), - os.path.join(root, semantic_root), - os.path.join(root, panoptic_json), - os.path.join(root, instance_json), - ) - - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_ade20k_panoptic(_root) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/build_py.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/build_py.py deleted file mode 100644 index f094496e114d17cf8191507edf3c056993b64637..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/build_py.py +++ /dev/null @@ -1,386 +0,0 @@ -from functools import partial -from glob import glob -from distutils.util import convert_path -import distutils.command.build_py as orig -import os -import fnmatch -import textwrap -import io -import distutils.errors -import itertools -import stat -from pathlib import Path -from typing import Dict, Iterable, Iterator, List, Optional, Tuple - -from ..extern.more_itertools import unique_everseen -from ..warnings import SetuptoolsDeprecationWarning - - -def make_writable(target): - os.chmod(target, os.stat(target).st_mode | stat.S_IWRITE) - - -class build_py(orig.build_py): - """Enhanced 'build_py' command that includes data files with packages - - The data files are specified via a 'package_data' argument to 'setup()'. - See 'setuptools.dist.Distribution' for more details. - - Also, this version of the 'build_py' command allows you to specify both - 'py_modules' and 'packages' in the same setup operation. - """ - editable_mode: bool = False - existing_egg_info_dir: Optional[str] = None #: Private API, internal use only. - - def finalize_options(self): - orig.build_py.finalize_options(self) - self.package_data = self.distribution.package_data - self.exclude_package_data = self.distribution.exclude_package_data or {} - if 'data_files' in self.__dict__: - del self.__dict__['data_files'] - self.__updated_files = [] - - def copy_file(self, infile, outfile, preserve_mode=1, preserve_times=1, - link=None, level=1): - # Overwrite base class to allow using links - if link: - infile = str(Path(infile).resolve()) - outfile = str(Path(outfile).resolve()) - return super().copy_file(infile, outfile, preserve_mode, preserve_times, - link, level) - - def run(self): - """Build modules, packages, and copy data files to build directory""" - if not (self.py_modules or self.packages) or self.editable_mode: - return - - if self.py_modules: - self.build_modules() - - if self.packages: - self.build_packages() - self.build_package_data() - - # Only compile actual .py files, using our base class' idea of what our - # output files are. 
- self.byte_compile(orig.build_py.get_outputs(self, include_bytecode=0)) - - def __getattr__(self, attr): - "lazily compute data files" - if attr == 'data_files': - self.data_files = self._get_data_files() - return self.data_files - return orig.build_py.__getattr__(self, attr) - - def build_module(self, module, module_file, package): - outfile, copied = orig.build_py.build_module(self, module, module_file, package) - if copied: - self.__updated_files.append(outfile) - return outfile, copied - - def _get_data_files(self): - """Generate list of '(package,src_dir,build_dir,filenames)' tuples""" - self.analyze_manifest() - return list(map(self._get_pkg_data_files, self.packages or ())) - - def get_data_files_without_manifest(self): - """ - Generate list of ``(package,src_dir,build_dir,filenames)`` tuples, - but without triggering any attempt to analyze or build the manifest. - """ - # Prevent eventual errors from unset `manifest_files` - # (that would otherwise be set by `analyze_manifest`) - self.__dict__.setdefault('manifest_files', {}) - return list(map(self._get_pkg_data_files, self.packages or ())) - - def _get_pkg_data_files(self, package): - # Locate package source directory - src_dir = self.get_package_dir(package) - - # Compute package build directory - build_dir = os.path.join(*([self.build_lib] + package.split('.'))) - - # Strip directory from globbed filenames - filenames = [ - os.path.relpath(file, src_dir) - for file in self.find_data_files(package, src_dir) - ] - return package, src_dir, build_dir, filenames - - def find_data_files(self, package, src_dir): - """Return filenames for package's data files in 'src_dir'""" - patterns = self._get_platform_patterns( - self.package_data, - package, - src_dir, - ) - globs_expanded = map(partial(glob, recursive=True), patterns) - # flatten the expanded globs into an iterable of matches - globs_matches = itertools.chain.from_iterable(globs_expanded) - glob_files = filter(os.path.isfile, globs_matches) - files = itertools.chain( - self.manifest_files.get(package, []), - glob_files, - ) - return self.exclude_data_files(package, src_dir, files) - - def get_outputs(self, include_bytecode=1) -> List[str]: - """See :class:`setuptools.commands.build.SubCommand`""" - if self.editable_mode: - return list(self.get_output_mapping().keys()) - return super().get_outputs(include_bytecode) - - def get_output_mapping(self) -> Dict[str, str]: - """See :class:`setuptools.commands.build.SubCommand`""" - mapping = itertools.chain( - self._get_package_data_output_mapping(), - self._get_module_mapping(), - ) - return dict(sorted(mapping, key=lambda x: x[0])) - - def _get_module_mapping(self) -> Iterator[Tuple[str, str]]: - """Iterate over all modules producing (dest, src) pairs.""" - for (package, module, module_file) in self.find_all_modules(): - package = package.split('.') - filename = self.get_module_outfile(self.build_lib, package, module) - yield (filename, module_file) - - def _get_package_data_output_mapping(self) -> Iterator[Tuple[str, str]]: - """Iterate over package data producing (dest, src) pairs.""" - for package, src_dir, build_dir, filenames in self.data_files: - for filename in filenames: - target = os.path.join(build_dir, filename) - srcfile = os.path.join(src_dir, filename) - yield (target, srcfile) - - def build_package_data(self): - """Copy data files into build directory""" - for target, srcfile in self._get_package_data_output_mapping(): - self.mkpath(os.path.dirname(target)) - _outf, _copied = self.copy_file(srcfile, target) - 
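- # Note: distutils' copy_file() preserves the source file's mode bits, so a read-only
- # data file would stay read-only in the build tree; make_writable() below ORs
- # stat.S_IWRITE back onto the copy so later build steps can modify it.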
make_writable(target) - - def analyze_manifest(self): - self.manifest_files = mf = {} - if not self.distribution.include_package_data: - return - src_dirs = {} - for package in self.packages or (): - # Locate package source directory - src_dirs[assert_relative(self.get_package_dir(package))] = package - - if ( - getattr(self, 'existing_egg_info_dir', None) - and Path(self.existing_egg_info_dir, "SOURCES.txt").exists() - ): - egg_info_dir = self.existing_egg_info_dir - manifest = Path(egg_info_dir, "SOURCES.txt") - files = manifest.read_text(encoding="utf-8").splitlines() - else: - self.run_command('egg_info') - ei_cmd = self.get_finalized_command('egg_info') - egg_info_dir = ei_cmd.egg_info - files = ei_cmd.filelist.files - - check = _IncludePackageDataAbuse() - for path in self._filter_build_files(files, egg_info_dir): - d, f = os.path.split(assert_relative(path)) - prev = None - oldf = f - while d and d != prev and d not in src_dirs: - prev = d - d, df = os.path.split(d) - f = os.path.join(df, f) - if d in src_dirs: - if f == oldf: - if check.is_module(f): - continue # it's a module, not data - else: - importable = check.importable_subpackage(src_dirs[d], f) - if importable: - check.warn(importable) - mf.setdefault(src_dirs[d], []).append(path) - - def _filter_build_files(self, files: Iterable[str], egg_info: str) -> Iterator[str]: - """ - ``build_meta`` may try to create egg_info outside of the project directory, - and this can be problematic for certain plugins (reported in issue #3500). - - Extensions might also include between their sources files created on the - ``build_lib`` and ``build_temp`` directories. - - This function should filter this case of invalid files out. - """ - build = self.get_finalized_command("build") - build_dirs = (egg_info, self.build_lib, build.build_temp, build.build_base) - norm_dirs = [os.path.normpath(p) for p in build_dirs if p] - - for file in files: - norm_path = os.path.normpath(file) - if not os.path.isabs(file) or all(d not in norm_path for d in norm_dirs): - yield file - - def get_data_files(self): - pass # Lazily compute data files in _get_data_files() function. - - def check_package(self, package, package_dir): - """Check namespace packages' __init__ for declare_namespace""" - try: - return self.packages_checked[package] - except KeyError: - pass - - init_py = orig.build_py.check_package(self, package, package_dir) - self.packages_checked[package] = init_py - - if not init_py or not self.distribution.namespace_packages: - return init_py - - for pkg in self.distribution.namespace_packages: - if pkg == package or pkg.startswith(package + '.'): - break - else: - return init_py - - with io.open(init_py, 'rb') as f: - contents = f.read() - if b'declare_namespace' not in contents: - raise distutils.errors.DistutilsError( - "Namespace package problem: %s is a namespace package, but " - "its\n__init__.py does not call declare_namespace()! 
Please " - 'fix it.\n(See the setuptools manual under ' - '"Namespace Packages" for details.)\n"' % (package,) - ) - return init_py - - def initialize_options(self): - self.packages_checked = {} - orig.build_py.initialize_options(self) - self.editable_mode = False - self.existing_egg_info_dir = None - - def get_package_dir(self, package): - res = orig.build_py.get_package_dir(self, package) - if self.distribution.src_root is not None: - return os.path.join(self.distribution.src_root, res) - return res - - def exclude_data_files(self, package, src_dir, files): - """Filter filenames for package's data files in 'src_dir'""" - files = list(files) - patterns = self._get_platform_patterns( - self.exclude_package_data, - package, - src_dir, - ) - match_groups = (fnmatch.filter(files, pattern) for pattern in patterns) - # flatten the groups of matches into an iterable of matches - matches = itertools.chain.from_iterable(match_groups) - bad = set(matches) - keepers = (fn for fn in files if fn not in bad) - # ditch dupes - return list(unique_everseen(keepers)) - - @staticmethod - def _get_platform_patterns(spec, package, src_dir): - """ - yield platform-specific path patterns (suitable for glob - or fn_match) from a glob-based spec (such as - self.package_data or self.exclude_package_data) - matching package in src_dir. - """ - raw_patterns = itertools.chain( - spec.get('', []), - spec.get(package, []), - ) - return ( - # Each pattern has to be converted to a platform-specific path - os.path.join(src_dir, convert_path(pattern)) - for pattern in raw_patterns - ) - - -def assert_relative(path): - if not os.path.isabs(path): - return path - from distutils.errors import DistutilsSetupError - - msg = ( - textwrap.dedent( - """ - Error: setup script specifies an absolute path: - - %s - - setup() arguments must *always* be /-separated paths relative to the - setup.py directory, *never* absolute paths. - """ - ).lstrip() - % path - ) - raise DistutilsSetupError(msg) - - -class _IncludePackageDataAbuse: - """Inform users that package or module is included as 'data file'""" - - class _Warning(SetuptoolsDeprecationWarning): - _SUMMARY = """ - Package {importable!r} is absent from the `packages` configuration. - """ - - _DETAILS = """ - ############################ - # Package would be ignored # - ############################ - Python recognizes {importable!r} as an importable package[^1], - but it is absent from setuptools' `packages` configuration. - - This leads to an ambiguous overall configuration. If you want to distribute this - package, please make sure that {importable!r} is explicitly added - to the `packages` configuration field. - - Alternatively, you can also rely on setuptools' discovery methods - (for example by using `find_namespace_packages(...)`/`find_namespace:` - instead of `find_packages(...)`/`find:`). - - You can read more about "package discovery" on setuptools documentation page: - - - https://setuptools.pypa.io/en/latest/userguide/package_discovery.html - - If you don't want {importable!r} to be distributed and are - already explicitly excluding {importable!r} via - `find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`, - you can try to use `exclude_package_data`, or `include-package-data=False` in - combination with a more fine grained `package-data` configuration. 
- - You can read more about "package data files" on setuptools documentation page: - - - https://setuptools.pypa.io/en/latest/userguide/datafiles.html - - - [^1]: For Python, any directory (with suitable naming) can be imported, - even if it does not contain any `.py` files. - On the other hand, currently there is no concept of package data - directory, all directories are treated like packages. - """ - # _DUE_DATE: still not defined as this is particularly controversial. - # Warning initially introduced in May 2022. See issue #3340 for discussion. - - def __init__(self): - self._already_warned = set() - - def is_module(self, file): - return file.endswith(".py") and file[:-len(".py")].isidentifier() - - def importable_subpackage(self, parent, file): - pkg = Path(file).parent - parts = list(itertools.takewhile(str.isidentifier, pkg.parts)) - if parts: - return ".".join([parent, *parts]) - return None - - def warn(self, importable): - if importable not in self._already_warned: - self._Warning.emit(importable=importable) - self._already_warned.add(importable) diff --git a/spaces/Tetel/secondbing/SydneyGPT/SydneyGPTUtils.py b/spaces/Tetel/secondbing/SydneyGPT/SydneyGPTUtils.py deleted file mode 100644 index 4c328e9390fceca307217c15aed13f1285f5eb6f..0000000000000000000000000000000000000000 --- a/spaces/Tetel/secondbing/SydneyGPT/SydneyGPTUtils.py +++ /dev/null @@ -1,28 +0,0 @@ -from SydneyGPT.SydneyGPT import Chatbot -try: - import EdgeGPT.EdgeGPT as EdgeGPT_module - from EdgeGPT.EdgeUtils import Query as BaseQuery -except ImportError: - import EdgeGPT as EdgeGPT_module - from EdgeUtils import Query as BaseQuery - - -create_method = EdgeGPT_module.Chatbot.create - - -async def new_create(*args, **kwargs): - monkey_create = EdgeGPT_module.Chatbot.create - try: - EdgeGPT_module.Chatbot.create = create_method - gpt_bot_create = Chatbot.create(*args, **kwargs) - return await gpt_bot_create - finally: - EdgeGPT_module.Chatbot.create = monkey_create - - -EdgeGPT_module.Chatbot.create = staticmethod(new_create) - - -class Query(BaseQuery): - pass - diff --git a/spaces/TuringAgency/anic_gui/index.html b/spaces/TuringAgency/anic_gui/index.html deleted file mode 100644 index 50e66d1408a13d9523559a0418c5abfa23139ff1..0000000000000000000000000000000000000000 --- a/spaces/TuringAgency/anic_gui/index.html +++ /dev/null @@ -1,15 +0,0 @@ - - - - - - - Anic GUI - - - - -
- - - diff --git a/spaces/VIPLab/Track-Anything/tools/painter.py b/spaces/VIPLab/Track-Anything/tools/painter.py deleted file mode 100644 index 0e711d35aa8348d15cdad9d1cd413da41ea4f1ab..0000000000000000000000000000000000000000 --- a/spaces/VIPLab/Track-Anything/tools/painter.py +++ /dev/null @@ -1,215 +0,0 @@ -# paint masks, contours, or points on images, with specified colors -import cv2 -import torch -import numpy as np -from PIL import Image -import copy -import time - - -def colormap(rgb=True): - color_list = np.array( - [ - 0.000, 0.000, 0.000, - 1.000, 1.000, 1.000, - 1.000, 0.498, 0.313, - 0.392, 0.581, 0.929, - 0.000, 0.447, 0.741, - 0.850, 0.325, 0.098, - 0.929, 0.694, 0.125, - 0.494, 0.184, 0.556, - 0.466, 0.674, 0.188, - 0.301, 0.745, 0.933, - 0.635, 0.078, 0.184, - 0.300, 0.300, 0.300, - 0.600, 0.600, 0.600, - 1.000, 0.000, 0.000, - 1.000, 0.500, 0.000, - 0.749, 0.749, 0.000, - 0.000, 1.000, 0.000, - 0.000, 0.000, 1.000, - 0.667, 0.000, 1.000, - 0.333, 0.333, 0.000, - 0.333, 0.667, 0.000, - 0.333, 1.000, 0.000, - 0.667, 0.333, 0.000, - 0.667, 0.667, 0.000, - 0.667, 1.000, 0.000, - 1.000, 0.333, 0.000, - 1.000, 0.667, 0.000, - 1.000, 1.000, 0.000, - 0.000, 0.333, 0.500, - 0.000, 0.667, 0.500, - 0.000, 1.000, 0.500, - 0.333, 0.000, 0.500, - 0.333, 0.333, 0.500, - 0.333, 0.667, 0.500, - 0.333, 1.000, 0.500, - 0.667, 0.000, 0.500, - 0.667, 0.333, 0.500, - 0.667, 0.667, 0.500, - 0.667, 1.000, 0.500, - 1.000, 0.000, 0.500, - 1.000, 0.333, 0.500, - 1.000, 0.667, 0.500, - 1.000, 1.000, 0.500, - 0.000, 0.333, 1.000, - 0.000, 0.667, 1.000, - 0.000, 1.000, 1.000, - 0.333, 0.000, 1.000, - 0.333, 0.333, 1.000, - 0.333, 0.667, 1.000, - 0.333, 1.000, 1.000, - 0.667, 0.000, 1.000, - 0.667, 0.333, 1.000, - 0.667, 0.667, 1.000, - 0.667, 1.000, 1.000, - 1.000, 0.000, 1.000, - 1.000, 0.333, 1.000, - 1.000, 0.667, 1.000, - 0.167, 0.000, 0.000, - 0.333, 0.000, 0.000, - 0.500, 0.000, 0.000, - 0.667, 0.000, 0.000, - 0.833, 0.000, 0.000, - 1.000, 0.000, 0.000, - 0.000, 0.167, 0.000, - 0.000, 0.333, 0.000, - 0.000, 0.500, 0.000, - 0.000, 0.667, 0.000, - 0.000, 0.833, 0.000, - 0.000, 1.000, 0.000, - 0.000, 0.000, 0.167, - 0.000, 0.000, 0.333, - 0.000, 0.000, 0.500, - 0.000, 0.000, 0.667, - 0.000, 0.000, 0.833, - 0.000, 0.000, 1.000, - 0.143, 0.143, 0.143, - 0.286, 0.286, 0.286, - 0.429, 0.429, 0.429, - 0.571, 0.571, 0.571, - 0.714, 0.714, 0.714, - 0.857, 0.857, 0.857 - ] - ).astype(np.float32) - color_list = color_list.reshape((-1, 3)) * 255 - if not rgb: - color_list = color_list[:, ::-1] - return color_list - - -color_list = colormap() -color_list = color_list.astype('uint8').tolist() - - -def vis_add_mask(image, mask, color, alpha): - color = np.array(color_list[color]) - mask = mask > 0.5 - image[mask] = image[mask] * (1-alpha) + color * alpha - return image.astype('uint8') - -def point_painter(input_image, input_points, point_color=5, point_alpha=0.9, point_radius=15, contour_color=2, contour_width=5): - h, w = input_image.shape[:2] - point_mask = np.zeros((h, w)).astype('uint8') - for point in input_points: - point_mask[point[1], point[0]] = 1 - - kernel = cv2.getStructuringElement(2, (point_radius, point_radius)) - point_mask = cv2.dilate(point_mask, kernel) - - contour_radius = (contour_width - 1) // 2 - dist_transform_fore = cv2.distanceTransform(point_mask, cv2.DIST_L2, 3) - dist_transform_back = cv2.distanceTransform(1-point_mask, cv2.DIST_L2, 3) - dist_map = dist_transform_fore - dist_transform_back - # ...:::!!!:::... 
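- # dist_map is a signed distance to the mask edge (positive inside, negative outside);
- # clipping it to +/- contour_radius and normalizing below leaves a thin low-valued
- # band along the boundary, which is what gets painted as the contour.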
- contour_radius += 2 - contour_mask = np.abs(np.clip(dist_map, -contour_radius, contour_radius)) - contour_mask = contour_mask / np.max(contour_mask) - contour_mask[contour_mask>0.5] = 1. - - # paint mask - painted_image = vis_add_mask(input_image.copy(), point_mask, point_color, point_alpha) - # paint contour - painted_image = vis_add_mask(painted_image.copy(), 1-contour_mask, contour_color, 1) - return painted_image - -def mask_painter(input_image, input_mask, mask_color=5, mask_alpha=0.7, contour_color=1, contour_width=3): - assert input_image.shape[:2] == input_mask.shape, 'different shape between image and mask' - # 0: background, 1: foreground - mask = np.clip(input_mask, 0, 1) - contour_radius = (contour_width - 1) // 2 - - dist_transform_fore = cv2.distanceTransform(mask, cv2.DIST_L2, 3) - dist_transform_back = cv2.distanceTransform(1-mask, cv2.DIST_L2, 3) - dist_map = dist_transform_fore - dist_transform_back - # ...:::!!!:::... - contour_radius += 2 - contour_mask = np.abs(np.clip(dist_map, -contour_radius, contour_radius)) - contour_mask = contour_mask / np.max(contour_mask) - contour_mask[contour_mask>0.5] = 1. - - # paint mask - painted_image = vis_add_mask(input_image.copy(), mask.copy(), mask_color, mask_alpha) - # paint contour - painted_image = vis_add_mask(painted_image.copy(), 1-contour_mask, contour_color, 1) - - return painted_image - -def background_remover(input_image, input_mask): - """ - input_image: H, W, 3, np.array - input_mask: H, W, np.array - - image_wo_background: PIL.Image - """ - assert input_image.shape[:2] == input_mask.shape, 'different shape between image and mask' - # 0: background, 1: foreground - mask = np.expand_dims(np.clip(input_mask, 0, 1), axis=2)*255 - image_wo_background = np.concatenate([input_image, mask], axis=2) # H, W, 4 - image_wo_background = Image.fromarray(image_wo_background).convert('RGBA') - - return image_wo_background - -if __name__ == '__main__': - input_image = np.array(Image.open('images/painter_input_image.jpg').convert('RGB')) - input_mask = np.array(Image.open('images/painter_input_mask.jpg').convert('P')) - - # example of mask painter - mask_color = 3 - mask_alpha = 0.7 - contour_color = 1 - contour_width = 5 - - # save - painted_image = Image.fromarray(input_image) - painted_image.save('images/original.png') - - painted_image = mask_painter(input_image, input_mask, mask_color, mask_alpha, contour_color, contour_width) - # save - painted_image = Image.fromarray(input_image) - painted_image.save('images/original1.png') - - # example of point painter - input_image = np.array(Image.open('images/painter_input_image.jpg').convert('RGB')) - input_points = np.array([[500, 375], [70, 600]]) # x, y - point_color = 5 - point_alpha = 0.9 - point_radius = 15 - contour_color = 2 - contour_width = 5 - painted_image_1 = point_painter(input_image, input_points, point_color, point_alpha, point_radius, contour_color, contour_width) - # save - painted_image = Image.fromarray(painted_image_1) - painted_image.save('images/point_painter_1.png') - - input_image = np.array(Image.open('images/painter_input_image.jpg').convert('RGB')) - painted_image_2 = point_painter(input_image, input_points, point_color=9, point_radius=20, contour_color=29) - # save - painted_image = Image.fromarray(painted_image_2) - painted_image.save('images/point_painter_2.png') - - # example of background remover - input_image = np.array(Image.open('images/original.png').convert('RGB')) - image_wo_background = background_remover(input_image, input_mask) # return PIL.Image 
- image_wo_background.save('images/image_wo_background.png') diff --git a/spaces/Vijish/PoPd-PoPArT/app.py b/spaces/Vijish/PoPd-PoPArT/app.py deleted file mode 100644 index 3ad1606aa36466b590dc9d60c8058e94acec7635..0000000000000000000000000000000000000000 --- a/spaces/Vijish/PoPd-PoPArT/app.py +++ /dev/null @@ -1,159 +0,0 @@ -import streamlit as st -import urllib.request -import PIL.Image -from PIL import Image -import requests -import fastai -from fastai.vision import * -from fastai.utils.mem import * -from fastai.vision import open_image, load_learner, image, torch -import numpy as np -from urllib.request import urlretrieve -from io import BytesIO -import numpy as np -import torchvision.transforms as T -from PIL import Image,ImageOps,ImageFilter -from io import BytesIO -import os - - - - -class FeatureLoss(nn.Module): - def __init__(self, m_feat, layer_ids, layer_wgts): - super().__init__() - self.m_feat = m_feat - self.loss_features = [self.m_feat[i] for i in layer_ids] - self.hooks = hook_outputs(self.loss_features, detach=False) - self.wgts = layer_wgts - self.metric_names = ['pixel',] + [f'feat_{i}' for i in range(len(layer_ids)) - ] + [f'gram_{i}' for i in range(len(layer_ids))] - - def make_features(self, x, clone=False): - self.m_feat(x) - return [(o.clone() if clone else o) for o in self.hooks.stored] - - def forward(self, input, target): - out_feat = self.make_features(target, clone=True) - in_feat = self.make_features(input) - self.feat_losses = [base_loss(input,target)] - self.feat_losses += [base_loss(f_in, f_out)*w - for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)] - self.feat_losses += [base_loss(gram_matrix(f_in), gram_matrix(f_out))*w**2 * 5e3 - for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)] - self.metrics = dict(zip(self.metric_names, self.feat_losses)) - return sum(self.feat_losses) - - def __del__(self): self.hooks.remove() - - -def getNeighbours(i, j, n, m) : - arr = [] - if i-1 >= 0 and j-1 >= 0 : - arr.append((i-1, j-1)) - if i-1 >= 0 : - arr.append((i-1, j)) - if i-1 >= 0 and j+1 < m : - arr.append((i-1, j+1)) - if j+1 < m : - arr.append((i, j+1)) - if i+1 < n and j+1 < m : - arr.append((i+1, j+1)) - if i+1 < n : - arr.append((i+1, j)) - if i+1 < n and j-1 >= 0 : - arr.append((i+1, j-1)) - if j-1 >= 0 : - arr.append((i, j-1)) - return arr - -MODEL_URL = "https://www.dropbox.com/s/05ong36r29h51ov/popd.pkl?dl=1" -urllib.request.urlretrieve(MODEL_URL, "popd.pkl") -path = Path(".") -learn=load_learner(path, 'popd.pkl') - -def predict(image,colour): - img_fast = open_image(image) - a = PIL.Image.open(image).convert('RGB') - st.image(a, caption='Input') - p,img_hr,b = learn.predict(img_fast) - x = np.minimum(np.maximum(image2np(img_hr.data*255), 0), 255).astype(np.uint8) - img = PIL.Image.fromarray(x).convert('RGB') - size = a.size - im1 = img.resize(size) - membuf = BytesIO() - im1.save(membuf, format="png") - im = Image.open(membuf) - im = im.convert('RGBA') - data = np.array(im) # "data" is a height x width x 4 numpy array - red, green, blue, alpha = data.T # Temporarily unpack the bands for readability' - white_areas = (red == 0) & (blue == 0) & (green == 0) - data[..., :-1][white_areas.T] = (0,0,0) # Transpose back needed - im2 = Image.fromarray(data) - membuf = BytesIO() - im2.save(membuf, format="png") - img = Image.open(membuf) - bitmap = img.load() - n = img.size[0] - m = img.size[1] - stateMap = [] - for i in range(n): - stateMap.append([False for j in range(m)]) - queue = [(0, 0)] - while queue: - e = queue.pop(0) - i = e[0] - j = e[1] - if 
not stateMap[i][j]: - stateMap[i][j] = True - color = int((bitmap[i, j][0] + bitmap[i, j][1] + bitmap[i, j][2])/3) - if color > 100: - bitmap[i, j] =colour - neigh = getNeighbours(i, j, n, m) - for ne in neigh: - queue.append(ne) - - return st.image(img, caption='PoP ArT') - -SIDEBAR_OPTION_DEMO_IMAGE = "Select a Demo Image" -SIDEBAR_OPTION_UPLOAD_IMAGE = "Upload an Image" -#SIDEBAR_OPTION_COLOUR_IMAGE = "Choose a colour" - -SIDEBAR_OPTIONS = [SIDEBAR_OPTION_DEMO_IMAGE, SIDEBAR_OPTION_UPLOAD_IMAGE] -st.sidebar.write("Check out GitHub [link](https://github.com/vijishmadhavan/PoPd)") - - -app_mode = st.sidebar.selectbox("Please select from the following", SIDEBAR_OPTIONS) -photos = ["fight.jpg","shaolin-kung-fu.jpg","unnamed.jpg","michael-jackson.png"] -colour = ['Red','Blue','Yellow'] -if app_mode == SIDEBAR_OPTION_DEMO_IMAGE: - st.sidebar.write(" ------ ") - option = st.sidebar.selectbox('Please select a sample image,colour and then click PoP button', photos) - colour = st.sidebar.selectbox("Colour", colour) - if colour == 'Red': - colour = (185, 39, 40) - elif colour == 'Blue': - colour = (40, 96, 219) - else: - colour = (249, 223, 2) - pressed = st.sidebar.button('PoP') - if pressed: - st.empty() - st.sidebar.write('Please wait for the magic to happen! This may take up to a minute.') - predict(option,colour) - -elif app_mode == SIDEBAR_OPTION_UPLOAD_IMAGE: - uploaded_file = st.file_uploader("Choose an image...") - if uploaded_file is not None: - colour = st.sidebar.selectbox("Colour", colour) - if colour == 'Red': - colour = (185, 39, 40) - elif colour == 'Blue': - colour = (40, 96, 219) - else: - colour = (249, 223, 2) - pressed = st.sidebar.button('PoP') - if pressed: - st.empty() - st.sidebar.write('Please wait for the magic to happen! This may take up to a minute.') - predict(uploaded_file,colour) diff --git a/spaces/Widium/Style-Recreation/functions/system/devices.py b/spaces/Widium/Style-Recreation/functions/system/devices.py deleted file mode 100644 index e046ce76d48b77ad82502603c096c893775c9eef..0000000000000000000000000000000000000000 --- a/spaces/Widium/Style-Recreation/functions/system/devices.py +++ /dev/null @@ -1,27 +0,0 @@ -# *************************************************************************** # -# # -# devices.py # -# # -# By: Widium # -# Github : https://github.com/widium # -# # -# Created: 2023/05/05 10:57:02 by Widium # -# Updated: 2023/05/05 10:57:02 by Widium # -# # -# **************************************************************************** # - -import os - -def deactivate_gpu(): - os.environ['CUDA_VISIBLE_DEVICES'] = '-1' - -import tensorflow as tf -from tensorflow.python.client import device_lib - - -def get_available_devices(): - local_device_protos = device_lib.list_local_devices() - devices = [x.name for x in local_device_protos] - print("Available devices:", devices) - -# print("GPU AVAILABLE ?", tf.config.list_physical_devices('GPU')) diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/modules/__init__.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/modules/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/modules/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
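The PoP-ArT `predict()` above recolours the model output by flood-filling from the top-left corner: it walks the bitmap breadth-first via `getNeighbours`, repainting (and expanding through) every pixel whose mean channel value exceeds 100. A minimal, self-contained sketch of that traversal, assuming a plain 2-D list of RGB tuples in place of a PIL pixel-access object (`flood_recolor` and its names are illustrative, not from the app):

```python
from collections import deque

def flood_recolor(pixels, colour, threshold=100):
    """BFS from (0, 0): repaint every reachable pixel whose mean channel
    value exceeds `threshold`; only such light pixels propagate the fill,
    mirroring the queue loop inside predict()."""
    n, m = len(pixels), len(pixels[0])
    seen = [[False] * m for _ in range(n)]
    queue = deque([(0, 0)])
    while queue:
        i, j = queue.popleft()
        if seen[i][j]:
            continue
        seen[i][j] = True
        if sum(pixels[i][j]) / 3 > threshold:
            pixels[i][j] = colour  # repaint, then expand 8-connected
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (di or dj) and 0 <= ni < n and 0 <= nj < m:
                        queue.append((ni, nj))
    return pixels

# Tiny demo with the app's "Red" colour (185, 39, 40):
img = [[(220, 220, 220), (230, 230, 230), (30, 30, 30)],
       [(240, 240, 240), (30, 30, 30), (250, 250, 250)],
       [(210, 210, 210), (225, 225, 225), (30, 30, 30)]]
flood_recolor(img, (185, 39, 40))
```

As in the original loop, dark pixels are marked visited but never enqueue their neighbours, so the repaint stays confined to the connected light region around the starting corner.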
diff --git a/spaces/Xenova/sponsorblock-ml/src/segment.py b/spaces/Xenova/sponsorblock-ml/src/segment.py deleted file mode 100644 index 4cbbf21866de3de0da90a722d0d36cdcdf13c4f6..0000000000000000000000000000000000000000 --- a/spaces/Xenova/sponsorblock-ml/src/segment.py +++ /dev/null @@ -1,166 +0,0 @@ -import preprocess -from dataclasses import dataclass, field - - -@dataclass -class SegmentationArguments: - pause_threshold: int = field(default=2.5, metadata={ - 'help': 'When the time between words is greater than pause threshold, force into a new segment'}) - - -def get_overlapping_chunks_of_tokens(tokens, size, overlap): - for i in range(0, len(tokens), size-overlap+1): - yield tokens[i:i+size] - - -# Generate up to SAFETY_TOKENS_PERCENTAGE*max_tokens tokens -MIN_SAFETY_TOKENS = 8 -SAFETY_TOKENS_PERCENTAGE = 0.9765625 -# e.g. 512 -> 500, 768 -> 750 - - -# TODO play around with this? -OVERLAP_TOKEN_PERCENTAGE = 0.5 # 0.25 - - -def add_labels_to_words(words, sponsor_segments): - - for sponsor_segment in sponsor_segments: - for w in extract_segment(words, sponsor_segment['start'], sponsor_segment['end']): - w['category'] = sponsor_segment['category'] - - return words - - -def generate_labelled_segments(words, tokenizer, segmentation_args, sponsor_segments): - segments = generate_segments(words, tokenizer, segmentation_args) - - labelled_segments = list( - map(lambda x: add_labels_to_words(x, sponsor_segments), segments)) - - return labelled_segments - - -def word_start(word): - return word['start'] - - -def word_end(word): - return word.get('end', word['start']) - - -def generate_segments(words, tokenizer, segmentation_args): - - cleaned_words_list = [] - for w in words: - w['cleaned'] = preprocess.clean_text(w['text']) - cleaned_words_list.append(w['cleaned']) - - # Get lengths of tokenized words - num_tokens_list = tokenizer(cleaned_words_list, add_special_tokens=False, - truncation=True, return_attention_mask=False, return_length=True).length - - first_pass_segments = [] - for index, (word, num_tokens) in enumerate(zip(words, num_tokens_list)): - word['num_tokens'] = num_tokens - - # Add new segment - if index == 0 or word_start(words[index]) - word_end(words[index-1]) >= segmentation_args.pause_threshold: - first_pass_segments.append([word]) - - else: # Add to current segment - first_pass_segments[-1].append(word) - - max_q_size = round(SAFETY_TOKENS_PERCENTAGE * tokenizer.model_max_length) - - buffer_size = OVERLAP_TOKEN_PERCENTAGE*max_q_size # tokenizer.model_max_length - - # In second pass, we split those segments if too big - second_pass_segments = [] - - for segment in first_pass_segments: - current_segment_num_tokens = 0 - current_segment = [] - after_split_segments = [] - for word in segment: - new_seg = current_segment_num_tokens + \ - word['num_tokens'] >= max_q_size - if new_seg: - # Adding this token would make it have too many tokens - # We save this batch and create new - after_split_segments.append(current_segment) - - # Add tokens to current segment - current_segment.append(word) - current_segment_num_tokens += word['num_tokens'] - - if not new_seg: - continue - - # Just created a new segment, so we remove until we only have buffer_size tokens - last_index = 0 - while current_segment_num_tokens > buffer_size and current_segment: - current_segment_num_tokens -= current_segment[last_index]['num_tokens'] - last_index += 1 - - current_segment = current_segment[last_index:] - - if current_segment: # Add remaining segment - after_split_segments.append(current_segment) - - # TODO 
if len(after_split_segments) > 1, a split occurred - - second_pass_segments.extend(after_split_segments) - - # Cleaning up, delete 'num_tokens' from each word - for word in words: - word.pop('num_tokens', None) - - return second_pass_segments - - -def extract_segment(words, start, end, map_function=None): - """Extracts all words with time in [start, end]""" - if words is None: - words = [] - - a = max(binary_search_below(words, 0, len(words), start), 0) - b = min(binary_search_above(words, -1, len(words) - 1, end) + 1, len(words)) - - to_transform = map_function is not None and callable(map_function) - - return [ - map_function(words[i]) if to_transform else words[i] for i in range(a, b) - ] - - -def avg(*items): - return sum(items)/len(items) - - -def binary_search_below(transcript, start_index, end_index, time): - if start_index >= end_index: - return end_index - - middle_index = (start_index + end_index) // 2 - middle = transcript[middle_index] - middle_time = avg(word_start(middle), word_end(middle)) - - if time <= middle_time: - return binary_search_below(transcript, start_index, middle_index, time) - else: - return binary_search_below(transcript, middle_index + 1, end_index, time) - - -def binary_search_above(transcript, start_index, end_index, time): - if start_index >= end_index: - return end_index - - middle_index = (start_index + end_index + 1) // 2 - middle = transcript[middle_index] - middle_time = avg(word_start(middle), word_end(middle)) - - if time >= middle_time: - return binary_search_above(transcript, middle_index, end_index, time) - else: - return binary_search_above(transcript, start_index, middle_index - 1, time) diff --git a/spaces/XzJosh/Ava-Bert-VITS2/text/tone_sandhi.py b/spaces/XzJosh/Ava-Bert-VITS2/text/tone_sandhi.py deleted file mode 100644 index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Ava-Bert-VITS2/text/tone_sandhi.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
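- # ToneSandhi post-processes pypinyin finals for a Mandarin TTS front-end: neutral-tone
- # (轻声) vocabulary, "不" sandhi (bu4 -> bu2 before a tone-4 syllable), "一" sandhi
- # (yi2 before tone 4, yi4 otherwise, yi5 between reduplicated verbs), and third-tone
- # sandhi (3-3 -> 2-3), with jieba-based word merging so the rules see whole words.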
-from typing import List -from typing import Tuple - -import jieba -from pypinyin import lazy_pinyin -from pypinyin import Style - - -class ToneSandhi(): - def __init__(self): - self.must_neural_tone_words = { - '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝', - '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊', - '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去', - '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号', - '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当', - '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻', - '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂', - '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆', - '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂', - '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿', - '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台', - '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算', - '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨', - '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快', - '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜', - '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔', - '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事', - '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾', - '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼', - '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实', - '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头', - '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼', - '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数', - '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气', - '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈', - '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方', - '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴', - '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦', - '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝', - '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹', - '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息', - '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤', - '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家', - '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故', - '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨', - '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅', - '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱', - '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱', - '扫把', '惦记' - } - self.must_not_neural_tone_words = { - "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎" - } - self.punc = ":,;。?!“”‘’':,;.?!" - - # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041 - # e.g. - # word: "家里" - # pos: "s" - # finals: ['ia1', 'i3'] - def _neural_sandhi(self, word: str, pos: str, - finals: List[str]) -> List[str]: - - # reduplication words for n. and v. e.g. 
奶奶, 试试, 旺旺 - for j, item in enumerate(word): - if j - 1 >= 0 and item == word[j - 1] and pos[0] in { - "n", "v", "a" - } and word not in self.must_not_neural_tone_words: - finals[j] = finals[j][:-1] + "5" - ge_idx = word.find("个") - if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶": - finals[-1] = finals[-1][:-1] + "5" - elif len(word) >= 1 and word[-1] in "的地得": - finals[-1] = finals[-1][:-1] + "5" - # e.g. 走了, 看着, 去过 - # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}: - # finals[-1] = finals[-1][:-1] + "5" - elif len(word) > 1 and word[-1] in "们子" and pos in { - "r", "n" - } and word not in self.must_not_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 桌上, 地下, 家里 - elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 上来, 下去 - elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开": - finals[-1] = finals[-1][:-1] + "5" - # 个做量词 - elif (ge_idx >= 1 and - (word[ge_idx - 1].isnumeric() or - word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个': - finals[ge_idx] = finals[ge_idx][:-1] + "5" - else: - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - - word_list = self._split_word(word) - finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]] - for i, word in enumerate(word_list): - # conventional neural in Chinese - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals_list[i][-1] = finals_list[i][-1][:-1] + "5" - finals = sum(finals_list, []) - return finals - - def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]: - # e.g. 看不懂 - if len(word) == 3 and word[1] == "不": - finals[1] = finals[1][:-1] + "5" - else: - for i, char in enumerate(word): - # "不" before tone4 should be bu2, e.g. 不怕 - if char == "不" and i + 1 < len(word) and finals[i + - 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - return finals - - def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]: - # "一" in number sequences, e.g. 一零零, 二一零 - if word.find("一") != -1 and all( - [item.isnumeric() for item in word if item != "一"]): - return finals - # "一" between reduplication words shold be yi5, e.g. 看一看 - elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]: - finals[1] = finals[1][:-1] + "5" - # when "一" is ordinal word, it should be yi1 - elif word.startswith("第一"): - finals[1] = finals[1][:-1] + "1" - else: - for i, char in enumerate(word): - if char == "一" and i + 1 < len(word): - # "一" before tone4 should be yi2, e.g. 一段 - if finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - # "一" before non-tone4 should be yi4, e.g. 
一天 - else: - # "一" 后面如果是标点,还读一声 - if word[i + 1] not in self.punc: - finals[i] = finals[i][:-1] + "4" - return finals - - def _split_word(self, word: str) -> List[str]: - word_list = jieba.cut_for_search(word) - word_list = sorted(word_list, key=lambda i: len(i), reverse=False) - first_subword = word_list[0] - first_begin_idx = word.find(first_subword) - if first_begin_idx == 0: - second_subword = word[len(first_subword):] - new_word_list = [first_subword, second_subword] - else: - second_subword = word[:-len(first_subword)] - new_word_list = [second_subword, first_subword] - return new_word_list - - def _three_sandhi(self, word: str, finals: List[str]) -> List[str]: - if len(word) == 2 and self._all_tone_three(finals): - finals[0] = finals[0][:-1] + "2" - elif len(word) == 3: - word_list = self._split_word(word) - if self._all_tone_three(finals): - # disyllabic + monosyllabic, e.g. 蒙古/包 - if len(word_list[0]) == 2: - finals[0] = finals[0][:-1] + "2" - finals[1] = finals[1][:-1] + "2" - # monosyllabic + disyllabic, e.g. 纸/老虎 - elif len(word_list[0]) == 1: - finals[1] = finals[1][:-1] + "2" - else: - finals_list = [ - finals[:len(word_list[0])], finals[len(word_list[0]):] - ] - if len(finals_list) == 2: - for i, sub in enumerate(finals_list): - # e.g. 所有/人 - if self._all_tone_three(sub) and len(sub) == 2: - finals_list[i][0] = finals_list[i][0][:-1] + "2" - # e.g. 好/喜欢 - elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \ - finals_list[0][-1][-1] == "3": - - finals_list[0][-1] = finals_list[0][-1][:-1] + "2" - finals = sum(finals_list, []) - # split idiom into two words who's length is 2 - elif len(word) == 4: - finals_list = [finals[:2], finals[2:]] - finals = [] - for sub in finals_list: - if self._all_tone_three(sub): - sub[0] = sub[0][:-1] + "2" - finals += sub - - return finals - - def _all_tone_three(self, finals: List[str]) -> bool: - return all(x[-1] == "3" for x in finals) - - # merge "不" and the word behind it - # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error - def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - last_word = "" - for word, pos in seg: - if last_word == "不": - word = last_word + word - if word != "不": - new_seg.append((word, pos)) - last_word = word[:] - if last_word == "不": - new_seg.append((last_word, 'd')) - last_word = "" - return new_seg - - # function 1: merge "一" and reduplication words in it's left and right, e.g. "听","一","听" ->"听一听" - # function 2: merge single "一" and the word behind it - # if don't merge, "一" sometimes appears alone according to jieba, which may occur sandhi error - # e.g. 
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')] - # output seg: [['听一听', 'v']] - def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - # function 1 - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][ - 0] == seg[i + 1][0] and seg[i - 1][1] == "v": - new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0] - else: - if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][ - 0] == word and pos == "v": - continue - else: - new_seg.append([word, pos]) - seg = new_seg - new_seg = [] - # function 2 - for i, (word, pos) in enumerate(seg): - if new_seg and new_seg[-1][0] == "一": - new_seg[-1][0] = new_seg[-1][0] + word - else: - new_seg.append([word, pos]) - return new_seg - - # the first and the second words are all_tone_three - def _merge_continuous_three_tones( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and self._all_tone_three( - sub_finals_list[i - 1]) and self._all_tone_three( - sub_finals_list[i]) and not merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - - return new_seg - - def _is_reduplication(self, word: str) -> bool: - return len(word) == 2 and word[0] == word[1] - - # the last char of first word and the first char of second word is tone_three - def _merge_continuous_three_tones_2( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \ - merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#": - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_reduplication( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if new_seg and word == new_seg[-1][0]: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def pre_merge_for_modify( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - seg = self._merge_bu(seg) - try: - seg = self._merge_yi(seg) - except: - print("_merge_yi failed") - seg = self._merge_reduplication(seg) - seg = 
self._merge_continuous_three_tones(seg) - seg = self._merge_continuous_three_tones_2(seg) - seg = self._merge_er(seg) - return seg - - def modified_tone(self, word: str, pos: str, - finals: List[str]) -> List[str]: - finals = self._bu_sandhi(word, finals) - finals = self._yi_sandhi(word, finals) - finals = self._neural_sandhi(word, pos, finals) - finals = self._three_sandhi(word, finals) - return finals diff --git a/spaces/XzJosh/Lumi-Bert-VITS2/data_utils.py b/spaces/XzJosh/Lumi-Bert-VITS2/data_utils.py deleted file mode 100644 index be3a29a93188c5b3386f22e5db29e5e96d78109a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Lumi-Bert-VITS2/data_utils.py +++ /dev/null @@ -1,321 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data -import commons -from mel_processing import spectrogram_torch, mel_spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import cleaned_text_to_sequence, get_bert - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.spk_map = hparams.spk2id - self.hparams = hparams - - self.use_mel_spec_posterior = getattr(hparams, "use_mel_posterior_encoder", False) - if self.use_mel_spec_posterior: - self.n_mel_channels = getattr(hparams, "n_mel_channels", 80) - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 300) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - skipped = 0 - for _id, spk, language, text, phones, tone, word2ph in self.audiopaths_sid_text: - audiopath = f'{_id}' - if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len: - phones = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - audiopaths_sid_text_new.append([audiopath, spk, language, text, phones, tone, word2ph]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - else: - skipped += 1 - print("skipped: ", skipped, ", total: ", len(self.audiopaths_sid_text)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text - - bert, phones, tone, language = self.get_text(text, word2ph, phones, tone, language, audiopath) - - spec, wav = self.get_audio(audiopath) - sid = torch.LongTensor([int(self.spk_map[sid])]) - return (phones, spec, wav, 
                sid, tone, language, bert)
-
-    def get_audio(self, filename):
-        audio, sampling_rate = load_wav_to_torch(filename)
-        if sampling_rate != self.sampling_rate:
-            raise ValueError("{} {} SR doesn't match target {} SR".format(
-                filename, sampling_rate, self.sampling_rate))
-        audio_norm = audio / self.max_wav_value
-        audio_norm = audio_norm.unsqueeze(0)
-        spec_filename = filename.replace(".wav", ".spec.pt")
-        if self.use_mel_spec_posterior:
-            spec_filename = spec_filename.replace(".spec.pt", ".mel.pt")
-        try:
-            spec = torch.load(spec_filename)
-        except Exception:
-            if self.use_mel_spec_posterior:
-                spec = mel_spectrogram_torch(audio_norm, self.filter_length,
-                    self.n_mel_channels, self.sampling_rate, self.hop_length,
-                    self.win_length, self.hparams.mel_fmin, self.hparams.mel_fmax, center=False)
-            else:
-                spec = spectrogram_torch(audio_norm, self.filter_length,
-                    self.sampling_rate, self.hop_length, self.win_length,
-                    center=False)
-            spec = torch.squeeze(spec, 0)
-            torch.save(spec, spec_filename)
-        return spec, audio_norm
-
-    def get_text(self, text, word2ph, phone, tone, language_str, wav_path):
-        pold = phone
-        w2pho = [i for i in word2ph]
-        word2ph = [i for i in word2ph]
-        phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str)
-        pold2 = phone
-
-        if self.add_blank:
-            p1 = len(phone)
-            phone = commons.intersperse(phone, 0)
-            p2 = len(phone)
-            t1 = len(tone)
-            tone = commons.intersperse(tone, 0)
-            t2 = len(tone)
-            language = commons.intersperse(language, 0)
-            for i in range(len(word2ph)):
-                word2ph[i] = word2ph[i] * 2
-            word2ph[0] += 1
-        bert_path = wav_path.replace(".wav", ".bert.pt")
-        try:
-            bert = torch.load(bert_path)
-            assert bert.shape[-1] == len(phone)
-        except Exception:
-            bert = get_bert(text, word2ph, language_str)
-            torch.save(bert, bert_path)
-            #print(bert.shape[-1], bert_path, text, pold)
-            assert bert.shape[-1] == len(phone)
-
-        assert bert.shape[-1] == len(phone), (
-            bert.shape, len(phone), sum(word2ph), p1, p2, t1, t2, pold, pold2, word2ph, text, w2pho)
-        phone = torch.LongTensor(phone)
-        tone = torch.LongTensor(tone)
-        language = torch.LongTensor(language)
-        return bert, phone, tone, language
-
-    def get_sid(self, sid):
-        sid = torch.LongTensor([int(sid)])
-        return sid
-
-    def __getitem__(self, index):
-        return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index])
-
-    def __len__(self):
-        return len(self.audiopaths_sid_text)
-
-
-class TextAudioSpeakerCollate():
-    """ Zero-pads model inputs and targets
-    """
-
-    def __init__(self, return_ids=False):
-        self.return_ids = return_ids
-
-    def __call__(self, batch):
-        """Collates a training batch from normalized text, audio and speaker identities
-        PARAMS
-        ------
-        batch: [text_normalized, spec_normalized, wav_normalized, sid]
-        """
-        # Right zero-pad all one-hot text sequences to max input length
-        _, ids_sorted_decreasing = torch.sort(
-            torch.LongTensor([x[1].size(1) for x in batch]),
-            dim=0, descending=True)
-
-        max_text_len = max([len(x[0]) for x in batch])
-        max_spec_len = max([x[1].size(1) for x in batch])
-        max_wav_len = max([x[2].size(1) for x in batch])
-
-        text_lengths = torch.LongTensor(len(batch))
-        spec_lengths = torch.LongTensor(len(batch))
-        wav_lengths = torch.LongTensor(len(batch))
-        sid = torch.LongTensor(len(batch))
-
-        text_padded = torch.LongTensor(len(batch), max_text_len)
-        tone_padded = torch.LongTensor(len(batch), max_text_len)
-        language_padded = torch.LongTensor(len(batch), max_text_len)
-        bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len)
-
-        spec_padded = torch.FloatTensor(len(batch),
batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - tone_padded.zero_() - language_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - bert_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - tone = row[4] - tone_padded[i, :tone.size(0)] = tone - - language = row[5] - language_padded[i, :language.size(0)] = language - - bert = row[6] - bert_padded[i, :, :bert.size(1)] = bert - - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, tone_padded, language_padded, bert_padded - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - if (len_bucket == 0): - continue - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - 
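-        # shuffle the fully-formed batches across buckets below, so each epoch
-        # sees a new batch order while every batch still holds similar-length samples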
- if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/XzJosh/otto-Bert-VITS2/text/symbols.py b/spaces/XzJosh/otto-Bert-VITS2/text/symbols.py deleted file mode 100644 index 9dfae4e633829f20c4fd767b1c7a9198911ed801..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/otto-Bert-VITS2/text/symbols.py +++ /dev/null @@ -1,51 +0,0 @@ -punctuation = ['!', '?', '…', ",", ".", "'", '-'] -pu_symbols = punctuation + ["SP", "UNK"] -pad = '_' - -# chinese -zh_symbols = ['E', 'En', 'a', 'ai', 'an', 'ang', 'ao', 'b', 'c', 'ch', 'd', 'e', 'ei', 'en', 'eng', 'er', 'f', 'g', 'h', - 'i', 'i0', 'ia', 'ian', 'iang', 'iao', 'ie', 'in', 'ing', 'iong', 'ir', 'iu', 'j', 'k', 'l', 'm', 'n', 'o', - 'ong', - 'ou', 'p', 'q', 'r', 's', 'sh', 't', 'u', 'ua', 'uai', 'uan', 'uang', 'ui', 'un', 'uo', 'v', 'van', 've', 'vn', - 'w', 'x', 'y', 'z', 'zh', - "AA", "EE", "OO"] -num_zh_tones = 6 - -# japanese -ja_symbols = ['I', 'N', 'U', 'a', 'b', 'by', 'ch', 'cl', 'd', 'dy', 'e', 'f', 'g', 'gy', 'h', 'hy', 'i', 'j', 'k', 'ky', - 'm', 'my', 'n', 'ny', 'o', 'p', 'py', 'r', 'ry', 's', 'sh', 't', 'ts', 'u', 'V', 'w', 'y', 'z'] -num_ja_tones = 1 - -# English -en_symbols = ['aa', 'ae', 'ah', 'ao', 'aw', 'ay', 'b', 'ch', 'd', 'dh', 'eh', 'er', 'ey', 'f', 'g', 'hh', 'ih', 'iy', - 'jh', 'k', 'l', 'm', 'n', 'ng', 'ow', 'oy', 'p', 'r', 's', - 'sh', 't', 'th', 'uh', 'uw', 'V', 'w', 'y', 'z', 'zh'] -num_en_tones = 4 - -# combine all symbols -normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols)) -symbols = [pad] + normal_symbols + pu_symbols -sil_phonemes_ids = [symbols.index(i) for i in pu_symbols] - -# combine all tones -num_tones = num_zh_tones + num_ja_tones + num_en_tones - -# language maps -language_id_map = { - 'ZH': 0, - "JA": 1, - "EN": 2 -} -num_languages = len(language_id_map.keys()) - -language_tone_start_map = { - 'ZH': 0, - "JA": num_zh_tones, - "EN": num_zh_tones + num_ja_tones -} - -if __name__ == '__main__': - a = set(zh_symbols) - b = set(en_symbols) - print(sorted(a&b)) - diff --git a/spaces/YazawaSunrise/so-vits-svc-LoveLive/Eng_docs.md b/spaces/YazawaSunrise/so-vits-svc-LoveLive/Eng_docs.md deleted file mode 100644 index 78f6db875daad0272f644f195b634a526e302adf..0000000000000000000000000000000000000000 --- a/spaces/YazawaSunrise/so-vits-svc-LoveLive/Eng_docs.md +++ /dev/null @@ -1,83 +0,0 @@ -# SoftVC VITS Singing Voice Conversion -## Updates -> According to incomplete statistics, it seems that training with multiple speakers may lead to **worsened leaking of voice timbre**. It is not recommended to train models with more than 5 speakers. The current suggestion is to try to train models with only a single speaker if you want to achieve a voice timbre that is more similar to the target. 
-> Fixed the issue with unwanted staccato, improving audio quality by a decent amount.\
-> The 2.0 version has been moved to the 2.0 branch.\
-> Version 3.0 uses the code structure of FreeVC, which isn't compatible with older versions.\
-> Compared to [DiffSVC](https://github.com/prophesier/diff-svc), DiffSVC performs much better when the training data is of extremely high quality, but this repository may perform better on datasets of lower quality. Additionally, this repository is much faster in terms of inference speed than DiffSVC.
-
-## Model Overview
-A singing voice conversion (SVC) model that uses the SoftVC encoder to extract features from the input audio, which are fed into VITS together with the F0 to replace the original input and achieve a voice conversion effect. The vocoder is also changed to [NSF HiFiGAN](https://github.com/openvpi/DiffSinger/tree/refactor/modules/nsf_hifigan) to fix the issue with unwanted staccato.
-## Notice
-+ The current branch is the 32kHz version, which requires less VRAM during inference, offers faster inference speeds, and uses datasets that take up less disk space. The 32kHz branch is therefore recommended.
-+ If you want to train 48kHz variant models, switch to the [main branch](https://github.com/innnky/so-vits-svc/tree/main).
-## Colab notebook script for dataset creation and training.
-[colab training notebook](https://colab.research.google.com/drive/1rCUOOVG7-XQlVZuWRAj5IpGrMM8t07pE?usp=sharing)
-
-## Required models
-+ soft vc hubert: [hubert-soft-0d54a1f4.pt](https://github.com/bshall/hubert/releases/download/v0.1/hubert-soft-0d54a1f4.pt)
-  + Place under `hubert`.
-+ Pretrained models [G_0.pth](https://huggingface.co/innnky/sovits_pretrained/resolve/main/G_0.pth) and [D_0.pth](https://huggingface.co/innnky/sovits_pretrained/resolve/main/D_0.pth)
-  + Place under `logs/32k`.
-  + Pretrained models are required because, from experiments, training from scratch can be rather unpredictable to say the least, while training with a pretrained model can greatly improve training speeds.
-  + The pretrained model includes 云灏, 即霜, 辉宇·星AI, 派蒙, and 绫地宁宁, covering the common ranges of both male and female voices, so it can be seen as a rather universal pretrained model.
-  + The pretrained model excludes the `optimizer speaker_embedding` section, so it can only be used for pretraining and cannot be used for inference directly.
-```shell
-# For simple downloading.
-# hubert
-wget -P hubert/ https://github.com/bshall/hubert/releases/download/v0.1/hubert-soft-0d54a1f4.pt
-# G&D pretrained models
-wget -P logs/32k/ https://huggingface.co/innnky/sovits_pretrained/resolve/main/G_0.pth
-wget -P logs/32k/ https://huggingface.co/innnky/sovits_pretrained/resolve/main/D_0.pth
-```
-
-## Dataset preparation
-All that is required is that the data be put under the `dataset_raw` folder in the structure shown below.
-```shell
-dataset_raw
-├───speaker0
-│   ├───xxx1-xxx1.wav
-│   ├───...
-│   └───Lxx-0xx8.wav
-└───speaker1
-    ├───xx2-0xxx2.wav
-    ├───...
-    └───xxx7-xxx007.wav
-```
-
-## Data pre-processing.
-1. Resample to 32kHz
-```shell
-python resample.py
-```
-2. Automatically split the data into training, validation, and test sets, and automatically generate the configuration files.
-```shell
-python preprocess_flist_config.py
-# Notice.
-# The n_speakers value in the config will be set automatically according to the number of speakers in the dataset.
-# To reserve space for speakers added to the dataset later, the n_speakers value will be set to twice the actual amount.
-# If you want even more space for adding more data, you can edit the n_speakers value in the config after running this step.
-# This cannot be changed after training starts.
-```
-3. Generate hubert and F0 features.
-```shell
-python preprocess_hubert_f0.py
-```
-After running the step above, the `dataset` folder will contain all the pre-processed data, and you can delete the `dataset_raw` folder after that.
-
-## Training.
-```shell
-python train.py -c configs/config.json -m 32k
-```
-
-## Inferencing.
-
-Use [inference_main.py](inference_main.py)
-+ Edit `model_path` to your newest checkpoint.
-+ Place the input audio under the `raw` folder.
-+ Change `clean_names` to the output file name.
-+ Use `trans` to edit the pitch shifting amount (semitones).
-+ Change `spk_list` to the speaker name.
diff --git a/spaces/YazawaSunrise/so-vits-svc-LoveLive/modules.py b/spaces/YazawaSunrise/so-vits-svc-LoveLive/modules.py
deleted file mode 100644
index 52ee14e41a5b6d67d875d1b694aecd2a51244897..0000000000000000000000000000000000000000
--- a/spaces/YazawaSunrise/so-vits-svc-LoveLive/modules.py
+++ /dev/null
@@ -1,342 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
-    def __init__(self, channels, eps=1e-5):
-        super().__init__()
-        self.channels = channels
-        self.eps = eps
-
-        self.gamma = nn.Parameter(torch.ones(channels))
-        self.beta = nn.Parameter(torch.zeros(channels))
-
-    def forward(self, x):
-        x = x.transpose(1, -1)
-        x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
-        return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
-    def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
-        super().__init__()
-        self.in_channels = in_channels
-        self.hidden_channels = hidden_channels
-        self.out_channels = out_channels
-        self.kernel_size = kernel_size
-        self.n_layers = n_layers
-        self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
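-        # the stack built below: the first conv maps in_channels -> hidden_channels,
-        # and the remaining n_layers - 1 convs stay at hidden_channels before the
-        # final 1x1 projection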
-
-        self.conv_layers = nn.ModuleList()
-        self.norm_layers = nn.ModuleList()
-        self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
-        self.norm_layers.append(LayerNorm(hidden_channels))
-        self.relu_drop = nn.Sequential(
-            nn.ReLU(),
-            nn.Dropout(p_dropout))
-        for _ in range(n_layers-1):
-            self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
-            self.norm_layers.append(LayerNorm(hidden_channels))
-        self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-        self.proj.weight.data.zero_()
-        self.proj.bias.data.zero_()
-
-    def forward(self, x, x_mask):
-        x_org = x
-        for i in range(self.n_layers):
-            x = self.conv_layers[i](x * x_mask)
-            x = self.norm_layers[i](x)
-            x = self.relu_drop(x)
-        x = x_org + self.proj(x)
-        return x * x_mask
-
-
-class DDSConv(nn.Module):
-    """
-    Dilated and Depth-Separable Convolution
-    """
-    def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
-        super().__init__()
-        self.channels = channels
-        self.kernel_size = kernel_size
-        self.n_layers = n_layers
-        self.p_dropout = p_dropout
-
-        self.drop = nn.Dropout(p_dropout)
-        self.convs_sep = nn.ModuleList()
-        self.convs_1x1 = nn.ModuleList()
-        self.norms_1 = nn.ModuleList()
-        self.norms_2 = nn.ModuleList()
-        for i in range(n_layers):
-            dilation = kernel_size ** i
-            padding = (kernel_size * dilation - dilation) // 2
-            self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
-                                            groups=channels, dilation=dilation, padding=padding))
-            self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
-            self.norms_1.append(LayerNorm(channels))
-            self.norms_2.append(LayerNorm(channels))
-
-    def forward(self, x, x_mask, g=None):
-        if g is not None:
-            x = x + g
-        for i in range(self.n_layers):
-            y = self.convs_sep[i](x * x_mask)
-            y = self.norms_1[i](y)
-            y = F.gelu(y)
-            y = self.convs_1x1[i](y)
-            y = self.norms_2[i](y)
-            y = F.gelu(y)
-            y = self.drop(y)
-            x = x + y
-        return x * x_mask
-
-
-class WN(torch.nn.Module):
-    def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
-        super(WN, self).__init__()
-        assert kernel_size % 2 == 1
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
-        self.dilation_rate = dilation_rate
-        self.n_layers = n_layers
-        self.gin_channels = gin_channels
-        self.p_dropout = p_dropout
-
-        self.in_layers = torch.nn.ModuleList()
-        self.res_skip_layers = torch.nn.ModuleList()
-        self.drop = nn.Dropout(p_dropout)
-
-        if gin_channels != 0:
-            cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
-            self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
-        for i in range(n_layers):
-            dilation = dilation_rate ** i
-            padding = int((kernel_size * dilation - dilation) / 2)
-            in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
-                                       dilation=dilation, padding=padding)
-            in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
-            self.in_layers.append(in_layer)
-
-            # the last layer only needs the skip part, not the residual half
-            if i < n_layers - 1:
-                res_skip_channels = 2 * hidden_channels
-            else:
-                res_skip_channels = hidden_channels
-
-            res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
-            res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
-            self.res_skip_layers.append(res_skip_layer)
-
-    def forward(self, x, x_mask, g=None, **kwargs):
-        output = torch.zeros_like(x)
-        n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
-        if g is not None:
-            g = self.cond_layer(g)
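-            # cond_layer emits 2*hidden_channels of conditioning for every layer
-            # in a single pass; the per-layer slice is taken via cond_offset below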
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x diff --git a/spaces/Yogesh19/Voiceai/app.py b/spaces/Yogesh19/Voiceai/app.py deleted file mode 100644 index ca8b6d40b4ab898c70da92f4a4298de2baf703dc..0000000000000000000000000000000000000000 --- a/spaces/Yogesh19/Voiceai/app.py +++ /dev/null @@ -1,164 +0,0 @@ -import os -import re -import requests -import json -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') -PLAY_HT_API_KEY=os.getenv('PLAY_HT_API_KEY') -PLAY_HT_USER_ID=os.getenv('PLAY_HT_USER_ID') - -PLAY_HT_VOICE_ID=os.getenv('PLAY_HT_VOICE_ID') -play_ht_api_get_audio_url = "https://play.ht/api/v2/tts" - - -template = """You are a helpful assistant to answer user queries. 
-{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -headers = { - "accept": "text/event-stream", - "content-type": "application/json", - "AUTHORIZATION": "Bearer "+ PLAY_HT_API_KEY, - "X-USER-ID": PLAY_HT_USER_ID -} - - -def get_payload(text): - return { - "text": text, - "voice": PLAY_HT_VOICE_ID, - "quality": "medium", - "output_format": "mp3", - "speed": 1, - "sample_rate": 24000, - "seed": None, - "temperature": None - } - -def get_generated_audio(text): - payload = get_payload(text) - generated_response = {} - try: - response = requests.post(play_ht_api_get_audio_url, json=payload, headers=headers) - response.raise_for_status() - generated_response["type"]= 'SUCCESS' - generated_response["response"] = response.text - except requests.exceptions.RequestException as e: - generated_response["type"]= 'ERROR' - try: - response_text = json.loads(response.text) - if response_text['error_message']: - generated_response["response"] = response_text['error_message'] - else: - generated_response["response"] = response.text - except Exception as e: - generated_response["response"] = response.text - except Exception as e: - generated_response["type"]= 'ERROR' - generated_response["response"] = response.text - return generated_response - -def extract_urls(text): - # Define the regex pattern for URLs - url_pattern = r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+[/\w\.-]*' - - # Find all occurrences of URLs in the text - urls = re.findall(url_pattern, text) - - return urls - -def get_audio_reply_for_question(text): - generated_audio_event = get_generated_audio(text) - #From get_generated_audio, you will get events in a string format, from that we need to extract the url - final_response = { - "audio_url": '', - "message": '' - } - if generated_audio_event["type"] == 'SUCCESS': - audio_urls = extract_urls(generated_audio_event["response"]) - if len(audio_urls) == 0: - final_response['message'] = "No audio file link found in generated event" - else: - final_response['audio_url'] = audio_urls[-1] - else: - final_response['message'] = generated_audio_event['response'] - return final_response - -def download_url(url): - try: - # Send a GET request to the URL to fetch the content - final_response = { - 'content':'', - 'error':'' - } - response = requests.get(url) - # Check if the request was successful (status code 200) - if response.status_code == 200: - final_response['content'] = response.content - else: - final_response['error'] = f"Failed to download the URL. Status code: {response.status_code}" - except Exception as e: - final_response['error'] = f"Failed to download the URL. 
Error: {e}" - return final_response - -def get_filename_from_url(url): - # Use os.path.basename() to extract the file name from the URL - file_name = os.path.basename(url) - return file_name - -def get_text_response(user_message): - response = llm_chain.predict(user_message = user_message) - return response - -def get_text_response_and_audio_response(user_message): - response = get_text_response(user_message) # Getting the reply from Open AI - audio_reply_for_question_response = get_audio_reply_for_question(response) - final_response = { - 'output_file_path': '', - 'message':'' - } - audio_url = audio_reply_for_question_response['audio_url'] - if audio_url: - output_file_path=get_filename_from_url(audio_url) - download_url_response = download_url(audio_url) - audio_content = download_url_response['content'] - if audio_content: - with open(output_file_path, "wb") as audio_file: - audio_file.write(audio_content) - final_response['output_file_path'] = output_file_path - else: - final_response['message'] = download_url_response['error'] - else: - final_response['message'] = audio_reply_for_question_response['message'] - return final_response - -def chat_bot_response(message, history): - text_and_audio_response = get_text_response_and_audio_response(message) - output_file_path = text_and_audio_response['output_file_path'] - if output_file_path: - return (text_and_audio_response['output_file_path'],) - else: - return text_and_audio_response['message'] - -demo = gr.ChatInterface(chat_bot_response,examples=["How are you doing?","What are your interests?","Which places do you like to visit?"]) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Ashaar/poetry_diacritizer/models/gpt.py b/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Ashaar/poetry_diacritizer/models/gpt.py deleted file mode 100644 index 71473c3c34a4013466552026fd562cfa8d393384..0000000000000000000000000000000000000000 --- a/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Ashaar/poetry_diacritizer/models/gpt.py +++ /dev/null @@ -1,83 +0,0 @@ -from typing import List -from torch import nn -import torch -from pathlib import Path -import json -from .gpt_model import Model, HParams - - -class GPTModel(nn.Module): - def __init__(self, path, n_layer=-1, freeze=True, use_lstm=False): - super().__init__() - root = Path(path) - - params = json.loads((root / "params.json").read_text()) - hparams = params["hparams"] - hparams.setdefault("n_hidden", hparams["n_embed"]) - self.model = Model(HParams(**hparams)) - state = torch.load(root / "model.pt", map_location="cpu") - state_dict = self.fixed_state_dict(state["state_dict"]) - self.model.load_state_dict(state_dict) - self.activation = {} - self.freeze = freeze - self.n_layer = n_layer - if self.freeze: - for param in self.model.parameters(): - param.requires_grad = False - - self.activation = {} - self.use_lstm = use_lstm - self.set_hook(self.n_layer) - self.in_fc_layer = 512 if self.use_lstm else 768 - self.lstm1 = nn.LSTM( - 768, - 256, - bidirectional=True, - batch_first=True, - ) - self.lstm2 = nn.LSTM( - 512, - 256, - bidirectional=True, - batch_first=True, - ) - self.lstm3 = nn.LSTM( - 512, - 256, - bidirectional=True, - batch_first=True, - ) - self.fc = nn.Linear(self.in_fc_layer, 17) - - def get_activation(self, name): - def hook(model, input, output): - self.activation[name] = output[0].detach() - - return hook - - def set_hook(self, 
n_layer=0): - self.model.blocks[n_layer].register_forward_hook(self.get_activation("feats")) - - def fixed_state_dict(self, state_dict): - if all(k.startswith("module.") for k in state_dict): - # legacy multi-GPU format - state_dict = {k[len("module.") :]: v for k, v in state_dict.items()} - return state_dict - - def forward(self, src: torch.Tensor, lengths: torch.Tensor, target=None): - - # logits shape [batch_size, 256, 500] - logits = self.model(src)["logits"] - logits = self.activation["feats"] - - if self.use_lstm: - x, (h, cn) = self.lstm1(logits) - x, (h, cn) = self.lstm2(x) - x, (h, cn) = self.lstm3(x) - else: - x = logits - predictions = self.fc(x) - - output = {"diacritics": predictions} - - return output diff --git a/spaces/abhibisht89/Donut_DocVQA/app.py b/spaces/abhibisht89/Donut_DocVQA/app.py deleted file mode 100644 index a15c5281026a1cee44d573f0c67de3c48a6ca252..0000000000000000000000000000000000000000 --- a/spaces/abhibisht89/Donut_DocVQA/app.py +++ /dev/null @@ -1,56 +0,0 @@ -import re -import gradio as gr - -from transformers import DonutProcessor, VisionEncoderDecoderModel -import torch - -processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa") -model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa") -device = "cuda" if torch.cuda.is_available() else "cpu" -model.to(device) - - -def doc_process(image,question): - # prepare decoder inputs - task_prompt = "{user_input}" - prompt = task_prompt.replace("{user_input}", question) - decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids - - pixel_values = processor(image, return_tensors="pt").pixel_values - - - outputs = model.generate( - pixel_values.to(device), - decoder_input_ids=decoder_input_ids.to(device), - max_length=model.decoder.config.max_position_embeddings, - early_stopping=True, - pad_token_id=processor.tokenizer.pad_token_id, - eos_token_id=processor.tokenizer.eos_token_id, - use_cache=True, - num_beams=1, - bad_words_ids=[[processor.tokenizer.unk_token_id]], - return_dict_in_generate=True, - ) - - sequence = processor.batch_decode(outputs.sequences)[0] - sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "") - sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token - #print(processor.token2json(sequence)) - return processor.token2json(sequence) - -description = "Gradio Demo for Donut 🍩, inspired by Nielsr demo" - -article = "
Donut: OCR-free Document Understanding Transformer | Github Repo
" - -demo = gr.Interface( - fn= doc_process, - inputs=["image", "text"], - outputs="json", - title="Donut 🍩 for DocVQA", - description=description, - article=article, - enable_queue=True, - examples=[["example_1.png", "What is date of birth?"], ["example_1.png", "What is Patient initials?"]], - cache_examples=False) - -demo.launch() \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/utils/env.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/utils/env.py deleted file mode 100644 index e3f0d92529e193e6d8339419bcd9bed7901a7769..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/utils/env.py +++ /dev/null @@ -1,95 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""This file holding some environment constant for sharing by other files.""" - -import os.path as osp -import subprocess -import sys -from collections import defaultdict - -import cv2 -import torch - -import annotator.uniformer.mmcv as mmcv -from .parrots_wrapper import get_build_config - - -def collect_env(): - """Collect the information of the running environments. - - Returns: - dict: The environment information. The following fields are contained. - - - sys.platform: The variable of ``sys.platform``. - - Python: Python version. - - CUDA available: Bool, indicating if CUDA is available. - - GPU devices: Device type of each GPU. - - CUDA_HOME (optional): The env var ``CUDA_HOME``. - - NVCC (optional): NVCC version. - - GCC: GCC version, "n/a" if GCC is not installed. - - PyTorch: PyTorch version. - - PyTorch compiling details: The output of \ - ``torch.__config__.show()``. - - TorchVision (optional): TorchVision version. - - OpenCV: OpenCV version. - - MMCV: MMCV version. - - MMCV Compiler: The GCC version for compiling MMCV ops. - - MMCV CUDA Compiler: The CUDA version for compiling MMCV ops. 
- """ - env_info = {} - env_info['sys.platform'] = sys.platform - env_info['Python'] = sys.version.replace('\n', '') - - cuda_available = torch.cuda.is_available() - env_info['CUDA available'] = cuda_available - - if cuda_available: - devices = defaultdict(list) - for k in range(torch.cuda.device_count()): - devices[torch.cuda.get_device_name(k)].append(str(k)) - for name, device_ids in devices.items(): - env_info['GPU ' + ','.join(device_ids)] = name - - from annotator.uniformer.mmcv.utils.parrots_wrapper import _get_cuda_home - CUDA_HOME = _get_cuda_home() - env_info['CUDA_HOME'] = CUDA_HOME - - if CUDA_HOME is not None and osp.isdir(CUDA_HOME): - try: - nvcc = osp.join(CUDA_HOME, 'bin/nvcc') - nvcc = subprocess.check_output( - f'"{nvcc}" -V | tail -n1', shell=True) - nvcc = nvcc.decode('utf-8').strip() - except subprocess.SubprocessError: - nvcc = 'Not Available' - env_info['NVCC'] = nvcc - - try: - gcc = subprocess.check_output('gcc --version | head -n1', shell=True) - gcc = gcc.decode('utf-8').strip() - env_info['GCC'] = gcc - except subprocess.CalledProcessError: # gcc is unavailable - env_info['GCC'] = 'n/a' - - env_info['PyTorch'] = torch.__version__ - env_info['PyTorch compiling details'] = get_build_config() - - try: - import torchvision - env_info['TorchVision'] = torchvision.__version__ - except ModuleNotFoundError: - pass - - env_info['OpenCV'] = cv2.__version__ - - env_info['MMCV'] = mmcv.__version__ - - try: - from annotator.uniformer.mmcv.ops import get_compiler_version, get_compiling_cuda_version - except ModuleNotFoundError: - env_info['MMCV Compiler'] = 'n/a' - env_info['MMCV CUDA Compiler'] = 'n/a' - else: - env_info['MMCV Compiler'] = get_compiler_version() - env_info['MMCV CUDA Compiler'] = get_compiling_cuda_version() - - return env_info diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/coco.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/coco.py deleted file mode 100644 index 65802369de9f82b70e4dcee96c22d6a886120aa1..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/coco.py +++ /dev/null @@ -1,546 +0,0 @@ -import itertools -import logging -import os.path as osp -import tempfile -from collections import OrderedDict - -import annotator.uniformer.mmcv as mmcv -import numpy as np -import pycocotools -from annotator.uniformer.mmcv.utils import print_log -from pycocotools.coco import COCO -from pycocotools.cocoeval import COCOeval -from terminaltables import AsciiTable - -from annotator.uniformer.mmdet.core import eval_recalls -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class CocoDataset(CustomDataset): - - CLASSES = ('person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', - 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', - 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', - 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', - 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', - 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', - 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', - 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', - 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', - 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', - 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', - 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', - 'oven', 'toaster', 
'sink', 'refrigerator', 'book', 'clock', - 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush') - - def load_annotations(self, ann_file): - """Load annotation from COCO style annotation file. - - Args: - ann_file (str): Path of annotation file. - - Returns: - list[dict]: Annotation info from COCO api. - """ - if not getattr(pycocotools, '__version__', '0') >= '12.0.2': - raise AssertionError( - 'Incompatible version of pycocotools is installed. ' - 'Run pip uninstall pycocotools first. Then run pip ' - 'install mmpycocotools to install open-mmlab forked ' - 'pycocotools.') - - self.coco = COCO(ann_file) - self.cat_ids = self.coco.get_cat_ids(cat_names=self.CLASSES) - self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)} - self.img_ids = self.coco.get_img_ids() - data_infos = [] - total_ann_ids = [] - for i in self.img_ids: - info = self.coco.load_imgs([i])[0] - info['filename'] = info['file_name'] - data_infos.append(info) - ann_ids = self.coco.get_ann_ids(img_ids=[i]) - total_ann_ids.extend(ann_ids) - assert len(set(total_ann_ids)) == len( - total_ann_ids), f"Annotation ids in '{ann_file}' are not unique!" - return data_infos - - def get_ann_info(self, idx): - """Get COCO annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - img_id = self.data_infos[idx]['id'] - ann_ids = self.coco.get_ann_ids(img_ids=[img_id]) - ann_info = self.coco.load_anns(ann_ids) - return self._parse_ann_info(self.data_infos[idx], ann_info) - - def get_cat_ids(self, idx): - """Get COCO category ids by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - img_id = self.data_infos[idx]['id'] - ann_ids = self.coco.get_ann_ids(img_ids=[img_id]) - ann_info = self.coco.load_anns(ann_ids) - return [ann['category_id'] for ann in ann_info] - - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - # obtain images that contain annotation - ids_with_ann = set(_['image_id'] for _ in self.coco.anns.values()) - # obtain images that contain annotations of the required categories - ids_in_cat = set() - for i, class_id in enumerate(self.cat_ids): - ids_in_cat |= set(self.coco.cat_img_map[class_id]) - # merge the image id sets of the two conditions and use the merged set - # to filter out images if self.filter_empty_gt=True - ids_in_cat &= ids_with_ann - - valid_img_ids = [] - for i, img_info in enumerate(self.data_infos): - img_id = self.img_ids[i] - if self.filter_empty_gt and img_id not in ids_in_cat: - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - valid_img_ids.append(img_id) - self.img_ids = valid_img_ids - return valid_inds - - def _parse_ann_info(self, img_info, ann_info): - """Parse bbox and mask annotation. - - Args: - ann_info (list[dict]): Annotation info of an image. - with_mask (bool): Whether to parse mask annotations. - - Returns: - dict: A dict containing the following keys: bboxes, bboxes_ignore,\ - labels, masks, seg_map. "masks" are raw annotations and not \ - decoded into binary masks. 
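-
-        Example:
-            >>> # a sketch of the returned structure; names here are illustrative
-            >>> ann = dataset._parse_ann_info(img_info, ann_info)
-            >>> sorted(ann.keys())
-            ['bboxes', 'bboxes_ignore', 'labels', 'masks', 'seg_map']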
- """ - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - inter_w = max(0, min(x1 + w, img_info['width']) - max(x1, 0)) - inter_h = max(0, min(y1 + h, img_info['height']) - max(y1, 0)) - if inter_w * inter_h == 0: - continue - if ann['area'] <= 0 or w < 1 or h < 1: - continue - if ann['category_id'] not in self.cat_ids: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_labels.append(self.cat2label[ann['category_id']]) - gt_masks_ann.append(ann.get('segmentation', None)) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - seg_map = img_info['filename'].replace('jpg', 'png') - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_masks_ann, - seg_map=seg_map) - - return ann - - def xyxy2xywh(self, bbox): - """Convert ``xyxy`` style bounding boxes to ``xywh`` style for COCO - evaluation. - - Args: - bbox (numpy.ndarray): The bounding boxes, shape (4, ), in - ``xyxy`` order. - - Returns: - list[float]: The converted bounding boxes, in ``xywh`` order. - """ - - _bbox = bbox.tolist() - return [ - _bbox[0], - _bbox[1], - _bbox[2] - _bbox[0], - _bbox[3] - _bbox[1], - ] - - def _proposal2json(self, results): - """Convert proposal results to COCO json style.""" - json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - bboxes = results[idx] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = 1 - json_results.append(data) - return json_results - - def _det2json(self, results): - """Convert detection results to COCO json style.""" - json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - result = results[idx] - for label in range(len(result)): - bboxes = result[label] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = self.cat_ids[label] - json_results.append(data) - return json_results - - def _segm2json(self, results): - """Convert instance segmentation results to COCO json style.""" - bbox_json_results = [] - segm_json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - det, seg = results[idx] - for label in range(len(det)): - # bbox results - bboxes = det[label] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = self.cat_ids[label] - bbox_json_results.append(data) - - # segm results - # some detectors use different scores for bbox and mask - if isinstance(seg, tuple): - segms = seg[0][label] - mask_score = seg[1][label] - else: - segms = seg[label] - mask_score = [bbox[4] for bbox in bboxes] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = 
float(mask_score[i]) - data['category_id'] = self.cat_ids[label] - if isinstance(segms[i]['counts'], bytes): - segms[i]['counts'] = segms[i]['counts'].decode() - data['segmentation'] = segms[i] - segm_json_results.append(data) - return bbox_json_results, segm_json_results - - def results2json(self, results, outfile_prefix): - """Dump the detection results to a COCO style json file. - - There are 3 types of results: proposals, bbox predictions, mask - predictions, and they have different data types. This method will - automatically recognize the type, and dump them to json files. - - Args: - results (list[list | tuple | ndarray]): Testing results of the - dataset. - outfile_prefix (str): The filename prefix of the json files. If the - prefix is "somepath/xxx", the json files will be named - "somepath/xxx.bbox.json", "somepath/xxx.segm.json", - "somepath/xxx.proposal.json". - - Returns: - dict[str: str]: Possible keys are "bbox", "segm", "proposal", and \ - values are corresponding filenames. - """ - result_files = dict() - if isinstance(results[0], list): - json_results = self._det2json(results) - result_files['bbox'] = f'{outfile_prefix}.bbox.json' - result_files['proposal'] = f'{outfile_prefix}.bbox.json' - mmcv.dump(json_results, result_files['bbox']) - elif isinstance(results[0], tuple): - json_results = self._segm2json(results) - result_files['bbox'] = f'{outfile_prefix}.bbox.json' - result_files['proposal'] = f'{outfile_prefix}.bbox.json' - result_files['segm'] = f'{outfile_prefix}.segm.json' - mmcv.dump(json_results[0], result_files['bbox']) - mmcv.dump(json_results[1], result_files['segm']) - elif isinstance(results[0], np.ndarray): - json_results = self._proposal2json(results) - result_files['proposal'] = f'{outfile_prefix}.proposal.json' - mmcv.dump(json_results, result_files['proposal']) - else: - raise TypeError('invalid type of results') - return result_files - - def fast_eval_recall(self, results, proposal_nums, iou_thrs, logger=None): - gt_bboxes = [] - for i in range(len(self.img_ids)): - ann_ids = self.coco.get_ann_ids(img_ids=self.img_ids[i]) - ann_info = self.coco.load_anns(ann_ids) - if len(ann_info) == 0: - gt_bboxes.append(np.zeros((0, 4))) - continue - bboxes = [] - for ann in ann_info: - if ann.get('ignore', False) or ann['iscrowd']: - continue - x1, y1, w, h = ann['bbox'] - bboxes.append([x1, y1, x1 + w, y1 + h]) - bboxes = np.array(bboxes, dtype=np.float32) - if bboxes.shape[0] == 0: - bboxes = np.zeros((0, 4)) - gt_bboxes.append(bboxes) - - recalls = eval_recalls( - gt_bboxes, results, proposal_nums, iou_thrs, logger=logger) - ar = recalls.mean(axis=1) - return ar - - def format_results(self, results, jsonfile_prefix=None, **kwargs): - """Format the results to json (standard format for COCO evaluation). - - Args: - results (list[tuple | numpy.ndarray]): Testing results of the - dataset. - jsonfile_prefix (str | None): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing \ - the json filepaths, tmp_dir is the temporal directory created \ - for saving json files when jsonfile_prefix is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. 
- format(len(results), len(self))) - - if jsonfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - jsonfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - result_files = self.results2json(results, jsonfile_prefix) - return result_files, tmp_dir - - def evaluate(self, - results, - metric='bbox', - logger=None, - jsonfile_prefix=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=None, - metric_items=None): - """Evaluation in COCO protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str | None): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - classwise (bool): Whether to evaluating the AP for each class. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float], optional): IoU threshold used for - evaluating recalls/mAPs. If set to a list, the average of all - IoUs will also be computed. If not specified, [0.50, 0.55, - 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95] will be used. - Default: None. - metric_items (list[str] | str, optional): Metric items that will - be returned. If not specified, ``['AR@100', 'AR@300', - 'AR@1000', 'AR_s@1000', 'AR_m@1000', 'AR_l@1000' ]`` will be - used when ``metric=='proposal'``, ``['mAP', 'mAP_50', 'mAP_75', - 'mAP_s', 'mAP_m', 'mAP_l']`` will be used when - ``metric=='bbox' or metric=='segm'``. - - Returns: - dict[str, float]: COCO style evaluation metric. - """ - - metrics = metric if isinstance(metric, list) else [metric] - allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast'] - for metric in metrics: - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - if iou_thrs is None: - iou_thrs = np.linspace( - .5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True) - if metric_items is not None: - if not isinstance(metric_items, list): - metric_items = [metric_items] - - result_files, tmp_dir = self.format_results(results, jsonfile_prefix) - - eval_results = OrderedDict() - cocoGt = self.coco - for metric in metrics: - msg = f'Evaluating {metric}...' 
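-            # when no logger is configured, prepend a newline so each metric's
-            # summary block is visually separated on stdout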
- if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - if metric == 'proposal_fast': - ar = self.fast_eval_recall( - results, proposal_nums, iou_thrs, logger='silent') - log_msg = [] - for i, num in enumerate(proposal_nums): - eval_results[f'AR@{num}'] = ar[i] - log_msg.append(f'\nAR@{num}\t{ar[i]:.4f}') - log_msg = ''.join(log_msg) - print_log(log_msg, logger=logger) - continue - - if metric not in result_files: - raise KeyError(f'{metric} is not in results') - try: - cocoDt = cocoGt.loadRes(result_files[metric]) - except IndexError: - print_log( - 'The testing results of the whole dataset are empty.', - logger=logger, - level=logging.ERROR) - break - - iou_type = 'bbox' if metric == 'proposal' else metric - cocoEval = COCOeval(cocoGt, cocoDt, iou_type) - cocoEval.params.catIds = self.cat_ids - cocoEval.params.imgIds = self.img_ids - cocoEval.params.maxDets = list(proposal_nums) - cocoEval.params.iouThrs = iou_thrs - # mapping of cocoEval.stats - coco_metric_names = { - 'mAP': 0, - 'mAP_50': 1, - 'mAP_75': 2, - 'mAP_s': 3, - 'mAP_m': 4, - 'mAP_l': 5, - 'AR@100': 6, - 'AR@300': 7, - 'AR@1000': 8, - 'AR_s@1000': 9, - 'AR_m@1000': 10, - 'AR_l@1000': 11 - } - if metric_items is not None: - for metric_item in metric_items: - if metric_item not in coco_metric_names: - raise KeyError( - f'metric item {metric_item} is not supported') - - if metric == 'proposal': - cocoEval.params.useCats = 0 - cocoEval.evaluate() - cocoEval.accumulate() - cocoEval.summarize() - if metric_items is None: - metric_items = [ - 'AR@100', 'AR@300', 'AR@1000', 'AR_s@1000', - 'AR_m@1000', 'AR_l@1000' - ] - - for item in metric_items: - val = float( - f'{cocoEval.stats[coco_metric_names[item]]:.3f}') - eval_results[item] = val - else: - cocoEval.evaluate() - cocoEval.accumulate() - cocoEval.summarize() - if classwise: # Compute per-category AP - # from https://github.com/facebookresearch/detectron2/ - precisions = cocoEval.eval['precision'] - # precision: (iou, recall, cls, area range, max dets) - assert len(self.cat_ids) == precisions.shape[2] - - results_per_category = [] - for idx, catId in enumerate(self.cat_ids): - # area range index 0: all area ranges - # max dets index -1: typically 100 per image - nm = self.coco.loadCats(catId)[0] - precision = precisions[:, :, idx, 0, -1] - precision = precision[precision > -1] - if precision.size: - ap = np.mean(precision) - else: - ap = float('nan') - results_per_category.append( - (f'{nm["name"]}', f'{float(ap):0.3f}')) - - num_columns = min(6, len(results_per_category) * 2) - results_flatten = list( - itertools.chain(*results_per_category)) - headers = ['category', 'AP'] * (num_columns // 2) - results_2d = itertools.zip_longest(*[ - results_flatten[i::num_columns] - for i in range(num_columns) - ]) - table_data = [headers] - table_data += [result for result in results_2d] - table = AsciiTable(table_data) - print_log('\n' + table.table, logger=logger) - - if metric_items is None: - metric_items = [ - 'mAP', 'mAP_50', 'mAP_75', 'mAP_s', 'mAP_m', 'mAP_l' - ] - - for metric_item in metric_items: - key = f'{metric}_{metric_item}' - val = float( - f'{cocoEval.stats[coco_metric_names[metric_item]]:.3f}' - ) - eval_results[key] = val - ap = cocoEval.stats[:6] - eval_results[f'{metric}_mAP_copypaste'] = ( - f'{ap[0]:.3f} {ap[1]:.3f} {ap[2]:.3f} {ap[3]:.3f} ' - f'{ap[4]:.3f} {ap[5]:.3f}') - if tmp_dir is not None: - tmp_dir.cleanup() - return eval_results diff --git 
a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/utils/util_random.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/utils/util_random.py deleted file mode 100644 index e313e9947bb3232a9458878fd219e1594ab93d57..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/utils/util_random.py +++ /dev/null @@ -1,33 +0,0 @@ -"""Helpers for random number generators.""" -import numpy as np - - -def ensure_rng(rng=None): - """Coerces input into a random number generator. - - If the input is None, then a global random state is returned. - - If the input is a numeric value, then that is used as a seed to construct a - random state. Otherwise the input is returned as-is. - - Adapted from [1]_. - - Args: - rng (int | numpy.random.RandomState | None): - if None, then defaults to the global rng. Otherwise this can be an - integer or a RandomState class - Returns: - (numpy.random.RandomState) : rng - - a numpy random number generator - - References: - .. [1] https://gitlab.kitware.com/computer-vision/kwarray/blob/master/kwarray/util_random.py#L270 # noqa: E501 - """ - - if rng is None: - rng = np.random.mtrand._rand - elif isinstance(rng, int): - rng = np.random.RandomState(rng) - else: - rng = rng - return rng diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/schedules/schedule_160k.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/schedules/schedule_160k.py deleted file mode 100644 index 826aca61039f6e486c5d16ce8538437710964800..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/schedules/schedule_160k.py +++ /dev/null @@ -1,20 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. 
- * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala - * Modified from UniFormer repo: From https://github.com/Sense-X/UniFormer - * Apache-2.0 license -''' -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=160000) -checkpoint_config = dict(by_epoch=False, interval=16000) -evaluation = dict(interval=16000, metric='mIoU') diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/darwin/__init__.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/darwin/__init__.py deleted file mode 100644 index 075f48532d4d6483a9095fc2e379df3796a3e26e..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/darwin/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .cocoapy import * diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/codecs/__init__.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/codecs/__init__.py deleted file mode 100644 index 40387e3f334f93008a2ce3f0c1bf7bab6bfa26e0..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/codecs/__init__.py +++ /dev/null @@ -1,96 +0,0 @@ -from pyglet.util import CodecRegistry, Decoder, Encoder -from .base import * - -import pyglet - - -_debug = pyglet.options['debug_media'] - -registry = CodecRegistry() -add_decoders = registry.add_decoders -add_encoders = registry.add_encoders -get_decoders = registry.get_decoders -get_encoders = registry.get_encoders - - -class MediaDecoder(Decoder): - - def decode(self, filename, file, streaming): - """Read the given file object and return an instance of `Source` - or `StreamingSource`. - Throws DecodeException if there is an error. `filename` - can be a file type hint. - """ - raise NotImplementedError() - - -class MediaEncoder(Encoder): - - def encode(self, source, filename, file): - """Encode the given source to the given file. `filename` - provides a hint to the file format desired. options are - encoder-specific, and unknown options should be ignored or - issue warnings. - """ - raise NotImplementedError() - - -def add_default_codecs(): - # Add all bundled codecs. These should be listed in order of - # preference. This is called automatically by pyglet.media. - - try: - from . import wave - registry.add_decoders(wave) - registry.add_encoders(wave) - except ImportError: - pass - - if pyglet.compat_platform.startswith('linux'): - try: - from . import gstreamer - registry.add_decoders(gstreamer) - except ImportError: - pass - - try: - if pyglet.compat_platform in ('win32', 'cygwin'): - from pyglet.libs.win32.constants import WINDOWS_VISTA_OR_GREATER - if WINDOWS_VISTA_OR_GREATER: # Supports Vista and above. - from . import wmf - registry.add_decoders(wmf) - except ImportError: - pass - - try: - if have_ffmpeg(): - from . 
import ffmpeg - registry.add_decoders(ffmpeg) - except ImportError: - pass - - try: - from . import pyogg - registry.add_decoders(pyogg) - except ImportError: - pass - - -def have_ffmpeg(): - """Check if FFmpeg library is available. - - Returns: - bool: True if FFmpeg is found. - - .. versionadded:: 1.4 - """ - try: - from . import ffmpeg_lib - if _debug: - print('FFmpeg available, using it to load media files. Versions: {}'.format(ffmpeg_lib.compat.versions)) - return True - - except (ImportError, FileNotFoundError, AttributeError): - if _debug: - print('FFmpeg not available.') - return False diff --git a/spaces/akhaliq/GPEN/face_enhancement.py b/spaces/akhaliq/GPEN/face_enhancement.py deleted file mode 100644 index 42f45b8149a9d88a19cdb94eb9146231fff2ce10..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/GPEN/face_enhancement.py +++ /dev/null @@ -1,145 +0,0 @@ -''' -@paper: GAN Prior Embedded Network for Blind Face Restoration in the Wild (CVPR2021) -@author: yangxy (yangtao9009@gmail.com) -''' -import os -import cv2 -import glob -import time -import argparse -import numpy as np -from PIL import Image -import __init_paths -from face_detect.retinaface_detection import RetinaFaceDetection -from face_parse.face_parsing import FaceParse -from face_model.face_gan import FaceGAN -from sr_model.real_esrnet import RealESRNet -from align_faces import warp_and_crop_face, get_reference_facial_points -
-class FaceEnhancement(object): - def __init__(self, base_dir='./', size=512, model=None, use_sr=True, sr_model=None, channel_multiplier=2, narrow=1, key=None, device='cuda'): - self.facedetector = RetinaFaceDetection(base_dir, device) - self.facegan = FaceGAN(base_dir, size, model, channel_multiplier, narrow, key, device=device) - self.srmodel = RealESRNet(base_dir, sr_model, device=device) - self.faceparser = FaceParse(base_dir, device=device) - self.use_sr = use_sr - self.size = size - self.threshold = 0.9 - - # the mask for pasting restored faces back - self.mask = np.zeros((512, 512), np.float32) - cv2.rectangle(self.mask, (26, 26), (486, 486), (1, 1, 1), -1, cv2.LINE_AA) - self.mask = cv2.GaussianBlur(self.mask, (101, 101), 11) - self.mask = cv2.GaussianBlur(self.mask, (101, 101), 11) - - self.kernel = np.array(( - [0.0625, 0.125, 0.0625], - [0.125, 0.25, 0.125], - [0.0625, 0.125, 0.0625]), dtype="float32") - - # get the reference 5 landmarks position in the crop settings - default_square = True - inner_padding_factor = 0.25 - outer_padding = (0, 0) - self.reference_5pts = get_reference_facial_points( - (self.size, self.size), inner_padding_factor, outer_padding, default_square) - - def mask_postprocess(self, mask, thres=20): - mask[:thres, :] = 0; mask[-thres:, :] = 0 - mask[:, :thres] = 0; mask[:, -thres:] = 0 - mask = cv2.GaussianBlur(mask, (101, 101), 11) - mask = cv2.GaussianBlur(mask, (101, 101), 11) - return mask.astype(np.float32) - - def process(self, img): - if self.use_sr: - img_sr = self.srmodel.process(img) - if img_sr is not None: - img = cv2.resize(img, img_sr.shape[:2][::-1]) - - facebs, landms = self.facedetector.detect(img) - - orig_faces, enhanced_faces = [], [] - height, width = img.shape[:2] - full_mask = np.zeros((height, width), dtype=np.float32) - full_img = np.zeros(img.shape, dtype=np.uint8) - - for i, (faceb, facial5points) in enumerate(zip(facebs, landms)): - if faceb[4]<self.threshold: continue - fh, fw = (faceb[3]-faceb[1]), (faceb[2]-faceb[0]) - - facial5points = np.reshape(facial5points, (2, 5)) - - of, tfm_inv = warp_and_crop_face(img, facial5points, reference_pts=self.reference_5pts, crop_size=(self.size, self.size)) - - # enhance the face - ef = self.facegan.process(of) - - orig_faces.append(of) - enhanced_faces.append(ef) - - tmp_mask = self.mask - tmp_mask = cv2.resize(tmp_mask, ef.shape[:2]) - tmp_mask = cv2.warpAffine(tmp_mask, tfm_inv, (width, height), flags=3) - - if min(fh, fw)<100: # gaussian filter for small faces - ef = cv2.filter2D(ef, -1, self.kernel) - - tmp_img = cv2.warpAffine(ef, tfm_inv, (width, height), flags=3) - - mask = tmp_mask - full_mask - full_mask[np.where(mask>0)] = tmp_mask[np.where(mask>0)] - full_img[np.where(mask>0)] = tmp_img[np.where(mask>0)] - - full_mask = full_mask[:, :, np.newaxis] - if self.use_sr and img_sr is not None: - img = 
cv2.convertScaleAbs(img_sr*(1-full_mask) + full_img*full_mask) - else: - img = cv2.convertScaleAbs(img*(1-full_mask) + full_img*full_mask) - - return img, orig_faces, enhanced_faces - - -if __name__=='__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--model', type=str, default='GPEN-BFR-512', help='GPEN model') - parser.add_argument('--key', type=str, default=None, help='key of GPEN model') - parser.add_argument('--size', type=int, default=512, help='resolution of GPEN') - parser.add_argument('--channel_multiplier', type=int, default=2, help='channel multiplier of GPEN') - parser.add_argument('--narrow', type=float, default=1, help='channel narrow scale') - parser.add_argument('--use_sr', action='store_true', help='use sr or not') - parser.add_argument('--use_cuda', action='store_true', help='use cuda or not') - parser.add_argument('--sr_model', type=str, default='rrdb_realesrnet_psnr', help='SR model') - parser.add_argument('--sr_scale', type=int, default=2, help='SR scale') - parser.add_argument('--indir', type=str, default='examples/imgs', help='input folder') - parser.add_argument('--outdir', type=str, default='results/outs-BFR', help='output folder') - args = parser.parse_args() - - #model = {'name':'GPEN-BFR-512', 'size':512, 'channel_multiplier':2, 'narrow':1} - #model = {'name':'GPEN-BFR-256', 'size':256, 'channel_multiplier':1, 'narrow':0.5} - - os.makedirs(args.outdir, exist_ok=True) - - faceenhancer = FaceEnhancement(size=args.size, model=args.model, use_sr=args.use_sr, sr_model=args.sr_model, channel_multiplier=args.channel_multiplier, narrow=args.narrow, key=args.key, device='cuda' if args.use_cuda else 'cpu') - - files = sorted(glob.glob(os.path.join(args.indir, '*.*g'))) - for n, file in enumerate(files[:]): - filename = os.path.basename(file) - - im = cv2.imread(file, cv2.IMREAD_COLOR) # BGR - if not isinstance(im, np.ndarray): print(filename, 'error'); continue - #im = cv2.resize(im, (0,0), fx=2, fy=2) # optional - - img, orig_faces, enhanced_faces = faceenhancer.process(im) - - im = cv2.resize(im, img.shape[:2][::-1]) - cv2.imwrite(os.path.join(args.outdir, '.'.join(filename.split('.')[:-1])+'_COMP.jpg'), np.hstack((im, img))) - cv2.imwrite(os.path.join(args.outdir, '.'.join(filename.split('.')[:-1])+'_GPEN.jpg'), img) - - for m, (ef, of) in enumerate(zip(enhanced_faces, orig_faces)): - of = cv2.resize(of, ef.shape[:2]) - cv2.imwrite(os.path.join(args.outdir, '.'.join(filename.split('.')[:-1])+'_face%02d'%m+'.jpg'), np.hstack((of, ef))) - - if n%10==0: print(n, filename) - diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/utils/make_subset_data.sh b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/utils/make_subset_data.sh deleted file mode 100644 index 2487aef51431e1ee552b1f6017a321d73dddf8ae..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/utils/make_subset_data.sh +++ /dev/null @@ -1,52 +0,0 @@ -#!/bin/bash - -# Make subset files located in data directory. - -# Copyright 2020 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -# shellcheck disable=SC1091 -. ./path.sh || exit 1; - - -if [ $# -ne 3 ]; then - echo "Usage: $0 <src_dir> <num_split> <dst_dir>" - echo "e.g.: $0 data/train_nodev 16 data/train_nodev/split16" - exit 1 -fi - -set -eu - -src_dir=$1 -num_split=$2 -dst_dir=$3 - -src_scp=${src_dir}/wav.scp -if [ -e "${src_dir}/segments" ]; then - has_segments=true - src_segments=${src_dir}/segments -else - has_segments=false -fi - -if ! 
${has_segments}; then - split_scps="" - for i in $(seq 1 "${num_split}"); do - split_scps+=" ${dst_dir}/wav.${i}.scp" - done - # shellcheck disable=SC2086 - utils/split_scp.pl "${src_scp}" ${split_scps} -else - split_scps="" - for i in $(seq 1 "${num_split}"); do - split_scps+=" ${dst_dir}/segments.${i}" - done - # shellcheck disable=SC2086 - utils/split_scp.pl "${src_segments}" ${split_scps} - for i in $(seq 1 "${num_split}"); do - awk '{print $2}' < "${dst_dir}/segments.${i}" | sort | uniq | while read -r wav_id; do - grep "^${wav_id} " < "${src_scp}" >> "${dst_dir}/wav.${i}.scp" - done - done -fi -echo "Successfully make subsets." diff --git a/spaces/alamin655/websurfx/public/static/error_box.js b/spaces/alamin655/websurfx/public/static/error_box.js deleted file mode 100644 index 1e2e8740bc4bb901c4fe589e7936c13d2a1c4323..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/public/static/error_box.js +++ /dev/null @@ -1,7 +0,0 @@ -/** - * This function provides the ability for the button to toggle the dropdown error-box - * in the search page. - */ -function toggleErrorBox() { - document.querySelector('.dropdown_error_box').classList.toggle('show') -} diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/idna/intranges.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/idna/intranges.py deleted file mode 100644 index 6a43b0475347cb50d0d65ada1000a82eeca9e882..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/idna/intranges.py +++ /dev/null @@ -1,54 +0,0 @@ -""" -Given a list of integers, made up of (hopefully) a small number of long runs -of consecutive integers, compute a representation of the form -((start1, end1), (start2, end2) ...). Then answer the question "was x present -in the original list?" in time O(log(# runs)). -""" - -import bisect -from typing import List, Tuple - -def intranges_from_list(list_: List[int]) -> Tuple[int, ...]: - """Represent a list of integers as a sequence of ranges: - ((start_0, end_0), (start_1, end_1), ...), such that the original - integers are exactly those x such that start_i <= x < end_i for some i. - - Ranges are encoded as single integers (start << 32 | end), not as tuples. 
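- For example, the sorted input [1, 2, 3, 7] collapses to the runs [1, 4) and [7, 8), which are stored as the two integers (1 << 32) | 4 and (7 << 32) | 8.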
- """ - - sorted_list = sorted(list_) - ranges = [] - last_write = -1 - for i in range(len(sorted_list)): - if i+1 < len(sorted_list): - if sorted_list[i] == sorted_list[i+1]-1: - continue - current_range = sorted_list[last_write+1:i+1] - ranges.append(_encode_range(current_range[0], current_range[-1] + 1)) - last_write = i - - return tuple(ranges) - -def _encode_range(start: int, end: int) -> int: - return (start << 32) | end - -def _decode_range(r: int) -> Tuple[int, int]: - return (r >> 32), (r & ((1 << 32) - 1)) - - -def intranges_contain(int_: int, ranges: Tuple[int, ...]) -> bool: - """Determine if `int_` falls into one of the ranges in `ranges`.""" - tuple_ = _encode_range(int_, 0) - pos = bisect.bisect_left(ranges, tuple_) - # we could be immediately ahead of a tuple (start, end) - # with start < int_ <= end - if pos > 0: - left, right = _decode_range(ranges[pos-1]) - if left <= int_ < right: - return True - # or we could be immediately behind a tuple (int_, end) - if pos < len(ranges): - left, _ = _decode_range(ranges[pos]) - if left == int_: - return True - return False diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/live_render.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/live_render.py deleted file mode 100644 index b90fbf7f35097694f727e201b0b378942d70a443..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/live_render.py +++ /dev/null @@ -1,113 +0,0 @@ -import sys -from typing import Optional, Tuple - -if sys.version_info >= (3, 8): - from typing import Literal -else: - from pip._vendor.typing_extensions import Literal # pragma: no cover - - -from ._loop import loop_last -from .console import Console, ConsoleOptions, RenderableType, RenderResult -from .control import Control -from .segment import ControlType, Segment -from .style import StyleType -from .text import Text - -VerticalOverflowMethod = Literal["crop", "ellipsis", "visible"] - - -class LiveRender: - """Creates a renderable that may be updated. - - Args: - renderable (RenderableType): Any renderable object. - style (StyleType, optional): An optional style to apply to the renderable. Defaults to "". - """ - - def __init__( - self, - renderable: RenderableType, - style: StyleType = "", - vertical_overflow: VerticalOverflowMethod = "ellipsis", - ) -> None: - self.renderable = renderable - self.style = style - self.vertical_overflow = vertical_overflow - self._shape: Optional[Tuple[int, int]] = None - - def set_renderable(self, renderable: RenderableType) -> None: - """Set a new renderable. - - Args: - renderable (RenderableType): Any renderable object, including str. - """ - self.renderable = renderable - - def position_cursor(self) -> Control: - """Get control codes to move cursor to beginning of live render. - - Returns: - Control: A control instance that may be printed. - """ - if self._shape is not None: - _, height = self._shape - return Control( - ControlType.CARRIAGE_RETURN, - (ControlType.ERASE_IN_LINE, 2), - *( - ( - (ControlType.CURSOR_UP, 1), - (ControlType.ERASE_IN_LINE, 2), - ) - * (height - 1) - ) - ) - return Control() - - def restore_cursor(self) -> Control: - """Get control codes to clear the render and restore the cursor to its previous position. - - Returns: - Control: A Control instance that may be printed. 
- """ - if self._shape is not None: - _, height = self._shape - return Control( - ControlType.CARRIAGE_RETURN, - *((ControlType.CURSOR_UP, 1), (ControlType.ERASE_IN_LINE, 2)) * height - ) - return Control() - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - - renderable = self.renderable - style = console.get_style(self.style) - lines = console.render_lines(renderable, options, style=style, pad=False) - shape = Segment.get_shape(lines) - - _, height = shape - if height > options.size.height: - if self.vertical_overflow == "crop": - lines = lines[: options.size.height] - shape = Segment.get_shape(lines) - elif self.vertical_overflow == "ellipsis": - lines = lines[: (options.size.height - 1)] - overflow_text = Text( - "...", - overflow="crop", - justify="center", - end="", - style="live.ellipsis", - ) - lines.append(list(console.render(overflow_text))) - shape = Segment.get_shape(lines) - self._shape = shape - - new_line = Segment.line() - for last, line in loop_last(lines): - yield from line - if not last: - yield new_line diff --git a/spaces/alfabill/stable-diffusion-inpainting-2/clipseg/score.py b/spaces/alfabill/stable-diffusion-inpainting-2/clipseg/score.py deleted file mode 100644 index 8db8915b109953931fa2a330a7731db4a51b44f8..0000000000000000000000000000000000000000 --- a/spaces/alfabill/stable-diffusion-inpainting-2/clipseg/score.py +++ /dev/null @@ -1,453 +0,0 @@ -from torch.functional import Tensor - -import torch -import inspect -import json -import yaml -import time -import sys - -from general_utils import log - -import numpy as np -from os.path import expanduser, join, isfile, realpath - -from torch.utils.data import DataLoader - -from metrics import FixedIntervalMetrics - -from general_utils import load_model, log, score_config_from_cli_args, AttributeDict, get_attribute, filter_args - - -DATASET_CACHE = dict() - -def load_model(checkpoint_id, weights_file=None, strict=True, model_args='from_config', with_config=False, ignore_weights=False): - - config = json.load(open(join('logs', checkpoint_id, 'config.json'))) - - if model_args != 'from_config' and type(model_args) != dict: - raise ValueError('model_args must either be "from_config" or a dictionary of values') - - model_cls = get_attribute(config['model']) - - # load model - if model_args == 'from_config': - _, model_args, _ = filter_args(config, inspect.signature(model_cls).parameters) - - model = model_cls(**model_args) - - if weights_file is None: - weights_file = realpath(join('logs', checkpoint_id, 'weights.pth')) - else: - weights_file = realpath(join('logs', checkpoint_id, weights_file)) - - if isfile(weights_file) and not ignore_weights: - weights = torch.load(weights_file) - for _, w in weights.items(): - assert not torch.any(torch.isnan(w)), 'weights contain NaNs' - model.load_state_dict(weights, strict=strict) - else: - if not ignore_weights: - raise FileNotFoundError(f'model checkpoint {weights_file} was not found') - - if with_config: - return model, config - - return model - - -def compute_shift2(model, datasets, seed=123, repetitions=1): - """ computes shift """ - - model.eval() - model.cuda() - - import random - random.seed(seed) - - preds, gts = [], [] - for i_dataset, dataset in enumerate(datasets): - - loader = DataLoader(dataset, batch_size=1, num_workers=0, shuffle=False, drop_last=False) - - max_iterations = int(repetitions * len(dataset.dataset.data_list)) - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): 
- - data_x = [v.cuda(non_blocking=True) if v is not None else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if v is not None else v for v in data_y] - - pred, = model(data_x[0], data_x[1], data_x[2]) - preds += [pred.detach()] - gts += [data_y] - - i += 1 - if max_iterations and i >= max_iterations: - break - - from metrics import FixedIntervalMetrics - n_values = 51 - thresholds = np.linspace(0, 1, n_values)[1:-1] - metric = FixedIntervalMetrics(resize_pred=True, sigmoid=True, n_values=n_values) - - for p, y in zip(preds, gts): - metric.add(p.unsqueeze(1), y) - - best_idx = np.argmax(metric.value()['fgiou_scores']) - best_thresh = thresholds[best_idx] - - return best_thresh - - -def get_cached_pascal_pfe(split, config): - from datasets.pfe_dataset import PFEPascalWrapper - try: - dataset = DATASET_CACHE[(split, config.image_size, config.label_support, config.mask)] - except KeyError: - dataset = PFEPascalWrapper(mode='val', split=split, mask=config.mask, image_size=config.image_size, label_support=config.label_support) - DATASET_CACHE[(split, config.image_size, config.label_support, config.mask)] = dataset - return dataset - - - - -def main(): - config, train_checkpoint_id = score_config_from_cli_args() - - metrics = score(config, train_checkpoint_id, None) - - for dataset in metrics.keys(): - for k in metrics[dataset]: - if type(metrics[dataset][k]) in {float, int}: - print(dataset, f'{k:<16} {metrics[dataset][k]:.3f}') - - -def score(config, train_checkpoint_id, train_config): - - config = AttributeDict(config) - - print(config) - - # use training dataset and loss - train_config = AttributeDict(json.load(open(f'logs/{train_checkpoint_id}/config.json'))) - - cp_str = f'_{config.iteration_cp}' if config.iteration_cp is not None else '' - - - model_cls = get_attribute(train_config['model']) - - _, model_args, _ = filter_args(train_config, inspect.signature(model_cls).parameters) - - model_args = {**model_args, **{k: config[k] for k in ['process_cond', 'fix_shift'] if k in config}} - - strict_models = {'ConditionBase4', 'PFENetWrapper'} - model = load_model(train_checkpoint_id, strict=model_cls.__name__ in strict_models, model_args=model_args, - weights_file=f'weights{cp_str}.pth', ) - - - model.eval() - model.cuda() - - metric_args = dict() - - if 'threshold' in config: - if config.metric.split('.')[-1] == 'SkLearnMetrics': - metric_args['threshold'] = config.threshold - - if 'resize_to' in config: - metric_args['resize_to'] = config.resize_to - - if 'sigmoid' in config: - metric_args['sigmoid'] = config.sigmoid - - if 'custom_threshold' in config: - metric_args['custom_threshold'] = config.custom_threshold - - if config.test_dataset == 'pascal': - - loss_fn = get_attribute(train_config.loss) - # assume that if no split is specified in train_config, test on all splits, - - if 'splits' in config: - splits = config.splits - else: - if 'split' in train_config and type(train_config.split) == int: - # unless train_config has a split set, in that case assume train mode in training - splits = [train_config.split] - assert train_config.mode == 'train' - else: - splits = [0,1,2,3] - - log.info('Test on these splits', splits) - - scores = dict() - for split in splits: - - shift = config.shift if 'shift' in config else 0 - - # automatic shift - if shift == 'auto': - shift_compute_t = time.time() - shift = compute_shift2(model, [get_cached_pascal_pfe(s, config) for s in range(4) if s != split], repetitions=config.compute_shift_fac) - log.info(f'Best threshold is {shift}, computed on splits: 
{[s for s in range(4) if s != split]}, took {time.time() - shift_compute_t:.1f}s') - - dataset = get_cached_pascal_pfe(split, config) - - eval_start_t = time.time() - - loader = DataLoader(dataset, batch_size=1, num_workers=0, shuffle=False, drop_last=False) - - assert config.batch_size is None or config.batch_size == 1, 'When PFE Dataset is used, batch size must be 1' - - metric = FixedIntervalMetrics(resize_pred=True, sigmoid=True, custom_threshold=shift, **metric_args) - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): - - data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y] - - if config.mask == 'separate': # for old CondBase model - pred, = model(data_x[0], data_x[1], data_x[2]) - else: - # assert config.mask in {'text', 'highlight'} - pred, _, _, _ = model(data_x[0], data_x[1], return_features=True) - - # loss = loss_fn(pred, data_y[0]) - metric.add(pred.unsqueeze(1) + shift, data_y) - - # losses += [float(loss)] - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - #scores[split] = {m: s for m, s in zip(metric.names(), metric.value())} - - log.info(f'Dataset length: {len(dataset)}, took {time.time() - eval_start_t:.1f}s to evaluate.') - - print(metric.value()['mean_iou_scores']) - - scores[split] = metric.scores() - - log.info(f'Completed split {split}') - - key_prefix = config['name'] if 'name' in config else 'pas' - - all_keys = set.intersection(*[set(v.keys()) for v in scores.values()]) - - valid_keys = [k for k in all_keys if all(v[k] is not None and isinstance(v[k], (int, float, np.float)) for v in scores.values())] - - return {key_prefix: {k: np.mean([s[k] for s in scores.values()]) for k in valid_keys}} - - - if config.test_dataset == 'coco': - from datasets.coco_wrapper import COCOWrapper - - coco_dataset = COCOWrapper('test', fold=train_config.fold, image_size=train_config.image_size, mask=config.mask, - with_class_label=True) - - log.info('Dataset length', len(coco_dataset)) - loader = DataLoader(coco_dataset, batch_size=config.batch_size, num_workers=2, shuffle=False, drop_last=False) - - metric = get_attribute(config.metric)(resize_pred=True, **metric_args) - - shift = config.shift if 'shift' in config else 0 - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): - data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y] - - if config.mask == 'separate': # for old CondBase model - pred, = model(data_x[0], data_x[1], data_x[2]) - else: - # assert config.mask in {'text', 'highlight'} - pred, _, _, _ = model(data_x[0], data_x[1], return_features=True) - - metric.add([pred + shift], data_y) - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - key_prefix = config['name'] if 'name' in config else 'coco' - return {key_prefix: metric.scores()} - #return {key_prefix: {k: v for k, v in zip(metric.names(), metric.value())}} - - - if config.test_dataset == 'phrasecut': - from datasets.phrasecut import PhraseCut - - only_visual = config.only_visual is not None and config.only_visual - with_visual = config.with_visual is not None and config.with_visual - - dataset = PhraseCut('test', - image_size=train_config.image_size, - mask=config.mask, - with_visual=with_visual, 
only_visual=only_visual, aug_crop=False, - aug_color=False) - - loader = DataLoader(dataset, batch_size=config.batch_size, num_workers=2, shuffle=False, drop_last=False) - metric = get_attribute(config.metric)(resize_pred=True, **metric_args) - - shift = config.shift if 'shift' in config else 0 - - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): - data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y] - - pred, _, _, _ = model(data_x[0], data_x[1], return_features=True) - metric.add([pred + shift], data_y) - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - key_prefix = config['name'] if 'name' in config else 'phrasecut' - return {key_prefix: metric.scores()} - #return {key_prefix: {k: v for k, v in zip(metric.names(), metric.value())}} - - if config.test_dataset == 'pascal_zs': - from third_party.JoEm.model.metric import Evaluator - from third_party.JoEm.data_loader import get_seen_idx, get_unseen_idx, VOC - from datasets.pascal_zeroshot import PascalZeroShot, PASCAL_VOC_CLASSES_ZS - - from models.clipseg import CLIPSegMultiLabel - - n_unseen = train_config.remove_classes[1] - - pz = PascalZeroShot('val', n_unseen, image_size=352) - m = CLIPSegMultiLabel(model=train_config.name).cuda() - m.eval(); - - print(len(pz), n_unseen) - print('training removed', [c for class_set in PASCAL_VOC_CLASSES_ZS[:n_unseen // 2] for c in class_set]) - - print('unseen', [VOC[i] for i in get_unseen_idx(n_unseen)]) - print('seen', [VOC[i] for i in get_seen_idx(n_unseen)]) - - loader = DataLoader(pz, batch_size=8) - evaluator = Evaluator(21, get_unseen_idx(n_unseen), get_seen_idx(n_unseen)) - - for i, (data_x, data_y) in enumerate(loader): - pred = m(data_x[0].cuda()) - evaluator.add_batch(data_y[0].numpy(), pred.argmax(1).cpu().detach().numpy()) - - if config.max_iter is not None and i > config.max_iter: - break - - scores = evaluator.Mean_Intersection_over_Union() - key_prefix = config['name'] if 'name' in config else 'pas_zs' - - return {key_prefix: {k: scores[k] for k in ['seen', 'unseen', 'harmonic', 'overall']}} - - elif config.test_dataset in {'same_as_training', 'affordance'}: - loss_fn = get_attribute(train_config.loss) - - metric_cls = get_attribute(config.metric) - metric = metric_cls(**metric_args) - - if config.test_dataset == 'same_as_training': - dataset_cls = get_attribute(train_config.dataset) - elif config.test_dataset == 'affordance': - dataset_cls = get_attribute('datasets.lvis_oneshot3.LVIS_Affordance') - dataset_name = 'aff' - else: - dataset_cls = get_attribute('datasets.lvis_oneshot3.LVIS_OneShot') - dataset_name = 'lvis' - - _, dataset_args, _ = filter_args(config, inspect.signature(dataset_cls).parameters) - - dataset_args['image_size'] = train_config.image_size # explicitly use training image size for evaluation - - if model.__class__.__name__ == 'PFENetWrapper': - dataset_args['image_size'] = config.image_size - - log.info('init dataset', str(dataset_cls)) - dataset = dataset_cls(**dataset_args) - - log.info(f'Score on {model.__class__.__name__} on {dataset_cls.__name__}') - - data_loader = torch.utils.data.DataLoader(dataset, batch_size=config.batch_size, shuffle=config.shuffle) - - # explicitly set prompts - if config.prompt == 'plain': - model.prompt_list = ['{}'] - elif config.prompt == 'fixed': - model.prompt_list = ['a photo of a {}.'] - elif config.prompt == 'shuffle': - 
model.prompt_list = ['a photo of a {}.', 'a photograph of a {}.', 'an image of a {}.', '{}.'] - elif config.prompt == 'shuffle_clip': - from models.clip_prompts import imagenet_templates - model.prompt_list = imagenet_templates - - config.assume_no_unused_keys(exceptions=['max_iterations']) - - t_start = time.time() - - with torch.no_grad(): # TODO: switch to inference_mode (torch 1.9) - i, losses = 0, [] - for data_x, data_y in data_loader: - - data_x = [x.cuda() if isinstance(x, torch.Tensor) else x for x in data_x] - data_y = [x.cuda() if isinstance(x, torch.Tensor) else x for x in data_y] - - if model.__class__.__name__ in {'ConditionBase4', 'PFENetWrapper'}: - pred, = model(data_x[0], data_x[1], data_x[2]) - visual_q = None - else: - pred, visual_q, _, _ = model(data_x[0], data_x[1], return_features=True) - - loss = loss_fn(pred, data_y[0]) - - metric.add([pred], data_y) - - losses += [float(loss)] - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - # scores = {m: s for m, s in zip(metric.names(), metric.value())} - scores = metric.scores() - - keys = set(scores.keys()) - if dataset.negative_prob > 0 and 'mIoU' in keys: - keys.remove('mIoU') - - name_mask = dataset.mask.replace('text_label', 'txt')[:3] - name_neg = '' if dataset.negative_prob == 0 else '_' + str(dataset.negative_prob) - - score_name = config.name if 'name' in config else f'{dataset_name}_{name_mask}{name_neg}' - - scores = {score_name: {k: v for k,v in scores.items() if k in keys}} - scores[score_name].update({'test_loss': np.mean(losses)}) - - log.info(f'Evaluation took {time.time() - t_start:.1f}s') - - return scores - else: - raise ValueError('invalid test dataset') - - - - - - - - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/aliabd/SummerTime/dataset/non_huggingface_datasets_builders/qmsum.py b/spaces/aliabd/SummerTime/dataset/non_huggingface_datasets_builders/qmsum.py deleted file mode 100644 index 7d030c69495fcf1ee1b1b8dca1a56b95c39ca299..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/dataset/non_huggingface_datasets_builders/qmsum.py +++ /dev/null @@ -1,119 +0,0 @@ -import os -import json -import datasets - - -"""QMsum dataset.""" - - -_CITATION = """ -@inproceedings{zhong2021qmsum, - title={{QMS}um: {A} {N}ew {B}enchmark for {Q}uery-based {M}ulti-domain {M}eeting {S}ummarization}, - author={Zhong, Ming and Yin, Da and Yu, Tao and Zaidi, Ahmad and Mutuma, Mutethia and Jha, Rahul and Hassan Awadallah, Ahmed and Celikyilmaz, Asli and Liu, Yang and Qiu, Xipeng and Radev, Dragomir}, - booktitle={North American Association for Computational Linguistics (NAACL)}, - year={2021} -} -""" - -_DESCRIPTION = """ -QMSum is a new human-annotated benchmark for query-based multi-domain meeting summarization task, \ -which consists of 1,808 query-summary pairs over 232 meetings in multiple domains. 
-""" - -_HOMEPAGE = "https://github.com/Yale-LILY/QMSum" - -_BASE_URL = "https://raw.githubusercontent.com/Yale-LILY/QMSum/main/data/ALL/jsonl" -_URLs = { - "train": _BASE_URL + "/train.jsonl", - "val": _BASE_URL + "/val.jsonl", - "test": _BASE_URL + "/test.jsonl", -} - - -class SummertimeQmsum(datasets.GeneratorBasedBuilder): - """QMsum dataset.""" - - VERSION = datasets.Version("1.0.0") - - BUILDER_CONFIGS = [ - datasets.BuilderConfig(), - ] - - def _info(self): - features = datasets.Features( - { - "entry_number": datasets.Value("string"), - "meeting_transcripts": [ - { - "speaker": datasets.Value("string"), - "content": datasets.Value("string"), - } - ], - "general_query_list": [ - { - "query": datasets.Value("string"), - "answer": datasets.Value("string"), - } - ], - "specific_query_list": [ - { - "query": datasets.Value("string"), - "answer": datasets.Value("string"), - "relevant_text_span": [[datasets.Value("string")]], - } - ], - } - ) - return datasets.DatasetInfo( - description=_DESCRIPTION, - features=features, - supervised_keys=None, - homepage=_HOMEPAGE, - license=None, - citation=_CITATION, - ) - - def _split_generators(self, dl_manager): - """Returns SplitGenerators.""" - my_urls = _URLs - downloaded_files = dl_manager.download_and_extract(my_urls) - - trainpath = downloaded_files["train"] - valpath = downloaded_files["val"] - testpath = downloaded_files["test"] - - return [ - datasets.SplitGenerator( - name=datasets.Split.TRAIN, - # These kwargs will be passed to _generate_examples - gen_kwargs={"filepath": trainpath, "split": "train"}, - ), - datasets.SplitGenerator( - name=datasets.Split.VALIDATION, - # These kwargs will be passed to _generate_examples - gen_kwargs={"filepath": valpath, "split": "val"}, - ), - datasets.SplitGenerator( - name=datasets.Split.TEST, - # These kwargs will be passed to _generate_examples - gen_kwargs={"filepath": testpath, "split": "test"}, - ), - ] - - def _generate_examples(self, filepath, split): - """Yields examples.""" - - extraction_path = os.path.join(filepath) - - with open(extraction_path) as f: - for i, line in enumerate(f): - - instance = json.loads(line) - - entry = {} - entry["entry_number"] = split + "_" + str(i) - entry["meeting_transcripts"] = instance["meeting_transcripts"] - entry["general_query_list"] = instance["general_query_list"] - entry["specific_query_list"] = instance["specific_query_list"] - - yield entry["entry_number"], entry diff --git a/spaces/aliceoq/vozes-da-loirinha/lib/infer_pack/commons.py b/spaces/aliceoq/vozes-da-loirinha/lib/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/aliceoq/vozes-da-loirinha/lib/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel 
distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = 
path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/almostagi/QTL/README.md b/spaces/almostagi/QTL/README.md deleted file mode 100644 index 24cd9935e47d67ba1b1b857eb41c24da6fc5a3a8..0000000000000000000000000000000000000000 --- a/spaces/almostagi/QTL/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: QTL -emoji: 🤗 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 2.8.14 -app_file: app.py -pinned: false -license: mit ---- - -🤗 This proof-of-concept quantum machine learning model takes a face image as input and detects whether the face is wearing a mask. diff --git a/spaces/alphunt/diffdock-alphunt-demo/datasets/conformer_matching.py b/spaces/alphunt/diffdock-alphunt-demo/datasets/conformer_matching.py deleted file mode 100644 index fc8894394c96a5fb2ba6ed0ff17a9bc56d28b3fc..0000000000000000000000000000000000000000 --- a/spaces/alphunt/diffdock-alphunt-demo/datasets/conformer_matching.py +++ /dev/null @@ -1,196 +0,0 @@ -import copy, time -import numpy as np -from collections import defaultdict -from rdkit import Chem, RDLogger -from rdkit.Chem import AllChem, rdMolTransforms -from rdkit import Geometry -import networkx as nx -from scipy.optimize import differential_evolution - -RDLogger.DisableLog('rdApp.*') - -""" - Conformer matching routines from Torsional Diffusion -""" - -def GetDihedral(conf, atom_idx): - return rdMolTransforms.GetDihedralRad(conf, atom_idx[0], atom_idx[1], atom_idx[2], atom_idx[3]) - - -def SetDihedral(conf, atom_idx, new_vale): - rdMolTransforms.SetDihedralRad(conf, atom_idx[0], atom_idx[1], atom_idx[2], atom_idx[3], new_vale) - - -def apply_changes(mol, values, rotable_bonds, conf_id): - opt_mol = copy.copy(mol) - [SetDihedral(opt_mol.GetConformer(conf_id), rotable_bonds[r], values[r]) for r in range(len(rotable_bonds))] - return opt_mol - - -def optimize_rotatable_bonds(mol, true_mol, rotable_bonds, probe_id=-1, ref_id=-1, seed=0, popsize=15, maxiter=500, - mutation=(0.5, 1), recombination=0.8): - opt = OptimizeConformer(mol, true_mol, rotable_bonds, seed=seed, probe_id=probe_id, ref_id=ref_id) - max_bound = [np.pi] * len(opt.rotable_bonds) - min_bound = [-np.pi] * len(opt.rotable_bonds) - bounds = (min_bound, max_bound) - bounds = list(zip(bounds[0], bounds[1])) - - # Optimize conformations - result = differential_evolution(opt.score_conformation, bounds, - maxiter=maxiter, popsize=popsize, - mutation=mutation, recombination=recombination, disp=False, seed=seed) - opt_mol = apply_changes(opt.mol, result['x'], opt.rotable_bonds, conf_id=probe_id) - - return opt_mol - - -class OptimizeConformer: - def __init__(self, mol, true_mol, rotable_bonds, probe_id=-1, ref_id=-1, seed=None): - super(OptimizeConformer, self).__init__() - if seed: - np.random.seed(seed) - self.rotable_bonds = rotable_bonds - self.mol = mol - self.true_mol = true_mol - self.probe_id = probe_id - self.ref_id = ref_id - - def score_conformation(self, values): - for i, r in 
enumerate(self.rotable_bonds): - SetDihedral(self.mol.GetConformer(self.probe_id), r, values[i]) - return RMSD(self.mol, self.true_mol, self.probe_id, self.ref_id) - - -def get_torsion_angles(mol): - torsions_list = [] - G = nx.Graph() - for i, atom in enumerate(mol.GetAtoms()): - G.add_node(i) - nodes = set(G.nodes()) - for bond in mol.GetBonds(): - start, end = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx() - G.add_edge(start, end) - for e in G.edges(): - G2 = copy.deepcopy(G) - G2.remove_edge(*e) - if nx.is_connected(G2): continue - l = list(sorted(nx.connected_components(G2), key=len)[0]) - if len(l) < 2: continue - n0 = list(G2.neighbors(e[0])) - n1 = list(G2.neighbors(e[1])) - torsions_list.append( - (n0[0], e[0], e[1], n1[0]) - ) - return torsions_list - - -# GeoMol -def get_torsions(mol_list): - print('USING GEOMOL GET TORSIONS FUNCTION') - atom_counter = 0 - torsionList = [] - for m in mol_list: - torsionSmarts = '[!$(*#*)&!D1]-&!@[!$(*#*)&!D1]' - torsionQuery = Chem.MolFromSmarts(torsionSmarts) - matches = m.GetSubstructMatches(torsionQuery) - for match in matches: - idx2 = match[0] - idx3 = match[1] - bond = m.GetBondBetweenAtoms(idx2, idx3) - jAtom = m.GetAtomWithIdx(idx2) - kAtom = m.GetAtomWithIdx(idx3) - for b1 in jAtom.GetBonds(): - if (b1.GetIdx() == bond.GetIdx()): - continue - idx1 = b1.GetOtherAtomIdx(idx2) - for b2 in kAtom.GetBonds(): - if ((b2.GetIdx() == bond.GetIdx()) - or (b2.GetIdx() == b1.GetIdx())): - continue - idx4 = b2.GetOtherAtomIdx(idx3) - # skip 3-membered rings - if (idx4 == idx1): - continue - if m.GetAtomWithIdx(idx4).IsInRing(): - torsionList.append( - (idx4 + atom_counter, idx3 + atom_counter, idx2 + atom_counter, idx1 + atom_counter)) - break - else: - torsionList.append( - (idx1 + atom_counter, idx2 + atom_counter, idx3 + atom_counter, idx4 + atom_counter)) - break - break - - atom_counter += m.GetNumAtoms() - return torsionList - - -def A_transpose_matrix(alpha): - return np.array([[np.cos(alpha), np.sin(alpha)], [-np.sin(alpha), np.cos(alpha)]], dtype=np.double) - - -def S_vec(alpha): - return np.array([[np.cos(alpha)], [np.sin(alpha)]], dtype=np.double) - - -def GetDihedralFromPointCloud(Z, atom_idx): - p = Z[list(atom_idx)] - b = p[:-1] - p[1:] - b[0] *= -1 - v = np.array([v - (v.dot(b[1]) / b[1].dot(b[1])) * b[1] for v in [b[0], b[2]]]) - # Normalize vectors - v /= np.sqrt(np.einsum('...i,...i', v, v)).reshape(-1, 1) - b1 = b[1] / np.linalg.norm(b[1]) - x = np.dot(v[0], v[1]) - m = np.cross(v[0], b1) - y = np.dot(m, v[1]) - return np.arctan2(y, x) - - -def get_dihedral_vonMises(mol, conf, atom_idx, Z): - Z = np.array(Z) - v = np.zeros((2, 1)) - iAtom = mol.GetAtomWithIdx(atom_idx[1]) - jAtom = mol.GetAtomWithIdx(atom_idx[2]) - k_0 = atom_idx[0] - i = atom_idx[1] - j = atom_idx[2] - l_0 = atom_idx[3] - for b1 in iAtom.GetBonds(): - k = b1.GetOtherAtomIdx(i) - if k == j: - continue - for b2 in jAtom.GetBonds(): - l = b2.GetOtherAtomIdx(j) - if l == i: - continue - assert k != l - s_star = S_vec(GetDihedralFromPointCloud(Z, (k, i, j, l))) - a_mat = A_transpose_matrix(GetDihedral(conf, (k, i, j, k_0)) + GetDihedral(conf, (l_0, i, j, l))) - v = v + np.matmul(a_mat, s_star) - v = v / np.linalg.norm(v) - v = v.reshape(-1) - return np.arctan2(v[1], v[0]) - - -def get_von_mises_rms(mol, mol_rdkit, rotable_bonds, conf_id): - new_dihedrals = np.zeros(len(rotable_bonds)) - for idx, r in enumerate(rotable_bonds): - new_dihedrals[idx] = get_dihedral_vonMises(mol_rdkit, - mol_rdkit.GetConformer(conf_id), r, - mol.GetConformer().GetPositions()) - mol_rdkit = 
apply_changes(mol_rdkit, new_dihedrals, rotable_bonds, conf_id) - return RMSD(mol_rdkit, mol, conf_id) - - -def mmff_func(mol): - mol_mmff = copy.deepcopy(mol) - AllChem.MMFFOptimizeMoleculeConfs(mol_mmff, mmffVariant='MMFF94s') - for i in range(mol.GetNumConformers()): - coords = mol_mmff.GetConformers()[i].GetPositions() - for j in range(coords.shape[0]): - mol.GetConformer(i).SetAtomPosition(j, - Geometry.Point3D(*coords[j])) - - -RMSD = AllChem.AlignMol diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/wasapi/mingw-include/devicetopology.h b/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/wasapi/mingw-include/devicetopology.h deleted file mode 100644 index 7a1f75c4eea202c69e562346770a76d7dbe2dd23..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/wasapi/mingw-include/devicetopology.h +++ /dev/null @@ -1,3275 +0,0 @@ - - -/* this ALWAYS GENERATED file contains the definitions for the interfaces */ - - - /* File created by MIDL compiler version 7.00.0499 */ -/* Compiler settings for devicetopology.idl: - Oicf, W1, Zp8, env=Win32 (32b run) - protocol : dce , ms_ext, c_ext, robust - error checks: allocation ref bounds_check enum stub_data - VC __declspec() decoration level: - __declspec(uuid()), __declspec(selectany), __declspec(novtable) - DECLSPEC_UUID(), MIDL_INTERFACE() -*/ -//@@MIDL_FILE_HEADING( ) - -#pragma warning( disable: 4049 ) /* more than 64k source lines */ - - -/* verify that the version is high enough to compile this file*/ -#ifndef __REQUIRED_RPCNDR_H_VERSION__ -#define __REQUIRED_RPCNDR_H_VERSION__ 500 -#endif - -/* verify that the version is high enough to compile this file*/ -#ifndef __REQUIRED_RPCSAL_H_VERSION__ -#define __REQUIRED_RPCSAL_H_VERSION__ 100 -#endif - -#include "rpc.h" -#include "rpcndr.h" - -#ifndef __RPCNDR_H_VERSION__ -#error this stub requires an updated version of <rpcndr.h> -#endif // __RPCNDR_H_VERSION__ - -#ifndef COM_NO_WINDOWS_H -#include "windows.h" -#include "ole2.h" -#endif /*COM_NO_WINDOWS_H*/ - -#ifndef __devicetopology_h__ -#define __devicetopology_h__ - -#if defined(_MSC_VER) && (_MSC_VER >= 1020) -#pragma once -#endif - -/* Forward Declarations */ - -#ifndef __IKsControl_FWD_DEFINED__ -#define __IKsControl_FWD_DEFINED__ -typedef interface IKsControl IKsControl; -#endif /* __IKsControl_FWD_DEFINED__ */ - - -#ifndef __IPerChannelDbLevel_FWD_DEFINED__ -#define __IPerChannelDbLevel_FWD_DEFINED__ -typedef interface IPerChannelDbLevel IPerChannelDbLevel; -#endif /* __IPerChannelDbLevel_FWD_DEFINED__ */ - - -#ifndef __IAudioVolumeLevel_FWD_DEFINED__ -#define __IAudioVolumeLevel_FWD_DEFINED__ -typedef interface IAudioVolumeLevel IAudioVolumeLevel; -#endif /* __IAudioVolumeLevel_FWD_DEFINED__ */ - - -#ifndef __IAudioChannelConfig_FWD_DEFINED__ -#define __IAudioChannelConfig_FWD_DEFINED__ -typedef interface IAudioChannelConfig IAudioChannelConfig; -#endif /* __IAudioChannelConfig_FWD_DEFINED__ */ - - -#ifndef __IAudioLoudness_FWD_DEFINED__ -#define __IAudioLoudness_FWD_DEFINED__ -typedef interface IAudioLoudness IAudioLoudness; -#endif /* __IAudioLoudness_FWD_DEFINED__ */ - - -#ifndef __IAudioInputSelector_FWD_DEFINED__ -#define __IAudioInputSelector_FWD_DEFINED__ -typedef interface IAudioInputSelector IAudioInputSelector; -#endif /* __IAudioInputSelector_FWD_DEFINED__ */ - - -#ifndef __IAudioOutputSelector_FWD_DEFINED__ -#define __IAudioOutputSelector_FWD_DEFINED__ -typedef interface IAudioOutputSelector IAudioOutputSelector; -#endif /* __IAudioOutputSelector_FWD_DEFINED__ */ - - 
-#ifndef __IAudioMute_FWD_DEFINED__ -#define __IAudioMute_FWD_DEFINED__ -typedef interface IAudioMute IAudioMute; -#endif /* __IAudioMute_FWD_DEFINED__ */ - - -#ifndef __IAudioBass_FWD_DEFINED__ -#define __IAudioBass_FWD_DEFINED__ -typedef interface IAudioBass IAudioBass; -#endif /* __IAudioBass_FWD_DEFINED__ */ - - -#ifndef __IAudioMidrange_FWD_DEFINED__ -#define __IAudioMidrange_FWD_DEFINED__ -typedef interface IAudioMidrange IAudioMidrange; -#endif /* __IAudioMidrange_FWD_DEFINED__ */ - - -#ifndef __IAudioTreble_FWD_DEFINED__ -#define __IAudioTreble_FWD_DEFINED__ -typedef interface IAudioTreble IAudioTreble; -#endif /* __IAudioTreble_FWD_DEFINED__ */ - - -#ifndef __IAudioAutoGainControl_FWD_DEFINED__ -#define __IAudioAutoGainControl_FWD_DEFINED__ -typedef interface IAudioAutoGainControl IAudioAutoGainControl; -#endif /* __IAudioAutoGainControl_FWD_DEFINED__ */ - - -#ifndef __IAudioPeakMeter_FWD_DEFINED__ -#define __IAudioPeakMeter_FWD_DEFINED__ -typedef interface IAudioPeakMeter IAudioPeakMeter; -#endif /* __IAudioPeakMeter_FWD_DEFINED__ */ - - -#ifndef __IDeviceSpecificProperty_FWD_DEFINED__ -#define __IDeviceSpecificProperty_FWD_DEFINED__ -typedef interface IDeviceSpecificProperty IDeviceSpecificProperty; -#endif /* __IDeviceSpecificProperty_FWD_DEFINED__ */ - - -#ifndef __IKsFormatSupport_FWD_DEFINED__ -#define __IKsFormatSupport_FWD_DEFINED__ -typedef interface IKsFormatSupport IKsFormatSupport; -#endif /* __IKsFormatSupport_FWD_DEFINED__ */ - - -#ifndef __IKsJackDescription_FWD_DEFINED__ -#define __IKsJackDescription_FWD_DEFINED__ -typedef interface IKsJackDescription IKsJackDescription; -#endif /* __IKsJackDescription_FWD_DEFINED__ */ - - -#ifndef __IPartsList_FWD_DEFINED__ -#define __IPartsList_FWD_DEFINED__ -typedef interface IPartsList IPartsList; -#endif /* __IPartsList_FWD_DEFINED__ */ - - -#ifndef __IPart_FWD_DEFINED__ -#define __IPart_FWD_DEFINED__ -typedef interface IPart IPart; -#endif /* __IPart_FWD_DEFINED__ */ - - -#ifndef __IConnector_FWD_DEFINED__ -#define __IConnector_FWD_DEFINED__ -typedef interface IConnector IConnector; -#endif /* __IConnector_FWD_DEFINED__ */ - - -#ifndef __ISubunit_FWD_DEFINED__ -#define __ISubunit_FWD_DEFINED__ -typedef interface ISubunit ISubunit; -#endif /* __ISubunit_FWD_DEFINED__ */ - - -#ifndef __IControlInterface_FWD_DEFINED__ -#define __IControlInterface_FWD_DEFINED__ -typedef interface IControlInterface IControlInterface; -#endif /* __IControlInterface_FWD_DEFINED__ */ - - -#ifndef __IControlChangeNotify_FWD_DEFINED__ -#define __IControlChangeNotify_FWD_DEFINED__ -typedef interface IControlChangeNotify IControlChangeNotify; -#endif /* __IControlChangeNotify_FWD_DEFINED__ */ - - -#ifndef __IDeviceTopology_FWD_DEFINED__ -#define __IDeviceTopology_FWD_DEFINED__ -typedef interface IDeviceTopology IDeviceTopology; -#endif /* __IDeviceTopology_FWD_DEFINED__ */ - - -#ifndef __DeviceTopology_FWD_DEFINED__ -#define __DeviceTopology_FWD_DEFINED__ - -#ifdef __cplusplus -typedef class DeviceTopology DeviceTopology; -#else -typedef struct DeviceTopology DeviceTopology; -#endif /* __cplusplus */ - -#endif /* __DeviceTopology_FWD_DEFINED__ */ - - -#ifndef __IPartsList_FWD_DEFINED__ -#define __IPartsList_FWD_DEFINED__ -typedef interface IPartsList IPartsList; -#endif /* __IPartsList_FWD_DEFINED__ */ - - -#ifndef __IPerChannelDbLevel_FWD_DEFINED__ -#define __IPerChannelDbLevel_FWD_DEFINED__ -typedef interface IPerChannelDbLevel IPerChannelDbLevel; -#endif /* __IPerChannelDbLevel_FWD_DEFINED__ */ - - -#ifndef __IAudioVolumeLevel_FWD_DEFINED__ -#define 
__IAudioVolumeLevel_FWD_DEFINED__ -typedef interface IAudioVolumeLevel IAudioVolumeLevel; -#endif /* __IAudioVolumeLevel_FWD_DEFINED__ */ - - -#ifndef __IAudioLoudness_FWD_DEFINED__ -#define __IAudioLoudness_FWD_DEFINED__ -typedef interface IAudioLoudness IAudioLoudness; -#endif /* __IAudioLoudness_FWD_DEFINED__ */ - - -#ifndef __IAudioInputSelector_FWD_DEFINED__ -#define __IAudioInputSelector_FWD_DEFINED__ -typedef interface IAudioInputSelector IAudioInputSelector; -#endif /* __IAudioInputSelector_FWD_DEFINED__ */ - - -#ifndef __IAudioMute_FWD_DEFINED__ -#define __IAudioMute_FWD_DEFINED__ -typedef interface IAudioMute IAudioMute; -#endif /* __IAudioMute_FWD_DEFINED__ */ - - -#ifndef __IAudioBass_FWD_DEFINED__ -#define __IAudioBass_FWD_DEFINED__ -typedef interface IAudioBass IAudioBass; -#endif /* __IAudioBass_FWD_DEFINED__ */ - - -#ifndef __IAudioMidrange_FWD_DEFINED__ -#define __IAudioMidrange_FWD_DEFINED__ -typedef interface IAudioMidrange IAudioMidrange; -#endif /* __IAudioMidrange_FWD_DEFINED__ */ - - -#ifndef __IAudioTreble_FWD_DEFINED__ -#define __IAudioTreble_FWD_DEFINED__ -typedef interface IAudioTreble IAudioTreble; -#endif /* __IAudioTreble_FWD_DEFINED__ */ - - -#ifndef __IAudioAutoGainControl_FWD_DEFINED__ -#define __IAudioAutoGainControl_FWD_DEFINED__ -typedef interface IAudioAutoGainControl IAudioAutoGainControl; -#endif /* __IAudioAutoGainControl_FWD_DEFINED__ */ - - -#ifndef __IAudioOutputSelector_FWD_DEFINED__ -#define __IAudioOutputSelector_FWD_DEFINED__ -typedef interface IAudioOutputSelector IAudioOutputSelector; -#endif /* __IAudioOutputSelector_FWD_DEFINED__ */ - - -#ifndef __IAudioPeakMeter_FWD_DEFINED__ -#define __IAudioPeakMeter_FWD_DEFINED__ -typedef interface IAudioPeakMeter IAudioPeakMeter; -#endif /* __IAudioPeakMeter_FWD_DEFINED__ */ - - -#ifndef __IDeviceSpecificProperty_FWD_DEFINED__ -#define __IDeviceSpecificProperty_FWD_DEFINED__ -typedef interface IDeviceSpecificProperty IDeviceSpecificProperty; -#endif /* __IDeviceSpecificProperty_FWD_DEFINED__ */ - - -#ifndef __IKsFormatSupport_FWD_DEFINED__ -#define __IKsFormatSupport_FWD_DEFINED__ -typedef interface IKsFormatSupport IKsFormatSupport; -#endif /* __IKsFormatSupport_FWD_DEFINED__ */ - - -/* header files for imported files */ -#include "oaidl.h" -#include "ocidl.h" -#include "propidl.h" - -#ifdef __cplusplus -extern "C"{ -#endif - - -/* interface __MIDL_itf_devicetopology_0000_0000 */ -/* [local] */ - -#define E_NOTFOUND HRESULT_FROM_WIN32(ERROR_NOT_FOUND) -// -// Flag for clients of IControlChangeNotify::OnNotify to allow those clients to identify hardware initiated notifications -// -#define DEVTOPO_HARDWARE_INITIATED_EVENTCONTEXT 'draH' -/* E2C2E9DE-09B1-4B04-84E5-07931225EE04 */ -DEFINE_GUID(EVENTCONTEXT_VOLUMESLIDER, 0xE2C2E9DE,0x09B1,0x4B04,0x84, 0xE5, 0x07, 0x93, 0x12, 0x25, 0xEE, 0x04); -#define _IKsControl_ -#include "ks.h" -#include "ksmedia.h" -#ifndef _KS_ -typedef /* [public] */ struct __MIDL___MIDL_itf_devicetopology_0000_0000_0001 - { - ULONG FormatSize; - ULONG Flags; - ULONG SampleSize; - ULONG Reserved; - GUID MajorFormat; - GUID SubFormat; - GUID Specifier; - } KSDATAFORMAT; - -typedef struct __MIDL___MIDL_itf_devicetopology_0000_0000_0001 *PKSDATAFORMAT; - -typedef /* [public][public][public][public][public][public][public][public][public][public] */ struct __MIDL___MIDL_itf_devicetopology_0000_0000_0002 - { - union - { - struct - { - GUID Set; - ULONG Id; - ULONG Flags; - } ; - LONGLONG Alignment; - } ; - } KSIDENTIFIER; - -typedef struct 
__MIDL___MIDL_itf_devicetopology_0000_0000_0002 *PKSIDENTIFIER; - -typedef /* [public][public][public][public] */ -enum __MIDL___MIDL_itf_devicetopology_0000_0000_0005 - { ePcxChanMap_FL_FR = 0, - ePcxChanMap_FC_LFE = ( ePcxChanMap_FL_FR + 1 ) , - ePcxChanMap_BL_BR = ( ePcxChanMap_FC_LFE + 1 ) , - ePcxChanMap_FLC_FRC = ( ePcxChanMap_BL_BR + 1 ) , - ePcxChanMap_SL_SR = ( ePcxChanMap_FLC_FRC + 1 ) , - ePcxChanMap_Unknown = ( ePcxChanMap_SL_SR + 1 ) - } EChannelMapping; - -typedef /* [public][public][public][public] */ -enum __MIDL___MIDL_itf_devicetopology_0000_0000_0006 - { eConnTypeUnknown = 0, - eConnTypeEighth = ( eConnTypeUnknown + 1 ) , - eConnTypeQuarter = ( eConnTypeEighth + 1 ) , - eConnTypeAtapiInternal = ( eConnTypeQuarter + 1 ) , - eConnTypeRCA = ( eConnTypeAtapiInternal + 1 ) , - eConnTypeOptical = ( eConnTypeRCA + 1 ) , - eConnTypeOtherDigital = ( eConnTypeOptical + 1 ) , - eConnTypeOtherAnalog = ( eConnTypeOtherDigital + 1 ) , - eConnTypeMultichannelAnalogDIN = ( eConnTypeOtherAnalog + 1 ) , - eConnTypeXlrProfessional = ( eConnTypeMultichannelAnalogDIN + 1 ) , - eConnTypeRJ11Modem = ( eConnTypeXlrProfessional + 1 ) , - eConnTypeCombination = ( eConnTypeRJ11Modem + 1 ) - } EPcxConnectionType; - -typedef /* [public][public][public][public] */ -enum __MIDL___MIDL_itf_devicetopology_0000_0000_0007 - { eGeoLocRear = 0x1, - eGeoLocFront = ( eGeoLocRear + 1 ) , - eGeoLocLeft = ( eGeoLocFront + 1 ) , - eGeoLocRight = ( eGeoLocLeft + 1 ) , - eGeoLocTop = ( eGeoLocRight + 1 ) , - eGeoLocBottom = ( eGeoLocTop + 1 ) , - eGeoLocRearOPanel = ( eGeoLocBottom + 1 ) , - eGeoLocRiser = ( eGeoLocRearOPanel + 1 ) , - eGeoLocInsideMobileLid = ( eGeoLocRiser + 1 ) , - eGeoLocDrivebay = ( eGeoLocInsideMobileLid + 1 ) , - eGeoLocHDMI = ( eGeoLocDrivebay + 1 ) , - eGeoLocOutsideMobileLid = ( eGeoLocHDMI + 1 ) , - eGeoLocATAPI = ( eGeoLocOutsideMobileLid + 1 ) , - eGeoLocReserved5 = ( eGeoLocATAPI + 1 ) , - eGeoLocReserved6 = ( eGeoLocReserved5 + 1 ) - } EPcxGeoLocation; - -typedef /* [public][public][public][public] */ -enum __MIDL___MIDL_itf_devicetopology_0000_0000_0008 - { eGenLocPrimaryBox = 0, - eGenLocInternal = ( eGenLocPrimaryBox + 1 ) , - eGenLocSeperate = ( eGenLocInternal + 1 ) , - eGenLocOther = ( eGenLocSeperate + 1 ) - } EPcxGenLocation; - -typedef /* [public][public][public][public] */ -enum __MIDL___MIDL_itf_devicetopology_0000_0000_0009 - { ePortConnJack = 0, - ePortConnIntegratedDevice = ( ePortConnJack + 1 ) , - ePortConnBothIntegratedAndJack = ( ePortConnIntegratedDevice + 1 ) , - ePortConnUnknown = ( ePortConnBothIntegratedAndJack + 1 ) - } EPxcPortConnection; - -typedef /* [public][public] */ struct __MIDL___MIDL_itf_devicetopology_0000_0000_0010 - { - EChannelMapping ChannelMapping; - COLORREF Color; - EPcxConnectionType ConnectionType; - EPcxGeoLocation GeoLocation; - EPcxGenLocation GenLocation; - EPxcPortConnection PortConnection; - BOOL IsConnected; - } KSJACK_DESCRIPTION; - -typedef struct __MIDL___MIDL_itf_devicetopology_0000_0000_0010 *PKSJACK_DESCRIPTION; - -typedef KSIDENTIFIER KSPROPERTY; - -typedef KSIDENTIFIER *PKSPROPERTY; - -typedef KSIDENTIFIER KSMETHOD; - -typedef KSIDENTIFIER *PKSMETHOD; - -typedef KSIDENTIFIER KSEVENT; - -typedef KSIDENTIFIER *PKSEVENT; - -#endif - - - - - - - - -typedef /* [public][public] */ -enum __MIDL___MIDL_itf_devicetopology_0000_0000_0011 - { In = 0, - Out = ( In + 1 ) - } DataFlow; - -typedef /* [public][public] */ -enum __MIDL___MIDL_itf_devicetopology_0000_0000_0012 - { Connector = 0, - Subunit = ( Connector + 1 ) - } PartType; - 
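/*
 * Editor's illustrative sketch -- NOT part of the MIDL-generated header.
 * It shows how a client might interpret the KSJACK_DESCRIPTION structure
 * defined above (as returned by IKsJackDescription::GetJackDescription,
 * declared later in this file). Helper names are hypothetical; only a few
 * representative EPcxConnectionType values are handled.
 */
#if 0  /* example only; kept out of compilation */
#include <stdio.h>

static const char *JackConnectionName(EPcxConnectionType t)
{
    switch (t) {
    case eConnTypeEighth:  return "3.5 mm (1/8\") jack";
    case eConnTypeQuarter: return "6.35 mm (1/4\") jack";
    case eConnTypeRCA:     return "RCA";
    case eConnTypeOptical: return "optical";
    default:               return "other/unknown";
    }
}

static void PrintJack(const KSJACK_DESCRIPTION *pJack)
{
    /* Color is a COLORREF (0x00BBGGRR); IsConnected reflects jack-presence
       detection where the hardware supports it. */
    printf("%s, %sconnected, color 0x%06lX\n",
           JackConnectionName(pJack->ConnectionType),
           pJack->IsConnected ? "" : "not ",
           (unsigned long)pJack->Color);
}
#endif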
-#define PARTTYPE_FLAG_CONNECTOR 0x00010000 -#define PARTTYPE_FLAG_SUBUNIT 0x00020000 -#define PARTTYPE_MASK 0x00030000 -#define PARTID_MASK 0x0000ffff -typedef /* [public][public] */ -enum __MIDL___MIDL_itf_devicetopology_0000_0000_0013 - { Unknown_Connector = 0, - Physical_Internal = ( Unknown_Connector + 1 ) , - Physical_External = ( Physical_Internal + 1 ) , - Software_IO = ( Physical_External + 1 ) , - Software_Fixed = ( Software_IO + 1 ) , - Network = ( Software_Fixed + 1 ) - } ConnectorType; - - - -extern RPC_IF_HANDLE __MIDL_itf_devicetopology_0000_0000_v0_0_c_ifspec; -extern RPC_IF_HANDLE __MIDL_itf_devicetopology_0000_0000_v0_0_s_ifspec; - -#ifndef __IKsControl_INTERFACE_DEFINED__ -#define __IKsControl_INTERFACE_DEFINED__ - -/* interface IKsControl */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IKsControl; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("28F54685-06FD-11D2-B27A-00A0C9223196") - IKsControl : public IUnknown - { - public: - virtual HRESULT STDMETHODCALLTYPE KsProperty( - /* [in] */ PKSPROPERTY Property, - /* [in] */ ULONG PropertyLength, - /* [out][in] */ void *PropertyData, - /* [in] */ ULONG DataLength, - /* [out] */ ULONG *BytesReturned) = 0; - - virtual HRESULT STDMETHODCALLTYPE KsMethod( - /* [in] */ PKSMETHOD Method, - /* [in] */ ULONG MethodLength, - /* [out][in] */ void *MethodData, - /* [in] */ ULONG DataLength, - /* [out] */ ULONG *BytesReturned) = 0; - - virtual HRESULT STDMETHODCALLTYPE KsEvent( - /* [in] */ PKSEVENT Event, - /* [in] */ ULONG EventLength, - /* [out][in] */ void *EventData, - /* [in] */ ULONG DataLength, - /* [out] */ ULONG *BytesReturned) = 0; - - }; - -#else /* C style interface */ - - typedef struct IKsControlVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IKsControl * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IKsControl * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IKsControl * This); - - HRESULT ( STDMETHODCALLTYPE *KsProperty )( - IKsControl * This, - /* [in] */ PKSPROPERTY Property, - /* [in] */ ULONG PropertyLength, - /* [out][in] */ void *PropertyData, - /* [in] */ ULONG DataLength, - /* [out] */ ULONG *BytesReturned); - - HRESULT ( STDMETHODCALLTYPE *KsMethod )( - IKsControl * This, - /* [in] */ PKSMETHOD Method, - /* [in] */ ULONG MethodLength, - /* [out][in] */ void *MethodData, - /* [in] */ ULONG DataLength, - /* [out] */ ULONG *BytesReturned); - - HRESULT ( STDMETHODCALLTYPE *KsEvent )( - IKsControl * This, - /* [in] */ PKSEVENT Event, - /* [in] */ ULONG EventLength, - /* [out][in] */ void *EventData, - /* [in] */ ULONG DataLength, - /* [out] */ ULONG *BytesReturned); - - END_INTERFACE - } IKsControlVtbl; - - interface IKsControl - { - CONST_VTBL struct IKsControlVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IKsControl_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IKsControl_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IKsControl_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IKsControl_KsProperty(This,Property,PropertyLength,PropertyData,DataLength,BytesReturned) \ - ( (This)->lpVtbl -> KsProperty(This,Property,PropertyLength,PropertyData,DataLength,BytesReturned) ) - -#define IKsControl_KsMethod(This,Method,MethodLength,MethodData,DataLength,BytesReturned) \ - ( (This)->lpVtbl -> 
KsMethod(This,Method,MethodLength,MethodData,DataLength,BytesReturned) ) - -#define IKsControl_KsEvent(This,Event,EventLength,EventData,DataLength,BytesReturned) \ - ( (This)->lpVtbl -> KsEvent(This,Event,EventLength,EventData,DataLength,BytesReturned) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IKsControl_INTERFACE_DEFINED__ */ - - -#ifndef __IPerChannelDbLevel_INTERFACE_DEFINED__ -#define __IPerChannelDbLevel_INTERFACE_DEFINED__ - -/* interface IPerChannelDbLevel */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IPerChannelDbLevel; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("C2F8E001-F205-4BC9-99BC-C13B1E048CCB") - IPerChannelDbLevel : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetChannelCount( - /* [out] */ - __out UINT *pcChannels) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetLevelRange( - /* [in] */ - __in UINT nChannel, - /* [out] */ - __out float *pfMinLevelDB, - /* [out] */ - __out float *pfMaxLevelDB, - /* [out] */ - __out float *pfStepping) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetLevel( - /* [in] */ - __in UINT nChannel, - /* [out] */ - __out float *pfLevelDB) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE SetLevel( - /* [in] */ - __in UINT nChannel, - /* [in] */ - __in float fLevelDB, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE SetLevelUniform( - /* [in] */ - __in float fLevelDB, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE SetLevelAllChannels( - /* [size_is][in] */ - __in_ecount(cChannels) float aLevelsDB[ ], - /* [in] */ - __in ULONG cChannels, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext) = 0; - - }; - -#else /* C style interface */ - - typedef struct IPerChannelDbLevelVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IPerChannelDbLevel * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IPerChannelDbLevel * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IPerChannelDbLevel * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetChannelCount )( - IPerChannelDbLevel * This, - /* [out] */ - __out UINT *pcChannels); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetLevelRange )( - IPerChannelDbLevel * This, - /* [in] */ - __in UINT nChannel, - /* [out] */ - __out float *pfMinLevelDB, - /* [out] */ - __out float *pfMaxLevelDB, - /* [out] */ - __out float *pfStepping); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetLevel )( - IPerChannelDbLevel * This, - /* [in] */ - __in UINT nChannel, - /* [out] */ - __out float *pfLevelDB); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevel )( - IPerChannelDbLevel * This, - /* [in] */ - __in UINT nChannel, - /* [in] */ - __in float fLevelDB, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevelUniform )( - IPerChannelDbLevel * This, - /* [in] */ - __in float fLevelDB, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevelAllChannels )( - IPerChannelDbLevel * This, - /* [size_is][in] */ - __in_ecount(cChannels) float aLevelsDB[ ], - /* [in] */ - 
__in ULONG cChannels, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - END_INTERFACE - } IPerChannelDbLevelVtbl; - - interface IPerChannelDbLevel - { - CONST_VTBL struct IPerChannelDbLevelVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IPerChannelDbLevel_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IPerChannelDbLevel_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IPerChannelDbLevel_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IPerChannelDbLevel_GetChannelCount(This,pcChannels) \ - ( (This)->lpVtbl -> GetChannelCount(This,pcChannels) ) - -#define IPerChannelDbLevel_GetLevelRange(This,nChannel,pfMinLevelDB,pfMaxLevelDB,pfStepping) \ - ( (This)->lpVtbl -> GetLevelRange(This,nChannel,pfMinLevelDB,pfMaxLevelDB,pfStepping) ) - -#define IPerChannelDbLevel_GetLevel(This,nChannel,pfLevelDB) \ - ( (This)->lpVtbl -> GetLevel(This,nChannel,pfLevelDB) ) - -#define IPerChannelDbLevel_SetLevel(This,nChannel,fLevelDB,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevel(This,nChannel,fLevelDB,pguidEventContext) ) - -#define IPerChannelDbLevel_SetLevelUniform(This,fLevelDB,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevelUniform(This,fLevelDB,pguidEventContext) ) - -#define IPerChannelDbLevel_SetLevelAllChannels(This,aLevelsDB,cChannels,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevelAllChannels(This,aLevelsDB,cChannels,pguidEventContext) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IPerChannelDbLevel_INTERFACE_DEFINED__ */ - - -#ifndef __IAudioVolumeLevel_INTERFACE_DEFINED__ -#define __IAudioVolumeLevel_INTERFACE_DEFINED__ - -/* interface IAudioVolumeLevel */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IAudioVolumeLevel; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("7FB7B48F-531D-44A2-BCB3-5AD5A134B3DC") - IAudioVolumeLevel : public IPerChannelDbLevel - { - public: - }; - -#else /* C style interface */ - - typedef struct IAudioVolumeLevelVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IAudioVolumeLevel * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IAudioVolumeLevel * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IAudioVolumeLevel * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetChannelCount )( - IAudioVolumeLevel * This, - /* [out] */ - __out UINT *pcChannels); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetLevelRange )( - IAudioVolumeLevel * This, - /* [in] */ - __in UINT nChannel, - /* [out] */ - __out float *pfMinLevelDB, - /* [out] */ - __out float *pfMaxLevelDB, - /* [out] */ - __out float *pfStepping); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetLevel )( - IAudioVolumeLevel * This, - /* [in] */ - __in UINT nChannel, - /* [out] */ - __out float *pfLevelDB); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevel )( - IAudioVolumeLevel * This, - /* [in] */ - __in UINT nChannel, - /* [in] */ - __in float fLevelDB, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevelUniform )( - IAudioVolumeLevel * This, - /* [in] */ - __in float fLevelDB, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevelAllChannels )( - IAudioVolumeLevel * This, - /* 
[size_is][in] */ - __in_ecount(cChannels) float aLevelsDB[ ], - /* [in] */ - __in ULONG cChannels, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - END_INTERFACE - } IAudioVolumeLevelVtbl; - - interface IAudioVolumeLevel - { - CONST_VTBL struct IAudioVolumeLevelVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IAudioVolumeLevel_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IAudioVolumeLevel_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IAudioVolumeLevel_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IAudioVolumeLevel_GetChannelCount(This,pcChannels) \ - ( (This)->lpVtbl -> GetChannelCount(This,pcChannels) ) - -#define IAudioVolumeLevel_GetLevelRange(This,nChannel,pfMinLevelDB,pfMaxLevelDB,pfStepping) \ - ( (This)->lpVtbl -> GetLevelRange(This,nChannel,pfMinLevelDB,pfMaxLevelDB,pfStepping) ) - -#define IAudioVolumeLevel_GetLevel(This,nChannel,pfLevelDB) \ - ( (This)->lpVtbl -> GetLevel(This,nChannel,pfLevelDB) ) - -#define IAudioVolumeLevel_SetLevel(This,nChannel,fLevelDB,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevel(This,nChannel,fLevelDB,pguidEventContext) ) - -#define IAudioVolumeLevel_SetLevelUniform(This,fLevelDB,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevelUniform(This,fLevelDB,pguidEventContext) ) - -#define IAudioVolumeLevel_SetLevelAllChannels(This,aLevelsDB,cChannels,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevelAllChannels(This,aLevelsDB,cChannels,pguidEventContext) ) - - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IAudioVolumeLevel_INTERFACE_DEFINED__ */ - - -#ifndef __IAudioChannelConfig_INTERFACE_DEFINED__ -#define __IAudioChannelConfig_INTERFACE_DEFINED__ - -/* interface IAudioChannelConfig */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IAudioChannelConfig; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("BB11C46F-EC28-493C-B88A-5DB88062CE98") - IAudioChannelConfig : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE SetChannelConfig( - /* [in] */ DWORD dwConfig, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetChannelConfig( - /* [retval][out] */ DWORD *pdwConfig) = 0; - - }; - -#else /* C style interface */ - - typedef struct IAudioChannelConfigVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IAudioChannelConfig * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IAudioChannelConfig * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IAudioChannelConfig * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetChannelConfig )( - IAudioChannelConfig * This, - /* [in] */ DWORD dwConfig, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetChannelConfig )( - IAudioChannelConfig * This, - /* [retval][out] */ DWORD *pdwConfig); - - END_INTERFACE - } IAudioChannelConfigVtbl; - - interface IAudioChannelConfig - { - CONST_VTBL struct IAudioChannelConfigVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IAudioChannelConfig_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IAudioChannelConfig_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define 
IAudioChannelConfig_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IAudioChannelConfig_SetChannelConfig(This,dwConfig,pguidEventContext) \ - ( (This)->lpVtbl -> SetChannelConfig(This,dwConfig,pguidEventContext) ) - -#define IAudioChannelConfig_GetChannelConfig(This,pdwConfig) \ - ( (This)->lpVtbl -> GetChannelConfig(This,pdwConfig) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IAudioChannelConfig_INTERFACE_DEFINED__ */ - - -#ifndef __IAudioLoudness_INTERFACE_DEFINED__ -#define __IAudioLoudness_INTERFACE_DEFINED__ - -/* interface IAudioLoudness */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IAudioLoudness; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("7D8B1437-DD53-4350-9C1B-1EE2890BD938") - IAudioLoudness : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetEnabled( - /* [out] */ - __out BOOL *pbEnabled) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE SetEnabled( - /* [in] */ - __in BOOL bEnable, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext) = 0; - - }; - -#else /* C style interface */ - - typedef struct IAudioLoudnessVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IAudioLoudness * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IAudioLoudness * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IAudioLoudness * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetEnabled )( - IAudioLoudness * This, - /* [out] */ - __out BOOL *pbEnabled); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetEnabled )( - IAudioLoudness * This, - /* [in] */ - __in BOOL bEnable, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - END_INTERFACE - } IAudioLoudnessVtbl; - - interface IAudioLoudness - { - CONST_VTBL struct IAudioLoudnessVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IAudioLoudness_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IAudioLoudness_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IAudioLoudness_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IAudioLoudness_GetEnabled(This,pbEnabled) \ - ( (This)->lpVtbl -> GetEnabled(This,pbEnabled) ) - -#define IAudioLoudness_SetEnabled(This,bEnable,pguidEventContext) \ - ( (This)->lpVtbl -> SetEnabled(This,bEnable,pguidEventContext) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IAudioLoudness_INTERFACE_DEFINED__ */ - - -#ifndef __IAudioInputSelector_INTERFACE_DEFINED__ -#define __IAudioInputSelector_INTERFACE_DEFINED__ - -/* interface IAudioInputSelector */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IAudioInputSelector; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("4F03DC02-5E6E-4653-8F72-A030C123D598") - IAudioInputSelector : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetSelection( - /* [out] */ - __out UINT *pnIdSelected) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE SetSelection( - /* [in] */ - __in UINT nIdSelect, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext) = 0; - - }; - -#else /* C style interface */ - - typedef struct IAudioInputSelectorVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE 
*QueryInterface )( - IAudioInputSelector * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IAudioInputSelector * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IAudioInputSelector * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetSelection )( - IAudioInputSelector * This, - /* [out] */ - __out UINT *pnIdSelected); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetSelection )( - IAudioInputSelector * This, - /* [in] */ - __in UINT nIdSelect, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - END_INTERFACE - } IAudioInputSelectorVtbl; - - interface IAudioInputSelector - { - CONST_VTBL struct IAudioInputSelectorVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IAudioInputSelector_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IAudioInputSelector_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IAudioInputSelector_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IAudioInputSelector_GetSelection(This,pnIdSelected) \ - ( (This)->lpVtbl -> GetSelection(This,pnIdSelected) ) - -#define IAudioInputSelector_SetSelection(This,nIdSelect,pguidEventContext) \ - ( (This)->lpVtbl -> SetSelection(This,nIdSelect,pguidEventContext) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IAudioInputSelector_INTERFACE_DEFINED__ */ - - -#ifndef __IAudioOutputSelector_INTERFACE_DEFINED__ -#define __IAudioOutputSelector_INTERFACE_DEFINED__ - -/* interface IAudioOutputSelector */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IAudioOutputSelector; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("BB515F69-94A7-429e-8B9C-271B3F11A3AB") - IAudioOutputSelector : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetSelection( - /* [out] */ - __out UINT *pnIdSelected) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE SetSelection( - /* [in] */ - __in UINT nIdSelect, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext) = 0; - - }; - -#else /* C style interface */ - - typedef struct IAudioOutputSelectorVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IAudioOutputSelector * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IAudioOutputSelector * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IAudioOutputSelector * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetSelection )( - IAudioOutputSelector * This, - /* [out] */ - __out UINT *pnIdSelected); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetSelection )( - IAudioOutputSelector * This, - /* [in] */ - __in UINT nIdSelect, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - END_INTERFACE - } IAudioOutputSelectorVtbl; - - interface IAudioOutputSelector - { - CONST_VTBL struct IAudioOutputSelectorVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IAudioOutputSelector_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IAudioOutputSelector_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IAudioOutputSelector_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IAudioOutputSelector_GetSelection(This,pnIdSelected) \ - ( (This)->lpVtbl -> 
GetSelection(This,pnIdSelected) ) - -#define IAudioOutputSelector_SetSelection(This,nIdSelect,pguidEventContext) \ - ( (This)->lpVtbl -> SetSelection(This,nIdSelect,pguidEventContext) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IAudioOutputSelector_INTERFACE_DEFINED__ */ - - -#ifndef __IAudioMute_INTERFACE_DEFINED__ -#define __IAudioMute_INTERFACE_DEFINED__ - -/* interface IAudioMute */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IAudioMute; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("DF45AEEA-B74A-4B6B-AFAD-2366B6AA012E") - IAudioMute : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE SetMute( - /* [in] */ - __in BOOL bMuted, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetMute( - /* [out] */ - __out BOOL *pbMuted) = 0; - - }; - -#else /* C style interface */ - - typedef struct IAudioMuteVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IAudioMute * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IAudioMute * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IAudioMute * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetMute )( - IAudioMute * This, - /* [in] */ - __in BOOL bMuted, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetMute )( - IAudioMute * This, - /* [out] */ - __out BOOL *pbMuted); - - END_INTERFACE - } IAudioMuteVtbl; - - interface IAudioMute - { - CONST_VTBL struct IAudioMuteVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IAudioMute_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IAudioMute_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IAudioMute_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IAudioMute_SetMute(This,bMuted,pguidEventContext) \ - ( (This)->lpVtbl -> SetMute(This,bMuted,pguidEventContext) ) - -#define IAudioMute_GetMute(This,pbMuted) \ - ( (This)->lpVtbl -> GetMute(This,pbMuted) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IAudioMute_INTERFACE_DEFINED__ */ - - -#ifndef __IAudioBass_INTERFACE_DEFINED__ -#define __IAudioBass_INTERFACE_DEFINED__ - -/* interface IAudioBass */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IAudioBass; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("A2B1A1D9-4DB3-425D-A2B2-BD335CB3E2E5") - IAudioBass : public IPerChannelDbLevel - { - public: - }; - -#else /* C style interface */ - - typedef struct IAudioBassVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IAudioBass * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IAudioBass * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IAudioBass * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetChannelCount )( - IAudioBass * This, - /* [out] */ - __out UINT *pcChannels); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetLevelRange )( - IAudioBass * This, - /* [in] */ - __in UINT nChannel, - /* [out] */ - __out float *pfMinLevelDB, - /* [out] */ - __out float *pfMaxLevelDB, - /* [out] */ - __out float 
*pfStepping); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetLevel )( - IAudioBass * This, - /* [in] */ - __in UINT nChannel, - /* [out] */ - __out float *pfLevelDB); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevel )( - IAudioBass * This, - /* [in] */ - __in UINT nChannel, - /* [in] */ - __in float fLevelDB, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevelUniform )( - IAudioBass * This, - /* [in] */ - __in float fLevelDB, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevelAllChannels )( - IAudioBass * This, - /* [size_is][in] */ - __in_ecount(cChannels) float aLevelsDB[ ], - /* [in] */ - __in ULONG cChannels, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - END_INTERFACE - } IAudioBassVtbl; - - interface IAudioBass - { - CONST_VTBL struct IAudioBassVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IAudioBass_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IAudioBass_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IAudioBass_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IAudioBass_GetChannelCount(This,pcChannels) \ - ( (This)->lpVtbl -> GetChannelCount(This,pcChannels) ) - -#define IAudioBass_GetLevelRange(This,nChannel,pfMinLevelDB,pfMaxLevelDB,pfStepping) \ - ( (This)->lpVtbl -> GetLevelRange(This,nChannel,pfMinLevelDB,pfMaxLevelDB,pfStepping) ) - -#define IAudioBass_GetLevel(This,nChannel,pfLevelDB) \ - ( (This)->lpVtbl -> GetLevel(This,nChannel,pfLevelDB) ) - -#define IAudioBass_SetLevel(This,nChannel,fLevelDB,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevel(This,nChannel,fLevelDB,pguidEventContext) ) - -#define IAudioBass_SetLevelUniform(This,fLevelDB,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevelUniform(This,fLevelDB,pguidEventContext) ) - -#define IAudioBass_SetLevelAllChannels(This,aLevelsDB,cChannels,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevelAllChannels(This,aLevelsDB,cChannels,pguidEventContext) ) - - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IAudioBass_INTERFACE_DEFINED__ */ - - -#ifndef __IAudioMidrange_INTERFACE_DEFINED__ -#define __IAudioMidrange_INTERFACE_DEFINED__ - -/* interface IAudioMidrange */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IAudioMidrange; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("5E54B6D7-B44B-40D9-9A9E-E691D9CE6EDF") - IAudioMidrange : public IPerChannelDbLevel - { - public: - }; - -#else /* C style interface */ - - typedef struct IAudioMidrangeVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IAudioMidrange * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IAudioMidrange * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IAudioMidrange * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetChannelCount )( - IAudioMidrange * This, - /* [out] */ - __out UINT *pcChannels); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetLevelRange )( - IAudioMidrange * This, - /* [in] */ - __in UINT nChannel, - /* [out] */ - __out float *pfMinLevelDB, - /* [out] */ - __out float *pfMaxLevelDB, - /* [out] */ - __out float *pfStepping); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetLevel )( - 
IAudioMidrange * This, - /* [in] */ - __in UINT nChannel, - /* [out] */ - __out float *pfLevelDB); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevel )( - IAudioMidrange * This, - /* [in] */ - __in UINT nChannel, - /* [in] */ - __in float fLevelDB, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevelUniform )( - IAudioMidrange * This, - /* [in] */ - __in float fLevelDB, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevelAllChannels )( - IAudioMidrange * This, - /* [size_is][in] */ - __in_ecount(cChannels) float aLevelsDB[ ], - /* [in] */ - __in ULONG cChannels, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - END_INTERFACE - } IAudioMidrangeVtbl; - - interface IAudioMidrange - { - CONST_VTBL struct IAudioMidrangeVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IAudioMidrange_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IAudioMidrange_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IAudioMidrange_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IAudioMidrange_GetChannelCount(This,pcChannels) \ - ( (This)->lpVtbl -> GetChannelCount(This,pcChannels) ) - -#define IAudioMidrange_GetLevelRange(This,nChannel,pfMinLevelDB,pfMaxLevelDB,pfStepping) \ - ( (This)->lpVtbl -> GetLevelRange(This,nChannel,pfMinLevelDB,pfMaxLevelDB,pfStepping) ) - -#define IAudioMidrange_GetLevel(This,nChannel,pfLevelDB) \ - ( (This)->lpVtbl -> GetLevel(This,nChannel,pfLevelDB) ) - -#define IAudioMidrange_SetLevel(This,nChannel,fLevelDB,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevel(This,nChannel,fLevelDB,pguidEventContext) ) - -#define IAudioMidrange_SetLevelUniform(This,fLevelDB,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevelUniform(This,fLevelDB,pguidEventContext) ) - -#define IAudioMidrange_SetLevelAllChannels(This,aLevelsDB,cChannels,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevelAllChannels(This,aLevelsDB,cChannels,pguidEventContext) ) - - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IAudioMidrange_INTERFACE_DEFINED__ */ - - -#ifndef __IAudioTreble_INTERFACE_DEFINED__ -#define __IAudioTreble_INTERFACE_DEFINED__ - -/* interface IAudioTreble */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IAudioTreble; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("0A717812-694E-4907-B74B-BAFA5CFDCA7B") - IAudioTreble : public IPerChannelDbLevel - { - public: - }; - -#else /* C style interface */ - - typedef struct IAudioTrebleVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IAudioTreble * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IAudioTreble * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IAudioTreble * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetChannelCount )( - IAudioTreble * This, - /* [out] */ - __out UINT *pcChannels); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetLevelRange )( - IAudioTreble * This, - /* [in] */ - __in UINT nChannel, - /* [out] */ - __out float *pfMinLevelDB, - /* [out] */ - __out float *pfMaxLevelDB, - /* [out] */ - __out float *pfStepping); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetLevel )( - IAudioTreble * This, - /* [in] */ - __in UINT 
nChannel, - /* [out] */ - __out float *pfLevelDB); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevel )( - IAudioTreble * This, - /* [in] */ - __in UINT nChannel, - /* [in] */ - __in float fLevelDB, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevelUniform )( - IAudioTreble * This, - /* [in] */ - __in float fLevelDB, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetLevelAllChannels )( - IAudioTreble * This, - /* [size_is][in] */ - __in_ecount(cChannels) float aLevelsDB[ ], - /* [in] */ - __in ULONG cChannels, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - END_INTERFACE - } IAudioTrebleVtbl; - - interface IAudioTreble - { - CONST_VTBL struct IAudioTrebleVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IAudioTreble_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IAudioTreble_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IAudioTreble_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IAudioTreble_GetChannelCount(This,pcChannels) \ - ( (This)->lpVtbl -> GetChannelCount(This,pcChannels) ) - -#define IAudioTreble_GetLevelRange(This,nChannel,pfMinLevelDB,pfMaxLevelDB,pfStepping) \ - ( (This)->lpVtbl -> GetLevelRange(This,nChannel,pfMinLevelDB,pfMaxLevelDB,pfStepping) ) - -#define IAudioTreble_GetLevel(This,nChannel,pfLevelDB) \ - ( (This)->lpVtbl -> GetLevel(This,nChannel,pfLevelDB) ) - -#define IAudioTreble_SetLevel(This,nChannel,fLevelDB,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevel(This,nChannel,fLevelDB,pguidEventContext) ) - -#define IAudioTreble_SetLevelUniform(This,fLevelDB,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevelUniform(This,fLevelDB,pguidEventContext) ) - -#define IAudioTreble_SetLevelAllChannels(This,aLevelsDB,cChannels,pguidEventContext) \ - ( (This)->lpVtbl -> SetLevelAllChannels(This,aLevelsDB,cChannels,pguidEventContext) ) - - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IAudioTreble_INTERFACE_DEFINED__ */ - - -#ifndef __IAudioAutoGainControl_INTERFACE_DEFINED__ -#define __IAudioAutoGainControl_INTERFACE_DEFINED__ - -/* interface IAudioAutoGainControl */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IAudioAutoGainControl; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("85401FD4-6DE4-4b9d-9869-2D6753A82F3C") - IAudioAutoGainControl : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetEnabled( - /* [out] */ - __out BOOL *pbEnabled) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE SetEnabled( - /* [in] */ - __in BOOL bEnable, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext) = 0; - - }; - -#else /* C style interface */ - - typedef struct IAudioAutoGainControlVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IAudioAutoGainControl * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IAudioAutoGainControl * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IAudioAutoGainControl * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetEnabled )( - IAudioAutoGainControl * This, - /* [out] */ - __out BOOL *pbEnabled); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetEnabled )( - 
IAudioAutoGainControl * This, - /* [in] */ - __in BOOL bEnable, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - END_INTERFACE - } IAudioAutoGainControlVtbl; - - interface IAudioAutoGainControl - { - CONST_VTBL struct IAudioAutoGainControlVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IAudioAutoGainControl_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IAudioAutoGainControl_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IAudioAutoGainControl_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IAudioAutoGainControl_GetEnabled(This,pbEnabled) \ - ( (This)->lpVtbl -> GetEnabled(This,pbEnabled) ) - -#define IAudioAutoGainControl_SetEnabled(This,bEnable,pguidEventContext) \ - ( (This)->lpVtbl -> SetEnabled(This,bEnable,pguidEventContext) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IAudioAutoGainControl_INTERFACE_DEFINED__ */ - - -#ifndef __IAudioPeakMeter_INTERFACE_DEFINED__ -#define __IAudioPeakMeter_INTERFACE_DEFINED__ - -/* interface IAudioPeakMeter */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IAudioPeakMeter; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("DD79923C-0599-45e0-B8B6-C8DF7DB6E796") - IAudioPeakMeter : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetChannelCount( - /* [out] */ - __out UINT *pcChannels) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetLevel( - /* [in] */ - __in UINT nChannel, - /* [out] */ - __out float *pfLevel) = 0; - - }; - -#else /* C style interface */ - - typedef struct IAudioPeakMeterVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IAudioPeakMeter * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IAudioPeakMeter * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IAudioPeakMeter * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetChannelCount )( - IAudioPeakMeter * This, - /* [out] */ - __out UINT *pcChannels); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetLevel )( - IAudioPeakMeter * This, - /* [in] */ - __in UINT nChannel, - /* [out] */ - __out float *pfLevel); - - END_INTERFACE - } IAudioPeakMeterVtbl; - - interface IAudioPeakMeter - { - CONST_VTBL struct IAudioPeakMeterVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IAudioPeakMeter_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IAudioPeakMeter_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IAudioPeakMeter_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IAudioPeakMeter_GetChannelCount(This,pcChannels) \ - ( (This)->lpVtbl -> GetChannelCount(This,pcChannels) ) - -#define IAudioPeakMeter_GetLevel(This,nChannel,pfLevel) \ - ( (This)->lpVtbl -> GetLevel(This,nChannel,pfLevel) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IAudioPeakMeter_INTERFACE_DEFINED__ */ - - -#ifndef __IDeviceSpecificProperty_INTERFACE_DEFINED__ -#define __IDeviceSpecificProperty_INTERFACE_DEFINED__ - -/* interface IDeviceSpecificProperty */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IDeviceSpecificProperty; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - 
MIDL_INTERFACE("3B22BCBF-2586-4af0-8583-205D391B807C") - IDeviceSpecificProperty : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetType( - /* [out] */ - __deref_out VARTYPE *pVType) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetValue( - /* [out] */ - __out void *pvValue, - /* [out][in] */ - __inout DWORD *pcbValue) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE SetValue( - /* [in] */ - __in void *pvValue, - /* [in] */ DWORD cbValue, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE Get4BRange( - /* [out] */ - __deref_out LONG *plMin, - /* [out] */ - __deref_out LONG *plMax, - /* [out] */ - __deref_out LONG *plStepping) = 0; - - }; - -#else /* C style interface */ - - typedef struct IDeviceSpecificPropertyVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IDeviceSpecificProperty * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IDeviceSpecificProperty * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IDeviceSpecificProperty * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetType )( - IDeviceSpecificProperty * This, - /* [out] */ - __deref_out VARTYPE *pVType); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetValue )( - IDeviceSpecificProperty * This, - /* [out] */ - __out void *pvValue, - /* [out][in] */ - __inout DWORD *pcbValue); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *SetValue )( - IDeviceSpecificProperty * This, - /* [in] */ - __in void *pvValue, - /* [in] */ DWORD cbValue, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *Get4BRange )( - IDeviceSpecificProperty * This, - /* [out] */ - __deref_out LONG *plMin, - /* [out] */ - __deref_out LONG *plMax, - /* [out] */ - __deref_out LONG *plStepping); - - END_INTERFACE - } IDeviceSpecificPropertyVtbl; - - interface IDeviceSpecificProperty - { - CONST_VTBL struct IDeviceSpecificPropertyVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IDeviceSpecificProperty_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IDeviceSpecificProperty_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IDeviceSpecificProperty_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IDeviceSpecificProperty_GetType(This,pVType) \ - ( (This)->lpVtbl -> GetType(This,pVType) ) - -#define IDeviceSpecificProperty_GetValue(This,pvValue,pcbValue) \ - ( (This)->lpVtbl -> GetValue(This,pvValue,pcbValue) ) - -#define IDeviceSpecificProperty_SetValue(This,pvValue,cbValue,pguidEventContext) \ - ( (This)->lpVtbl -> SetValue(This,pvValue,cbValue,pguidEventContext) ) - -#define IDeviceSpecificProperty_Get4BRange(This,plMin,plMax,plStepping) \ - ( (This)->lpVtbl -> Get4BRange(This,plMin,plMax,plStepping) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IDeviceSpecificProperty_INTERFACE_DEFINED__ */ - - -#ifndef __IKsFormatSupport_INTERFACE_DEFINED__ -#define __IKsFormatSupport_INTERFACE_DEFINED__ - -/* interface IKsFormatSupport */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IKsFormatSupport; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("3CB4A69D-BB6F-4D2B-95B7-452D2C155DB5") - IKsFormatSupport : public 
IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE IsFormatSupported( - /* [size_is][in] */ PKSDATAFORMAT pKsFormat, - /* [in] */ - __in DWORD cbFormat, - /* [out] */ - __out BOOL *pbSupported) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetDevicePreferredFormat( - /* [out] */ PKSDATAFORMAT *ppKsFormat) = 0; - - }; - -#else /* C style interface */ - - typedef struct IKsFormatSupportVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IKsFormatSupport * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IKsFormatSupport * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IKsFormatSupport * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *IsFormatSupported )( - IKsFormatSupport * This, - /* [size_is][in] */ PKSDATAFORMAT pKsFormat, - /* [in] */ - __in DWORD cbFormat, - /* [out] */ - __out BOOL *pbSupported); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetDevicePreferredFormat )( - IKsFormatSupport * This, - /* [out] */ PKSDATAFORMAT *ppKsFormat); - - END_INTERFACE - } IKsFormatSupportVtbl; - - interface IKsFormatSupport - { - CONST_VTBL struct IKsFormatSupportVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IKsFormatSupport_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IKsFormatSupport_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IKsFormatSupport_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IKsFormatSupport_IsFormatSupported(This,pKsFormat,cbFormat,pbSupported) \ - ( (This)->lpVtbl -> IsFormatSupported(This,pKsFormat,cbFormat,pbSupported) ) - -#define IKsFormatSupport_GetDevicePreferredFormat(This,ppKsFormat) \ - ( (This)->lpVtbl -> GetDevicePreferredFormat(This,ppKsFormat) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IKsFormatSupport_INTERFACE_DEFINED__ */ - - -#ifndef __IKsJackDescription_INTERFACE_DEFINED__ -#define __IKsJackDescription_INTERFACE_DEFINED__ - -/* interface IKsJackDescription */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IKsJackDescription; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("4509F757-2D46-4637-8E62-CE7DB944F57B") - IKsJackDescription : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetJackCount( - /* [out] */ - __out UINT *pcJacks) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetJackDescription( - /* [in] */ UINT nJack, - /* [out] */ - __out KSJACK_DESCRIPTION *pDescription) = 0; - - }; - -#else /* C style interface */ - - typedef struct IKsJackDescriptionVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IKsJackDescription * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IKsJackDescription * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IKsJackDescription * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetJackCount )( - IKsJackDescription * This, - /* [out] */ - __out UINT *pcJacks); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetJackDescription )( - IKsJackDescription * This, - /* [in] */ UINT nJack, - /* [out] */ - __out KSJACK_DESCRIPTION *pDescription); - - END_INTERFACE - } IKsJackDescriptionVtbl; - - interface IKsJackDescription - { - 
CONST_VTBL struct IKsJackDescriptionVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IKsJackDescription_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IKsJackDescription_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IKsJackDescription_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IKsJackDescription_GetJackCount(This,pcJacks) \ - ( (This)->lpVtbl -> GetJackCount(This,pcJacks) ) - -#define IKsJackDescription_GetJackDescription(This,nJack,pDescription) \ - ( (This)->lpVtbl -> GetJackDescription(This,nJack,pDescription) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IKsJackDescription_INTERFACE_DEFINED__ */ - - -#ifndef __IPartsList_INTERFACE_DEFINED__ -#define __IPartsList_INTERFACE_DEFINED__ - -/* interface IPartsList */ -/* [object][unique][helpstring][uuid][local] */ - - -EXTERN_C const IID IID_IPartsList; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("6DAA848C-5EB0-45CC-AEA5-998A2CDA1FFB") - IPartsList : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetCount( - /* [out] */ - __out UINT *pCount) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetPart( - /* [in] */ - __in UINT nIndex, - /* [out] */ - __out IPart **ppPart) = 0; - - }; - -#else /* C style interface */ - - typedef struct IPartsListVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IPartsList * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IPartsList * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IPartsList * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetCount )( - IPartsList * This, - /* [out] */ - __out UINT *pCount); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetPart )( - IPartsList * This, - /* [in] */ - __in UINT nIndex, - /* [out] */ - __out IPart **ppPart); - - END_INTERFACE - } IPartsListVtbl; - - interface IPartsList - { - CONST_VTBL struct IPartsListVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IPartsList_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IPartsList_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IPartsList_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IPartsList_GetCount(This,pCount) \ - ( (This)->lpVtbl -> GetCount(This,pCount) ) - -#define IPartsList_GetPart(This,nIndex,ppPart) \ - ( (This)->lpVtbl -> GetPart(This,nIndex,ppPart) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IPartsList_INTERFACE_DEFINED__ */ - - -#ifndef __IPart_INTERFACE_DEFINED__ -#define __IPart_INTERFACE_DEFINED__ - -/* interface IPart */ -/* [object][unique][helpstring][uuid][local] */ - - -EXTERN_C const IID IID_IPart; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("AE2DE0E4-5BCA-4F2D-AA46-5D13F8FDB3A9") - IPart : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetName( - /* [out] */ - __deref_out LPWSTR *ppwstrName) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetLocalId( - /* [out] */ - __out UINT *pnId) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetGlobalId( - /* [out] */ - __deref_out LPWSTR *ppwstrGlobalId) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetPartType( - /* 
[out] */ - __out PartType *pPartType) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetSubType( - /* [out] */ GUID *pSubType) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetControlInterfaceCount( - /* [out] */ - __out UINT *pCount) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetControlInterface( - /* [in] */ - __in UINT nIndex, - /* [out] */ - __out IControlInterface **ppInterfaceDesc) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE EnumPartsIncoming( - /* [out] */ - __out IPartsList **ppParts) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE EnumPartsOutgoing( - /* [out] */ - __out IPartsList **ppParts) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetTopologyObject( - /* [out] */ - __out IDeviceTopology **ppTopology) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE Activate( - /* [in] */ - __in DWORD dwClsContext, - /* [in] */ - __in REFIID refiid, - /* [iid_is][out] */ - __out_opt void **ppvObject) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE RegisterControlChangeCallback( - /* [in] */ - __in REFGUID riid, - /* [in] */ - __in IControlChangeNotify *pNotify) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE UnregisterControlChangeCallback( - /* [in] */ - __in IControlChangeNotify *pNotify) = 0; - - }; - -#else /* C style interface */ - - typedef struct IPartVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IPart * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IPart * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IPart * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetName )( - IPart * This, - /* [out] */ - __deref_out LPWSTR *ppwstrName); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetLocalId )( - IPart * This, - /* [out] */ - __out UINT *pnId); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetGlobalId )( - IPart * This, - /* [out] */ - __deref_out LPWSTR *ppwstrGlobalId); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetPartType )( - IPart * This, - /* [out] */ - __out PartType *pPartType); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetSubType )( - IPart * This, - /* [out] */ GUID *pSubType); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetControlInterfaceCount )( - IPart * This, - /* [out] */ - __out UINT *pCount); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetControlInterface )( - IPart * This, - /* [in] */ - __in UINT nIndex, - /* [out] */ - __out IControlInterface **ppInterfaceDesc); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *EnumPartsIncoming )( - IPart * This, - /* [out] */ - __out IPartsList **ppParts); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *EnumPartsOutgoing )( - IPart * This, - /* [out] */ - __out IPartsList **ppParts); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetTopologyObject )( - IPart * This, - /* [out] */ - __out IDeviceTopology **ppTopology); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *Activate )( - IPart * This, - /* [in] */ - __in DWORD dwClsContext, - /* [in] */ - __in REFIID refiid, - /* [iid_is][out] */ - __out_opt void **ppvObject); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *RegisterControlChangeCallback )( - IPart * This, - /* [in] */ - __in REFGUID riid, - /* [in] */ - __in IControlChangeNotify *pNotify); - - /* 
[helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *UnregisterControlChangeCallback )( - IPart * This, - /* [in] */ - __in IControlChangeNotify *pNotify); - - END_INTERFACE - } IPartVtbl; - - interface IPart - { - CONST_VTBL struct IPartVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IPart_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IPart_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IPart_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IPart_GetName(This,ppwstrName) \ - ( (This)->lpVtbl -> GetName(This,ppwstrName) ) - -#define IPart_GetLocalId(This,pnId) \ - ( (This)->lpVtbl -> GetLocalId(This,pnId) ) - -#define IPart_GetGlobalId(This,ppwstrGlobalId) \ - ( (This)->lpVtbl -> GetGlobalId(This,ppwstrGlobalId) ) - -#define IPart_GetPartType(This,pPartType) \ - ( (This)->lpVtbl -> GetPartType(This,pPartType) ) - -#define IPart_GetSubType(This,pSubType) \ - ( (This)->lpVtbl -> GetSubType(This,pSubType) ) - -#define IPart_GetControlInterfaceCount(This,pCount) \ - ( (This)->lpVtbl -> GetControlInterfaceCount(This,pCount) ) - -#define IPart_GetControlInterface(This,nIndex,ppInterfaceDesc) \ - ( (This)->lpVtbl -> GetControlInterface(This,nIndex,ppInterfaceDesc) ) - -#define IPart_EnumPartsIncoming(This,ppParts) \ - ( (This)->lpVtbl -> EnumPartsIncoming(This,ppParts) ) - -#define IPart_EnumPartsOutgoing(This,ppParts) \ - ( (This)->lpVtbl -> EnumPartsOutgoing(This,ppParts) ) - -#define IPart_GetTopologyObject(This,ppTopology) \ - ( (This)->lpVtbl -> GetTopologyObject(This,ppTopology) ) - -#define IPart_Activate(This,dwClsContext,refiid,ppvObject) \ - ( (This)->lpVtbl -> Activate(This,dwClsContext,refiid,ppvObject) ) - -#define IPart_RegisterControlChangeCallback(This,riid,pNotify) \ - ( (This)->lpVtbl -> RegisterControlChangeCallback(This,riid,pNotify) ) - -#define IPart_UnregisterControlChangeCallback(This,pNotify) \ - ( (This)->lpVtbl -> UnregisterControlChangeCallback(This,pNotify) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IPart_INTERFACE_DEFINED__ */ - - -#ifndef __IConnector_INTERFACE_DEFINED__ -#define __IConnector_INTERFACE_DEFINED__ - -/* interface IConnector */ -/* [object][unique][helpstring][uuid][local] */ - - -EXTERN_C const IID IID_IConnector; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("9c2c4058-23f5-41de-877a-df3af236a09e") - IConnector : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetType( - /* [out] */ - __out ConnectorType *pType) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetDataFlow( - /* [out] */ - __out DataFlow *pFlow) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE ConnectTo( - /* [in] */ - __in IConnector *pConnectTo) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE Disconnect( void) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE IsConnected( - /* [out] */ - __out BOOL *pbConnected) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetConnectedTo( - /* [out] */ - __out IConnector **ppConTo) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetConnectorIdConnectedTo( - /* [out] */ - __deref_out LPWSTR *ppwstrConnectorId) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetDeviceIdConnectedTo( - /* [out] */ - __deref_out LPWSTR *ppwstrDeviceId) = 0; - - }; - -#else /* C style interface */ - - typedef struct IConnectorVtbl - { - 
BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IConnector * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IConnector * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IConnector * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetType )( - IConnector * This, - /* [out] */ - __out ConnectorType *pType); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetDataFlow )( - IConnector * This, - /* [out] */ - __out DataFlow *pFlow); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *ConnectTo )( - IConnector * This, - /* [in] */ - __in IConnector *pConnectTo); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *Disconnect )( - IConnector * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *IsConnected )( - IConnector * This, - /* [out] */ - __out BOOL *pbConnected); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetConnectedTo )( - IConnector * This, - /* [out] */ - __out IConnector **ppConTo); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetConnectorIdConnectedTo )( - IConnector * This, - /* [out] */ - __deref_out LPWSTR *ppwstrConnectorId); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetDeviceIdConnectedTo )( - IConnector * This, - /* [out] */ - __deref_out LPWSTR *ppwstrDeviceId); - - END_INTERFACE - } IConnectorVtbl; - - interface IConnector - { - CONST_VTBL struct IConnectorVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IConnector_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IConnector_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IConnector_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IConnector_GetType(This,pType) \ - ( (This)->lpVtbl -> GetType(This,pType) ) - -#define IConnector_GetDataFlow(This,pFlow) \ - ( (This)->lpVtbl -> GetDataFlow(This,pFlow) ) - -#define IConnector_ConnectTo(This,pConnectTo) \ - ( (This)->lpVtbl -> ConnectTo(This,pConnectTo) ) - -#define IConnector_Disconnect(This) \ - ( (This)->lpVtbl -> Disconnect(This) ) - -#define IConnector_IsConnected(This,pbConnected) \ - ( (This)->lpVtbl -> IsConnected(This,pbConnected) ) - -#define IConnector_GetConnectedTo(This,ppConTo) \ - ( (This)->lpVtbl -> GetConnectedTo(This,ppConTo) ) - -#define IConnector_GetConnectorIdConnectedTo(This,ppwstrConnectorId) \ - ( (This)->lpVtbl -> GetConnectorIdConnectedTo(This,ppwstrConnectorId) ) - -#define IConnector_GetDeviceIdConnectedTo(This,ppwstrDeviceId) \ - ( (This)->lpVtbl -> GetDeviceIdConnectedTo(This,ppwstrDeviceId) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IConnector_INTERFACE_DEFINED__ */ - - -#ifndef __ISubunit_INTERFACE_DEFINED__ -#define __ISubunit_INTERFACE_DEFINED__ - -/* interface ISubunit */ -/* [object][unique][helpstring][uuid][local] */ - - -EXTERN_C const IID IID_ISubunit; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("82149A85-DBA6-4487-86BB-EA8F7FEFCC71") - ISubunit : public IUnknown - { - public: - }; - -#else /* C style interface */ - - typedef struct ISubunitVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - ISubunit * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - ISubunit * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - ISubunit * This); - - END_INTERFACE - } ISubunitVtbl; - - 
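- /* ISubunit declares no methods of its own beyond IUnknown (its vtbl above holds only QueryInterface/AddRef/Release); it merely marks a topology part as a subunit, and the part's functionality is reached through IPart and its control interfaces. A minimal C++ usage sketch for this header (error handling omitted; pDevice is assumed to be an IMMDevice* obtained from the MMDevice API): IDeviceTopology *pTopology = NULL; pDevice->Activate(__uuidof(IDeviceTopology), CLSCTX_ALL, NULL, (void**)&pTopology); UINT cSubunits = 0; pTopology->GetSubunitCount(&cSubunits); pTopology->Release(); (IDeviceTopology itself is defined later in this header.) */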
interface ISubunit - { - CONST_VTBL struct ISubunitVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define ISubunit_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define ISubunit_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define ISubunit_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __ISubunit_INTERFACE_DEFINED__ */ - - -#ifndef __IControlInterface_INTERFACE_DEFINED__ -#define __IControlInterface_INTERFACE_DEFINED__ - -/* interface IControlInterface */ -/* [object][unique][helpstring][uuid][local] */ - - -EXTERN_C const IID IID_IControlInterface; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("45d37c3f-5140-444a-ae24-400789f3cbf3") - IControlInterface : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetName( - /* [out] */ - __deref_out LPWSTR *ppwstrName) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetIID( - /* [out] */ - __out GUID *pIID) = 0; - - }; - -#else /* C style interface */ - - typedef struct IControlInterfaceVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IControlInterface * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IControlInterface * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IControlInterface * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetName )( - IControlInterface * This, - /* [out] */ - __deref_out LPWSTR *ppwstrName); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetIID )( - IControlInterface * This, - /* [out] */ - __out GUID *pIID); - - END_INTERFACE - } IControlInterfaceVtbl; - - interface IControlInterface - { - CONST_VTBL struct IControlInterfaceVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IControlInterface_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IControlInterface_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IControlInterface_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IControlInterface_GetName(This,ppwstrName) \ - ( (This)->lpVtbl -> GetName(This,ppwstrName) ) - -#define IControlInterface_GetIID(This,pIID) \ - ( (This)->lpVtbl -> GetIID(This,pIID) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IControlInterface_INTERFACE_DEFINED__ */ - - -#ifndef __IControlChangeNotify_INTERFACE_DEFINED__ -#define __IControlChangeNotify_INTERFACE_DEFINED__ - -/* interface IControlChangeNotify */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IControlChangeNotify; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("A09513ED-C709-4d21-BD7B-5F34C47F3947") - IControlChangeNotify : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE OnNotify( - /* [in] */ - __in DWORD dwSenderProcessId, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext) = 0; - - }; - -#else /* C style interface */ - - typedef struct IControlChangeNotifyVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IControlChangeNotify * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IControlChangeNotify * This); - - ULONG ( STDMETHODCALLTYPE 
*Release )( - IControlChangeNotify * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *OnNotify )( - IControlChangeNotify * This, - /* [in] */ - __in DWORD dwSenderProcessId, - /* [unique][in] */ - __in_opt LPCGUID pguidEventContext); - - END_INTERFACE - } IControlChangeNotifyVtbl; - - interface IControlChangeNotify - { - CONST_VTBL struct IControlChangeNotifyVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IControlChangeNotify_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IControlChangeNotify_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IControlChangeNotify_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IControlChangeNotify_OnNotify(This,dwSenderProcessId,pguidEventContext) \ - ( (This)->lpVtbl -> OnNotify(This,dwSenderProcessId,pguidEventContext) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IControlChangeNotify_INTERFACE_DEFINED__ */ - - -#ifndef __IDeviceTopology_INTERFACE_DEFINED__ -#define __IDeviceTopology_INTERFACE_DEFINED__ - -/* interface IDeviceTopology */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IDeviceTopology; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("2A07407E-6497-4A18-9787-32F79BD0D98F") - IDeviceTopology : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetConnectorCount( - /* [out] */ - __out UINT *pCount) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetConnector( - /* [in] */ - __in UINT nIndex, - /* [out] */ - __out IConnector **ppConnector) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetSubunitCount( - /* [out] */ - __out UINT *pCount) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetSubunit( - /* [in] */ - __in UINT nIndex, - /* [out] */ - __deref_out ISubunit **ppSubunit) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetPartById( - /* [in] */ - __in UINT nId, - /* [out] */ - __deref_out IPart **ppPart) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetDeviceId( - /* [out] */ - __deref_out LPWSTR *ppwstrDeviceId) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetSignalPath( - /* [in] */ - __in IPart *pIPartFrom, - /* [in] */ - __in IPart *pIPartTo, - /* [in] */ - __in BOOL bRejectMixedPaths, - /* [out] */ - __deref_out IPartsList **ppParts) = 0; - - }; - -#else /* C style interface */ - - typedef struct IDeviceTopologyVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IDeviceTopology * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IDeviceTopology * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IDeviceTopology * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetConnectorCount )( - IDeviceTopology * This, - /* [out] */ - __out UINT *pCount); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetConnector )( - IDeviceTopology * This, - /* [in] */ - __in UINT nIndex, - /* [out] */ - __out IConnector **ppConnector); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetSubunitCount )( - IDeviceTopology * This, - /* [out] */ - __out UINT *pCount); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetSubunit )( - IDeviceTopology * This, - /* [in] */ - __in UINT nIndex, - /* [out] */ - __deref_out ISubunit **ppSubunit); - - /* 
[helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetPartById )( - IDeviceTopology * This, - /* [in] */ - __in UINT nId, - /* [out] */ - __deref_out IPart **ppPart); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetDeviceId )( - IDeviceTopology * This, - /* [out] */ - __deref_out LPWSTR *ppwstrDeviceId); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetSignalPath )( - IDeviceTopology * This, - /* [in] */ - __in IPart *pIPartFrom, - /* [in] */ - __in IPart *pIPartTo, - /* [in] */ - __in BOOL bRejectMixedPaths, - /* [out] */ - __deref_out IPartsList **ppParts); - - END_INTERFACE - } IDeviceTopologyVtbl; - - interface IDeviceTopology - { - CONST_VTBL struct IDeviceTopologyVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IDeviceTopology_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IDeviceTopology_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IDeviceTopology_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IDeviceTopology_GetConnectorCount(This,pCount) \ - ( (This)->lpVtbl -> GetConnectorCount(This,pCount) ) - -#define IDeviceTopology_GetConnector(This,nIndex,ppConnector) \ - ( (This)->lpVtbl -> GetConnector(This,nIndex,ppConnector) ) - -#define IDeviceTopology_GetSubunitCount(This,pCount) \ - ( (This)->lpVtbl -> GetSubunitCount(This,pCount) ) - -#define IDeviceTopology_GetSubunit(This,nIndex,ppSubunit) \ - ( (This)->lpVtbl -> GetSubunit(This,nIndex,ppSubunit) ) - -#define IDeviceTopology_GetPartById(This,nId,ppPart) \ - ( (This)->lpVtbl -> GetPartById(This,nId,ppPart) ) - -#define IDeviceTopology_GetDeviceId(This,ppwstrDeviceId) \ - ( (This)->lpVtbl -> GetDeviceId(This,ppwstrDeviceId) ) - -#define IDeviceTopology_GetSignalPath(This,pIPartFrom,pIPartTo,bRejectMixedPaths,ppParts) \ - ( (This)->lpVtbl -> GetSignalPath(This,pIPartFrom,pIPartTo,bRejectMixedPaths,ppParts) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IDeviceTopology_INTERFACE_DEFINED__ */ - - - -#ifndef __DevTopologyLib_LIBRARY_DEFINED__ -#define __DevTopologyLib_LIBRARY_DEFINED__ - -/* library DevTopologyLib */ -/* [helpstring][version][uuid] */ - - - - - - - - - - - - - - - - -EXTERN_C const IID LIBID_DevTopologyLib; - -EXTERN_C const CLSID CLSID_DeviceTopology; - -#ifdef __cplusplus - -class DECLSPEC_UUID("1DF639D0-5EC1-47AA-9379-828DC1AA8C59") -DeviceTopology; -#endif -#endif /* __DevTopologyLib_LIBRARY_DEFINED__ */ - -/* Additional Prototypes for ALL interfaces */ - -/* end of Additional Prototypes */ - -#ifdef __cplusplus -} -#endif - -#endif - - - diff --git a/spaces/amirDev/crowd-counting-p2p/models/p2pnet.py b/spaces/amirDev/crowd-counting-p2p/models/p2pnet.py deleted file mode 100644 index 2ac6f82afb425514d16cfc4ba1244d63a2476877..0000000000000000000000000000000000000000 --- a/spaces/amirDev/crowd-counting-p2p/models/p2pnet.py +++ /dev/null @@ -1,342 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -from util.misc import (NestedTensor, nested_tensor_from_tensor_list, - accuracy, get_world_size, interpolate, - is_dist_avail_and_initialized) - -from .backbone import build_backbone -from .matcher import build_matcher_crowd - -import numpy as np -import time - -# the network framework of the regression branch -class RegressionModel(nn.Module): - def __init__(self, num_features_in, num_anchor_points=4, feature_size=256): - super(RegressionModel, self).__init__() - - self.conv1 = nn.Conv2d(num_features_in, feature_size, kernel_size=3, 
padding=1) - self.act1 = nn.ReLU() - - self.conv2 = nn.Conv2d(feature_size, feature_size, kernel_size=3, padding=1) - self.act2 = nn.ReLU() - - self.conv3 = nn.Conv2d(feature_size, feature_size, kernel_size=3, padding=1) - self.act3 = nn.ReLU() - - self.conv4 = nn.Conv2d(feature_size, feature_size, kernel_size=3, padding=1) - self.act4 = nn.ReLU() - - self.output = nn.Conv2d(feature_size, num_anchor_points * 2, kernel_size=3, padding=1) - # sub-branch forward (note: conv3/conv4 defined above are not applied in this pass) - def forward(self, x): - out = self.conv1(x) - out = self.act1(out) - - out = self.conv2(out) - out = self.act2(out) - - out = self.output(out) - - out = out.permute(0, 2, 3, 1) - - return out.contiguous().view(out.shape[0], -1, 2) - -# the network framework of the classification branch -class ClassificationModel(nn.Module): - def __init__(self, num_features_in, num_anchor_points=4, num_classes=80, prior=0.01, feature_size=256): - super(ClassificationModel, self).__init__() - - self.num_classes = num_classes - self.num_anchor_points = num_anchor_points - - self.conv1 = nn.Conv2d(num_features_in, feature_size, kernel_size=3, padding=1) - self.act1 = nn.ReLU() - - self.conv2 = nn.Conv2d(feature_size, feature_size, kernel_size=3, padding=1) - self.act2 = nn.ReLU() - - self.conv3 = nn.Conv2d(feature_size, feature_size, kernel_size=3, padding=1) - self.act3 = nn.ReLU() - - self.conv4 = nn.Conv2d(feature_size, feature_size, kernel_size=3, padding=1) - self.act4 = nn.ReLU() - - self.output = nn.Conv2d(feature_size, num_anchor_points * num_classes, kernel_size=3, padding=1) - self.output_act = nn.Sigmoid() - # sub-branch forward (note: conv3/conv4 defined above are not applied in this pass) - def forward(self, x): - out = self.conv1(x) - out = self.act1(out) - - out = self.conv2(out) - out = self.act2(out) - - out = self.output(out) - - out1 = out.permute(0, 2, 3, 1) - - batch_size, width, height, _ = out1.shape - - out2 = out1.view(batch_size, width, height, self.num_anchor_points, self.num_classes) - - return out2.contiguous().view(x.shape[0], -1, self.num_classes) - -# generate the reference points in grid layout -def generate_anchor_points(stride=16, row=3, line=3): - row_step = stride / row - line_step = stride / line - - shift_x = (np.arange(1, line + 1) - 0.5) * line_step - stride / 2 - shift_y = (np.arange(1, row + 1) - 0.5) * row_step - stride / 2 - - shift_x, shift_y = np.meshgrid(shift_x, shift_y) - - anchor_points = np.vstack(( - shift_x.ravel(), shift_y.ravel() - )).transpose() - - return anchor_points -# shift the meta-anchors to get the anchor points for a feature map -def shift(shape, stride, anchor_points): - shift_x = (np.arange(0, shape[1]) + 0.5) * stride - shift_y = (np.arange(0, shape[0]) + 0.5) * stride - - shift_x, shift_y = np.meshgrid(shift_x, shift_y) - - shifts = np.vstack(( - shift_x.ravel(), shift_y.ravel() - )).transpose() - - A = anchor_points.shape[0] - K = shifts.shape[0] - all_anchor_points = (anchor_points.reshape((1, A, 2)) + shifts.reshape((1, K, 2)).transpose((1, 0, 2))) - all_anchor_points = all_anchor_points.reshape((K * A, 2)) - - return all_anchor_points - -# this class generates all reference points on all pyramid levels -class AnchorPoints(nn.Module): - def __init__(self, pyramid_levels=None, strides=None, row=3, line=3): - super(AnchorPoints, self).__init__() - - if pyramid_levels is None: - self.pyramid_levels = [3, 4, 5, 6, 7] - else: - self.pyramid_levels = pyramid_levels - - if strides is None: - self.strides = [2 ** x for x in self.pyramid_levels] - else: - self.strides = strides - - self.row = row - self.line = line - - def forward(self, image): - image_shape = image.shape[2:] - image_shape = 
np.array(image_shape) - image_shapes = [(image_shape + 2 ** x - 1) // (2 ** x) for x in self.pyramid_levels] - - all_anchor_points = np.zeros((0, 2)).astype(np.float32) - # get reference points for each level - for idx, p in enumerate(self.pyramid_levels): - anchor_points = generate_anchor_points(2**p, row=self.row, line=self.line) - shifted_anchor_points = shift(image_shapes[idx], self.strides[idx], anchor_points) - all_anchor_points = np.append(all_anchor_points, shifted_anchor_points, axis=0) - - all_anchor_points = np.expand_dims(all_anchor_points, axis=0) - # send reference points to device - if torch.cuda.is_available(): - return torch.from_numpy(all_anchor_points.astype(np.float32)).cuda() - else: - return torch.from_numpy(all_anchor_points.astype(np.float32)) - -class Decoder(nn.Module): - def __init__(self, C3_size, C4_size, C5_size, feature_size=256): - super(Decoder, self).__init__() - - # upsample C5 to get P5 from the FPN paper - self.P5_1 = nn.Conv2d(C5_size, feature_size, kernel_size=1, stride=1, padding=0) - self.P5_upsampled = nn.Upsample(scale_factor=2, mode='nearest') - self.P5_2 = nn.Conv2d(feature_size, feature_size, kernel_size=3, stride=1, padding=1) - - # add P5 elementwise to C4 - self.P4_1 = nn.Conv2d(C4_size, feature_size, kernel_size=1, stride=1, padding=0) - self.P4_upsampled = nn.Upsample(scale_factor=2, mode='nearest') - self.P4_2 = nn.Conv2d(feature_size, feature_size, kernel_size=3, stride=1, padding=1) - - # add P4 elementwise to C3 - self.P3_1 = nn.Conv2d(C3_size, feature_size, kernel_size=1, stride=1, padding=0) - self.P3_upsampled = nn.Upsample(scale_factor=2, mode='nearest') - self.P3_2 = nn.Conv2d(feature_size, feature_size, kernel_size=3, stride=1, padding=1) - - - def forward(self, inputs): - C3, C4, C5 = inputs - - P5_x = self.P5_1(C5) - P5_upsampled_x = self.P5_upsampled(P5_x) - P5_x = self.P5_2(P5_x) - - P4_x = self.P4_1(C4) - P4_x = P5_upsampled_x + P4_x - P4_upsampled_x = self.P4_upsampled(P4_x) - P4_x = self.P4_2(P4_x) - - P3_x = self.P3_1(C3) - P3_x = P3_x + P4_upsampled_x - P3_x = self.P3_2(P3_x) - - return [P3_x, P4_x, P5_x] - -# the definition of the P2PNet model -class P2PNet(nn.Module): - def __init__(self, backbone, row=2, line=2): - super().__init__() - self.backbone = backbone - self.num_classes = 2 - # the number of all anchor points - num_anchor_points = row * line - - self.regression = RegressionModel(num_features_in=256, num_anchor_points=num_anchor_points) - self.classification = ClassificationModel(num_features_in=256, \ - num_classes=self.num_classes, \ - num_anchor_points=num_anchor_points) - - self.anchor_points = AnchorPoints(pyramid_levels=[3,], row=row, line=line) - - self.fpn = Decoder(256, 512, 512) - - def forward(self, samples: NestedTensor): - # get the backbone features - features = self.backbone(samples) - # forward the feature pyramid - features_fpn = self.fpn([features[1], features[2], features[3]]) - - batch_size = features[0].shape[0] - # run the regression and classification branches - regression = self.regression(features_fpn[1]) * 100 # 8x - classification = self.classification(features_fpn[1]) - anchor_points = self.anchor_points(samples).repeat(batch_size, 1, 1) - # decode the points as prediction - output_coord = regression + anchor_points - output_class = classification - out = {'pred_logits': output_class, 'pred_points': output_coord} - - return out - -class SetCriterion_Crowd(nn.Module): - - def __init__(self, num_classes, matcher, weight_dict, eos_coef, losses): - """ Create the criterion. 
- Parameters: - num_classes: number of object categories, omitting the special no-object category - matcher: module able to compute a matching between targets and proposals - weight_dict: dict containing as key the names of the losses and as values their relative weight. - eos_coef: relative classification weight applied to the no-object category - losses: list of all the losses to be applied. See get_loss for list of available losses. - """ - super().__init__() - self.num_classes = num_classes - self.matcher = matcher - self.weight_dict = weight_dict - self.eos_coef = eos_coef - self.losses = losses - empty_weight = torch.ones(self.num_classes + 1) - empty_weight[0] = self.eos_coef - self.register_buffer('empty_weight', empty_weight) - - def loss_labels(self, outputs, targets, indices, num_points): - """Classification loss (NLL) - targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes] - """ - assert 'pred_logits' in outputs - src_logits = outputs['pred_logits'] - - idx = self._get_src_permutation_idx(indices) - target_classes_o = torch.cat([t["labels"][J] for t, (_, J) in zip(targets, indices)]) - target_classes = torch.full(src_logits.shape[:2], 0, - dtype=torch.int64, device=src_logits.device) - target_classes[idx] = target_classes_o - - loss_ce = F.cross_entropy(src_logits.transpose(1, 2), target_classes, self.empty_weight) - losses = {'loss_ce': loss_ce} - - return losses - - def loss_points(self, outputs, targets, indices, num_points): - - assert 'pred_points' in outputs - idx = self._get_src_permutation_idx(indices) - src_points = outputs['pred_points'][idx] - target_points = torch.cat([t['point'][i] for t, (_, i) in zip(targets, indices)], dim=0) - - loss_bbox = F.mse_loss(src_points, target_points, reduction='none') - - losses = {} - losses['loss_point'] = loss_bbox.sum() / num_points - - return losses - - def _get_src_permutation_idx(self, indices): - # permute predictions following indices - batch_idx = torch.cat([torch.full_like(src, i) for i, (src, _) in enumerate(indices)]) - src_idx = torch.cat([src for (src, _) in indices]) - return batch_idx, src_idx - - def _get_tgt_permutation_idx(self, indices): - # permute targets following indices - batch_idx = torch.cat([torch.full_like(tgt, i) for i, (_, tgt) in enumerate(indices)]) - tgt_idx = torch.cat([tgt for (_, tgt) in indices]) - return batch_idx, tgt_idx - - def get_loss(self, loss, outputs, targets, indices, num_points, **kwargs): - loss_map = { - 'labels': self.loss_labels, - 'points': self.loss_points, - } - assert loss in loss_map, f'do you really want to compute {loss} loss?' - return loss_map[loss](outputs, targets, indices, num_points, **kwargs) - - def forward(self, outputs, targets): - """ This performs the loss computation. - Parameters: - outputs: dict of tensors, see the output specification of the model for the format - targets: list of dicts, such that len(targets) == batch_size. 
- The expected keys in each dict depends on the losses applied, see each loss' doc - """ - output1 = {'pred_logits': outputs['pred_logits'], 'pred_points': outputs['pred_points']} - - indices1 = self.matcher(output1, targets) - - num_points = sum(len(t["labels"]) for t in targets) - num_points = torch.as_tensor([num_points], dtype=torch.float, device=next(iter(output1.values())).device) - if is_dist_avail_and_initialized(): - torch.distributed.all_reduce(num_points) - num_boxes = torch.clamp(num_points / get_world_size(), min=1).item() - - losses = {} - for loss in self.losses: - losses.update(self.get_loss(loss, output1, targets, indices1, num_boxes)) - - return losses - -# create the P2PNet model -def build(args, training): - # treats persons as a single class - num_classes = 1 - - backbone = build_backbone(args) - model = P2PNet(backbone, args.row, args.line) - if not training: - return model - - weight_dict = {'loss_ce': 1, 'loss_points': args.point_loss_coef} - losses = ['labels', 'points'] - matcher = build_matcher_crowd(args) - criterion = SetCriterion_Crowd(num_classes, \ - matcher=matcher, weight_dict=weight_dict, \ - eos_coef=args.eos_coef, losses=losses) - - return model, criterion \ No newline at end of file diff --git a/spaces/antinous/dreambooth-training/train_dreambooth.py b/spaces/antinous/dreambooth-training/train_dreambooth.py deleted file mode 100644 index a496382fbc895961b9902c33a9d5cc926d4fcc8d..0000000000000000000000000000000000000000 --- a/spaces/antinous/dreambooth-training/train_dreambooth.py +++ /dev/null @@ -1,881 +0,0 @@ -import argparse -import itertools -import math -import os -from pathlib import Path -from typing import Optional -import subprocess -import sys -import gc -import random - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from torch.utils.data import Dataset - -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel -from diffusers.optimization import get_scheduler -from huggingface_hub import HfFolder, Repository, whoami -from PIL import Image -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - - -logger = get_logger(__name__) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - #required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - #required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - #required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default="", - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag 
to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If not have enough images, additional images will be" - " sampled with class_prompt." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution" - ) - parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder") - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-6, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." 
- ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - " between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10" - " and an Nvidia Ampere GPU." - ), - ) - - parser.add_argument( - "--save_n_steps", - type=int, - default=1, - help=("Save the model every n global_steps"), - ) - - - parser.add_argument( - "--save_starting_step", - type=int, - default=1, - help=("The step from which it starts saving intermediary checkpoints"), - ) - - parser.add_argument( - "--stop_text_encoder_training", - type=int, - default=1000000, - help=("The step at which the text_encoder is no longer trained"), - ) - - - parser.add_argument( - "--image_captions_filename", - action="store_true", - help="Get captions from filename", - ) - - - parser.add_argument( - "--dump_only_text_encoder", - action="store_true", - default=False, - help="Dump only the text encoder", - ) - - parser.add_argument( - "--train_only_unet", - action="store_true", - default=False, - help="Train only the unet", - ) - - parser.add_argument( - "--cache_latents", - action="store_true", - default=False, - help="Pre-compute and cache the VAE latents (and frozen text embeddings) before training", - ) - - parser.add_argument( - "--Session_dir", - type=str, - default="", - help="Current session directory", - ) - - - - - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - #if args.instance_data_dir is None: - # raise ValueError("You must specify a train data directory.") - - #if args.with_prior_preservation: - # if args.class_data_dir is None: - # raise ValueError("You must specify a data directory for class images.") - # if args.class_prompt is None: - # raise ValueError("You must specify prompt for class images.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and tokenizes the prompts. 
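- With --image_captions_filename set, each image's prompt is instead derived from its file name: e.g. "a_photo_of_sks_dog (3).jpg" becomes "a photo of sks dog" (digits, underscores, parentheses and dashes are stripped; see __getitem__ below).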
- """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - args, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - self.image_captions_filename = None - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if args.image_captions_filename: - self.image_captions_filename = True - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - random.shuffle(self.class_images_path) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - path = self.instance_images_path[index % self.num_instance_images] - instance_image = Image.open(path) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - - instance_prompt = self.instance_prompt - - if self.image_captions_filename: - filename = Path(path).stem - pt=''.join([i for i in filename if not i.isdigit()]) - pt=pt.replace("_"," ") - pt=pt.replace("(","") - pt=pt.replace(")","") - pt=pt.replace("-","") - instance_prompt = pt - sys.stdout.write(" " +instance_prompt+" ") - sys.stdout.flush() - - - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - instance_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - return example - - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." 
- - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - -class LatentsDataset(Dataset): - def __init__(self, latents_cache, text_encoder_cache): - self.latents_cache = latents_cache - self.text_encoder_cache = text_encoder_cache - - def __len__(self): - return len(self.latents_cache) - - def __getitem__(self, index): - return self.latents_cache[index], self.text_encoder_cache[index] - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - -def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict: - """ - Starts from base starting dict and then adds the remaining key values from updater replacing the values from - the first starting/base dict with the second updater dict. - - For later: how does d = {**d1, **d2} replace collision? - - :param starting_dict: - :param updater_dict: - :return: - """ - new_dict: dict = starting_dict.copy() # start with keys and values of starting_dict - new_dict.update(updater_dict) # modifies starting_dict with keys and values of updater_dict - return new_dict - -def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace: - """ - - ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x - :param args1: - :param args2: - :return: - """ - # - the merged args - # The vars() function returns the __dict__ attribute to values of the given object e.g {field:value}. - merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2)) - args = argparse.Namespace(**merged_key_values_for_namespace) - return args - -def run_training(args_imported): - args_default = parse_args() - args = merge_args(args_default, args_imported) - print(args) - logging_dir = Path(args.output_dir, args.logging_dir) - i=args.save_starting_step - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with="tensorboard", - logging_dir=logging_dir, - ) - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1: - raise ValueError( - "Gradient accumulation is not supported when training the text encoder in distributed training. " - "Please set gradient_accumulation_steps to 1. This feature will be supported in the future." 
- ) - - if args.seed is not None: - set_seed(args.seed) - - if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, torch_dtype=torch_dtype - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - with torch.autocast("cuda"): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - image.save(class_images_dir / f"{example['index'][i] + cur_class_images}.jpg") - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - repo = Repository(args.output_dir, clone_from=repo_name) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Load models and create wrapper for stable diffusion - if args.train_only_unet: - if os.path.exists(str(args.output_dir+"/text_encoder_trained")): - text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder_trained") - elif os.path.exists(str(args.output_dir+"/text_encoder")): - text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder") - else: - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - else: - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae") - unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet") - - vae.requires_grad_(False) - if not args.train_text_encoder: - text_encoder.requires_grad_(False) - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - if args.train_text_encoder: - text_encoder.gradient_checkpointing_enable() - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB 
GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." - ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - params_to_optimize = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters() - ) - optimizer = optimizer_class( - params_to_optimize, - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - noise_scheduler = DDPMScheduler.from_config(args.pretrained_model_name_or_path, subfolder="scheduler") - - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - args=args, - ) - - def collate_fn(examples): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. - if args.with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = tokenizer.pad({"input_ids": input_ids}, padding=True, return_tensors="pt").input_ids - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - return batch - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - if args.train_text_encoder: - unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, text_encoder, optimizer, train_dataloader, lr_scheduler - ) - else: - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - weight_dtype = torch.float32 - if args.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif args.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move text_encode and vae to gpu. - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. 
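- # (when --train_text_encoder is set the text encoder is deliberately not cast here: - # modules that receive gradient updates keep their weights in full precision)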
- vae.to(accelerator.device, dtype=weight_dtype) - if not args.train_text_encoder: - text_encoder.to(accelerator.device, dtype=weight_dtype) - - - if args.cache_latents: - latents_cache = [] - text_encoder_cache = [] - for batch in tqdm(train_dataloader, desc="Caching latents"): - with torch.no_grad(): - batch["pixel_values"] = batch["pixel_values"].to(accelerator.device, non_blocking=True, dtype=weight_dtype) - batch["input_ids"] = batch["input_ids"].to(accelerator.device, non_blocking=True) - latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist) - if args.train_text_encoder: - text_encoder_cache.append(batch["input_ids"]) - else: - text_encoder_cache.append(text_encoder(batch["input_ids"])[0]) - train_dataset = LatentsDataset(latents_cache, text_encoder_cache) - train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, collate_fn=lambda x: x, shuffle=True) - - del vae - #if not args.train_text_encoder: - # del text_encoder - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initialize automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth", config=vars(args)) - - def bar(prg): - br='|'+'█' * prg + ' ' * (25-prg)+'|' - return br - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - # Only show the progress bar once on each machine. 
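- # bar() above draws a 25-cell progress bar, e.g. bar(10) == '|██████████               |'; - # in the loop below fll maps global_step to 0..25 (percent of max_train_steps, divided by 4)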
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process) - global_step = 0 - - for epoch in range(args.num_train_epochs): - unet.train() - if args.train_text_encoder: - text_encoder.train() - for step, batch in enumerate(train_dataloader): - with accelerator.accumulate(unet): - # Convert images to latent space - with torch.no_grad(): - if args.cache_latents: - latents_dist = batch[0][0] - else: - latents_dist = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist - latents = latents_dist.sample() * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - if(args.cache_latents): - if args.train_text_encoder: - encoder_hidden_states = text_encoder(batch[0][1])[0] - else: - encoder_hidden_states = batch[0][1] - else: - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. - model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean() - - # Compute prior loss - prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. 
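- # i.e. total = MSE(instance pred, instance target) + prior_loss_weight * MSE(prior pred, prior target), - # the DreamBooth prior-preservation objective (--prior_loss_weight defaults to 1.0)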
- loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) - if args.train_text_encoder - else unet.parameters() - ) - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - fll=round((global_step*100)/args.max_train_steps) - fll=round(fll/4) - pr=bar(fll) - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - progress_bar.set_description_str("Progress:"+pr) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - if args.train_text_encoder and global_step == args.stop_text_encoder_training and global_step >= 30: - if accelerator.is_main_process: - print(" " +" Freezing the text_encoder ..."+" ") - frz_dir=args.output_dir + "/text_encoder_frozen" - if os.path.exists(frz_dir): - subprocess.call('rm -r '+ frz_dir, shell=True) - os.mkdir(frz_dir) - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.text_encoder.save_pretrained(frz_dir) - - if args.save_n_steps >= 200: - if global_step < args.max_train_steps and global_step+1==i: - ckpt_name = "_step_" + str(global_step+1) - save_dir = Path(args.output_dir+ckpt_name) - save_dir=str(save_dir) - save_dir=save_dir.replace(" ", "_") - if not os.path.exists(save_dir): - os.mkdir(save_dir) - inst=save_dir[16:] - inst=inst.replace(" ", "_") - print(" SAVING CHECKPOINT: "+args.Session_dir+"/"+inst+".ckpt") - # Create the pipeline using the trained modules and save it. - if accelerator.is_main_process: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.save_pretrained(save_dir) - frz_dir=args.output_dir + "/text_encoder_frozen" - if args.train_text_encoder and os.path.exists(frz_dir): - subprocess.call('rm -r '+save_dir+'/text_encoder/*.*', shell=True) - subprocess.call('cp -f '+frz_dir +'/*.* '+ save_dir+'/text_encoder', shell=True) - chkpth=args.Session_dir+"/"+inst+".ckpt" - subprocess.call('python /content/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path ' + save_dir + ' --checkpoint_path ' + chkpth + ' --half', shell=True) - subprocess.call('rm -r '+ save_dir, shell=True) - i=i+args.save_n_steps - - accelerator.wait_for_everyone() - - # Create the pipeline using the trained modules and save it. 
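- # Three save paths follow: --dump_only_text_encoder writes just the trained text encoder - # to <output_dir>/text_encoder_trained; --train_only_unet saves the full pipeline and - # removes that folder; otherwise the whole pipeline is saved and any frozen text-encoder - # snapshot is moved back into <output_dir>/text_encoder.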
- if accelerator.is_main_process: - if args.dump_only_text_encoder: - txt_dir=args.output_dir + "/text_encoder_trained" - if not os.path.exists(txt_dir): - os.mkdir(txt_dir) - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.text_encoder.save_pretrained(txt_dir) - - elif args.train_only_unet: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.save_pretrained(args.output_dir) - txt_dir=args.output_dir + "/text_encoder_trained" - subprocess.call('rm -r '+txt_dir, shell=True) - - else: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - frz_dir=args.output_dir + "/text_encoder_frozen" - pipeline.save_pretrained(args.output_dir) - if args.train_text_encoder and os.path.exists(frz_dir): - subprocess.call('mv -f '+frz_dir +'/*.* '+ args.output_dir+'/text_encoder', shell=True) - subprocess.call('rm -r '+ frz_dir, shell=True) - - if args.push_to_hub: - repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True) - - accelerator.end_training() - del pipeline - torch.cuda.empty_cache() - gc.collect() -if __name__ == "__main__": - pass - #main() - diff --git a/spaces/anzorq/sd-space-creator/README.md b/spaces/anzorq/sd-space-creator/README.md deleted file mode 100644 index 2025d7e9f7c9c22f5207eb0e9d0e3ac7c79dfdb6..0000000000000000000000000000000000000000 --- a/spaces/anzorq/sd-space-creator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SD Space Creator -emoji: 🌌🔨 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/aphenx/bingo/src/components/ui/codeblock.tsx b/spaces/aphenx/bingo/src/components/ui/codeblock.tsx deleted file mode 100644 index aabda4e3b59f4e36b6ab79feb19d8d18b70e881b..0000000000000000000000000000000000000000 --- a/spaces/aphenx/bingo/src/components/ui/codeblock.tsx +++ /dev/null @@ -1,142 +0,0 @@ -'use client' - -import { FC, memo } from 'react' -import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter' -import { coldarkDark } from 'react-syntax-highlighter/dist/cjs/styles/prism' - -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' -import { IconCheck, IconCopy, IconDownload } from '@/components/ui/icons' -import { Button } from '@/components/ui/button' - -interface Props { - language: string - value: string -} - -interface languageMap { - [key: string]: string | undefined -} - -export const programmingLanguages: languageMap = { - javascript: '.js', - python: '.py', - java: '.java', - c: '.c', - cpp: '.cpp', - 'c++': '.cpp', - 'c#': '.cs', - ruby: '.rb', - php: '.php', - swift: '.swift', - 'objective-c': '.m', - kotlin: '.kt', - typescript: '.ts', - go: '.go', - perl: '.pl', - rust: '.rs', - scala: '.scala', - haskell: '.hs', - lua: '.lua', - shell: '.sh', - sql: '.sql', - html: '.html', - css: '.css' - // add more file extensions here, make sure the key is same as language prop in CodeBlock.tsx component -} - -export const generateRandomString = (length: number, lowercase = false) => { - const chars = 
    'ABCDEFGHJKLMNPQRSTUVWXY3456789' // excluding similar looking characters like Z, 2, I, 1, O, 0
-  let result = ''
-  for (let i = 0; i < length; i++) {
-    result += chars.charAt(Math.floor(Math.random() * chars.length))
-  }
-  return lowercase ? result.toLowerCase() : result
-}
-
-const CodeBlock: FC<Props> = memo(({ language, value }) => {
-  const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 })
-
-  const downloadAsFile = () => {
-    if (typeof window === 'undefined') {
-      return
-    }
-    const fileExtension = programmingLanguages[language] || '.file'
-    const suggestedFileName = `file-${generateRandomString(
-      3,
-      true
-    )}${fileExtension}`
-    const fileName = window.prompt('Enter file name', suggestedFileName)
-
-    if (!fileName) {
-      // User pressed cancel on prompt.
-      return
-    }
-
-    const blob = new Blob([value], { type: 'text/plain' })
-    const url = URL.createObjectURL(blob)
-    const link = document.createElement('a')
-    link.download = fileName
-    link.href = url
-    link.style.display = 'none'
-    document.body.appendChild(link)
-    link.click()
-    document.body.removeChild(link)
-    URL.revokeObjectURL(url)
-  }
-
-  const onCopy = () => {
-    if (isCopied) return
-    copyToClipboard(value)
-  }
-
-  return (
-    <div className="relative w-full font-sans codeblock bg-zinc-950">
-      <div className="flex items-center justify-between w-full px-6 py-2 pr-4 bg-zinc-800 text-zinc-100">
-        <span className="text-xs lowercase">{language}</span>
-        <div className="flex items-center space-x-1">
-          <Button variant="ghost" size="icon" onClick={downloadAsFile}>
-            <IconDownload />
-            <span className="sr-only">Download</span>
-          </Button>
-          <Button variant="ghost" size="icon" onClick={onCopy}>
-            {isCopied ? <IconCheck /> : <IconCopy />}
-            <span className="sr-only">Copy code</span>
-          </Button>
-        </div>
-      </div>
-      <SyntaxHighlighter
-        language={language}
-        style={coldarkDark}
-        PreTag="div"
-        customStyle={{ margin: 0, width: '100%' }}
-      >
-        {value}
-      </SyntaxHighlighter>
-    </div>
- ) -}) -CodeBlock.displayName = 'CodeBlock' - -export { CodeBlock } diff --git a/spaces/ardha27/rvc_TTS/lib/infer_pack/transforms.py b/spaces/ardha27/rvc_TTS/lib/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/ardha27/rvc_TTS/lib/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its 
domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/generic/wavenet.py 
b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/generic/wavenet.py deleted file mode 100644 index bc89da4fbe6b2425f2a39a578b8fad105a18da38..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/generic/wavenet.py +++ /dev/null @@ -1,175 +0,0 @@ -import torch -from torch import nn - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -class WN(torch.nn.Module): - """Wavenet layers with weight norm and no input conditioning. - - |-----------------------------------------------------------------------------| - | |-> tanh -| | - res -|- conv1d(dilation) -> dropout -> + -| * -> conv1d1x1 -> split -|- + -> res - g -------------------------------------| |-> sigmoid -| | - o --------------------------------------------------------------------------- + --------- o - - Args: - in_channels (int): number of input channels. - hidden_channes (int): number of hidden channels. - kernel_size (int): filter kernel size for the first conv layer. - dilation_rate (int): dilations rate to increase dilation per layer. - If it is 2, dilations are 1, 2, 4, 8 for the next 4 layers. - num_layers (int): number of wavenet layers. - c_in_channels (int): number of channels of conditioning input. - dropout_p (float): dropout rate. - weight_norm (bool): enable/disable weight norm for convolution layers. - """ - - def __init__( - self, - in_channels, - hidden_channels, - kernel_size, - dilation_rate, - num_layers, - c_in_channels=0, - dropout_p=0, - weight_norm=True, - ): - super().__init__() - assert kernel_size % 2 == 1 - assert hidden_channels % 2 == 0 - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.num_layers = num_layers - self.c_in_channels = c_in_channels - self.dropout_p = dropout_p - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.dropout = nn.Dropout(dropout_p) - - # init conditioning layer - if c_in_channels > 0: - cond_layer = torch.nn.Conv1d(c_in_channels, 2 * hidden_channels * num_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - # intermediate layers - for i in range(num_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - if i == 0: - in_layer = torch.nn.Conv1d( - in_channels, 2 * hidden_channels, kernel_size, dilation=dilation, padding=padding - ) - else: - in_layer = torch.nn.Conv1d( - hidden_channels, 2 * hidden_channels, kernel_size, dilation=dilation, padding=padding - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - if i < num_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - # setup weight norm - if not weight_norm: - self.remove_weight_norm() - - def forward(self, x, x_mask=None, g=None, **kwargs): # pylint: disable=unused-argument - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - x_mask = 1.0 if x_mask is None else x_mask - if 
g is not None: - g = self.cond_layer(g) - for i in range(self.num_layers): - x_in = self.in_layers[i](x) - x_in = self.dropout(x_in) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - acts = fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.num_layers - 1: - x = (x + res_skip_acts[:, : self.hidden_channels, :]) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.c_in_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class WNBlocks(nn.Module): - """Wavenet blocks. - - Note: After each block dilation resets to 1 and it increases in each block - along the dilation rate. - - Args: - in_channels (int): number of input channels. - hidden_channes (int): number of hidden channels. - kernel_size (int): filter kernel size for the first conv layer. - dilation_rate (int): dilations rate to increase dilation per layer. - If it is 2, dilations are 1, 2, 4, 8 for the next 4 layers. - num_blocks (int): number of wavenet blocks. - num_layers (int): number of wavenet layers. - c_in_channels (int): number of channels of conditioning input. - dropout_p (float): dropout rate. - weight_norm (bool): enable/disable weight norm for convolution layers. - """ - - def __init__( - self, - in_channels, - hidden_channels, - kernel_size, - dilation_rate, - num_blocks, - num_layers, - c_in_channels=0, - dropout_p=0, - weight_norm=True, - ): - super().__init__() - self.wn_blocks = nn.ModuleList() - for idx in range(num_blocks): - layer = WN( - in_channels=in_channels if idx == 0 else hidden_channels, - hidden_channels=hidden_channels, - kernel_size=kernel_size, - dilation_rate=dilation_rate, - num_layers=num_layers, - c_in_channels=c_in_channels, - dropout_p=dropout_p, - weight_norm=weight_norm, - ) - self.wn_blocks.append(layer) - - def forward(self, x, x_mask=None, g=None): - o = x - for layer in self.wn_blocks: - o = layer(o, x_mask, g) - return o diff --git a/spaces/artificialguybr/video-dubbing/TTS/recipes/bel-alex73/README.md b/spaces/artificialguybr/video-dubbing/TTS/recipes/bel-alex73/README.md deleted file mode 100644 index ad378dd9984b3fa94e1be7a0c479f9e51d88e1a6..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/recipes/bel-alex73/README.md +++ /dev/null @@ -1,62 +0,0 @@ -This description was created based on [jhlfrfufyfn/ml-bel-tts](https://github.com/jhlfrfufyfn/ml-bel-tts). Thanks a lot to jhlfrfufyfn for advices, configuration, code and ideas. - -# Training - -This recipe uses [CommonVoice](https://commonvoice.mozilla.org/en/datasets) dataset. It has format mp3/32kHz/48kbps format and contains multiple speakers because it was created for voice recognition. Looks like it's the best voice corpus of Belarussian language for today. But for creating better voice synthesis it will require to record some specific corpus with good pronunciation and good record quality. - -Looks like for Belarusian Common Voice corpus there is no sense to train full big dataset (90 hours). It's enough 30 hours dataset, that makes very good progress for 350 epochs(24000 steps on 24GiB GPU). 
The quality of the dataset is more important than its size.
-
-To train a model, you need to:
-- download code and data
-- prepare training data and generate the scale_stats file
-- change configuration settings
-- train the TTS model (GlowTTS in this example)
-- train the Vocoder model (HiFiGAN in this example)
-
-We recommend preparing everything locally, then training the models on an external computer with a fast GPU. The text below describes all these steps.
-
-## Download code and data
-
-It is best to place everything into a local folder like /mycomputer/. You need these files:
-
-- Coqui-TTS - code from this git. For example, to /mycomputer/TTS/. *Expected result: you have /mycomputer/TTS/setup.py and other files from git.*
-- [Common voice dataset](https://commonvoice.mozilla.org/en/datasets) into the cv-corpus/ directory near Coqui-TTS. *Expected result: you have /mycomputer/cv-corpus/be/validated.tsv and more than 1 mln .mp3 files in /mycomputer/cv-corpus/be/clips/.*
-- Belarusian text-to-phonemes converter - fanetyka.jar from [https://github.com/alex73/Software-Korpus/releases](https://github.com/alex73/Software-Korpus/releases), then place it in fanetyka/ near Coqui-TTS. *Expected result: you have the file /mycomputer/fanetyka/fanetyka.jar*
-
-Prepared data will be stored in the storage/ directory near Coqui-TTS, like /mycomputer/storage/.
-
-## Preparing for training - locally
-
-A Docker container was created to simplify local running. You can run `docker-prepare-start.sh` to start the environment. All commands below should be run in the docker console.
-
-* Start jupyter with the command `jupyter notebook --no-browser --allow-root --port=2525 --ip=0.0.0.0`. It will display an http link. Open this link, then choose the `recipes/bel-alex73/choose_speaker.ipynb` notebook. Run the cells one by one, listen to the different speakers, and select the speaker you want to use. After all commands in the notebook, you can press Ctrl+C in the docker console to stop jupyter. *Expected result: directory /mycomputer/storage/filtered_dataset/ with a df_speaker.csv file and many *.wav files.*
-
-* Convert text to phonemes: `java -cp /a/fanetyka/fanetyka.jar org.alex73.fanetyka.impl.FanetykaTTSPrepare /storage/filtered_dataset/df_speaker.csv /storage/filtered_dataset/ipa_final_dataset.csv`. It will display all used characters at the end. You can use these characters to modify the config in train_glowtts.py. *Expected result: file /mycomputer/storage/filtered_dataset/ipa_final_dataset.csv*
-
-* Modify the configs (if needed) in train_glowtts.py and train_hifigan.py. Then export the config to the old json format, which is needed to create scale_stats.npy, with the command `python3 recipes/bel-alex73/dump_config.py > recipes/bel-alex73/config.json`. *Expected result: file /mycomputer/TTS/recipes/bel-alex73/config.json exists.*
-
-* Create scale_stats.npy, which helps the model learn better: `mkdir -p /storage/TTS/; python3 TTS/bin/compute_statistics.py --config_path recipes/bel-alex73/config.json --out_path /storage/TTS/scale_stats.npy`. *Expected result: file /mycomputer/storage/TTS/scale_stats.npy exists.*
-
-## Training - with GPU
-
-You need to upload Coqui-TTS (/mycomputer/TTS/) and the storage/ directory (/mycomputer/storage/) to a computer with a GPU. The cv-corpus/ and fanetyka/ directories are not needed for training. Install gcc, then run `pip install -e .[all,dev,notebooks]` to prepare the modules. The GlowTTS and HiFiGAN models are trained separately, based only on /storage/filtered_dataset, i.e. they do not depend on each other. The `CUDA_VISIBLE_DEVICES=` value
below means list of GPU ids from zero("0,1,2,3" for systems with 4 GPU). See details on the https://tts.readthedocs.io/en/latest/tutorial_for_nervous_beginners.html(multi-gpu training). - -Current setup created for 24GiB GPU. You need to change batch_size if you have more or less GPU memory. Also, you can try to set lr(learning rate) to lower value in the end of training GlowTTS. - -* Start GlowTTS model training by the command `OMP_NUM_THREADS=2 CUDA_VISIBLE_DEVICES= python3 -m trainer.distribute --script recipes/bel-alex73/train_glowtts.py`. It will produce training data into storage/output/ directory. Usually 100.000 global steps required. *Expected behavior: You will see /storage/output-glowtts//best_model_.pth files.* - -* Start HiFiGAN model training by the command `OMP_NUM_THREADS=2 CUDA_VISIBLE_DEVICES= python3 -m trainer.distribute --script recipes/bel-alex73/train_hifigan.py`. *Expected behavior: You will see /storage/output-hifigan//best_model_.pth files.* - -## How to monitor training - -* Run `nvidia-smi` to be sure that training uses all GPUs and to be sure that you are using more than 90% GPU memory and utilization. - -* Run `tensorboard --logdir=/storage/output-/` to see alignment, avg_loss metrics and check audio evaluation. You need only events.out.tfevents.\* files for that. - -## Synthesizing speech - - tts --text "" --out_path output.wav \ - --config_path /storage/output-glowtts/run/config.json \ - --model_path /storage/output-glowtts/run/best_model.pth \ - --vocoder_config_path /storage/output-hifigan/run/config.json \ - --vocoder_path /storage/output-hifigan/run/best_model.pth diff --git a/spaces/artificialguybr/video-dubbing/TTS/recipes/vctk/speedy_speech/train_speedy_speech.py b/spaces/artificialguybr/video-dubbing/TTS/recipes/vctk/speedy_speech/train_speedy_speech.py deleted file mode 100644 index 04caa6d25ac1814ed04eeeefe0090d6f11556142..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/recipes/vctk/speedy_speech/train_speedy_speech.py +++ /dev/null @@ -1,96 +0,0 @@ -import os - -from trainer import Trainer, TrainerArgs - -from TTS.config import BaseAudioConfig, BaseDatasetConfig -from TTS.tts.configs.speedy_speech_config import SpeedySpeechConfig -from TTS.tts.datasets import load_tts_samples -from TTS.tts.models.forward_tts import ForwardTTS -from TTS.tts.utils.speakers import SpeakerManager -from TTS.tts.utils.text.tokenizer import TTSTokenizer -from TTS.utils.audio import AudioProcessor - -output_path = os.path.dirname(os.path.abspath(__file__)) -dataset_config = BaseDatasetConfig(formatter="vctk", meta_file_train="", path=os.path.join(output_path, "../VCTK/")) - -audio_config = BaseAudioConfig( - sample_rate=22050, - do_trim_silence=True, - trim_db=23.0, - signal_norm=False, - mel_fmin=0.0, - mel_fmax=8000, - spec_gain=1.0, - log_func="np.log", - ref_level_db=20, - preemphasis=0.0, -) - -config = SpeedySpeechConfig( - run_name="fast_pitch_ljspeech", - audio=audio_config, - batch_size=32, - eval_batch_size=16, - num_loader_workers=8, - num_eval_loader_workers=4, - compute_input_seq_cache=True, - precompute_num_workers=4, - run_eval=True, - test_delay_epochs=-1, - epochs=1000, - text_cleaner="english_cleaners", - use_phonemes=True, - phoneme_language="en-us", - phoneme_cache_path=os.path.join(output_path, "phoneme_cache"), - print_step=50, - print_eval=False, - mixed_precision=False, - min_text_len=0, - max_text_len=500, - min_audio_len=0, - max_audio_len=500000, - output_path=output_path, - datasets=[dataset_config], - 
use_speaker_embedding=True, -) - -# INITIALIZE THE AUDIO PROCESSOR -# Audio processor is used for feature extraction and audio I/O. -# It mainly serves to the dataloader and the training loggers. -ap = AudioProcessor.init_from_config(config) - -# INITIALIZE THE TOKENIZER -# Tokenizer is used to convert text to sequences of token IDs. -# If characters are not defined in the config, default characters are passed to the config -tokenizer, config = TTSTokenizer.init_from_config(config) - -# LOAD DATA SAMPLES -# Each sample is a list of ```[text, audio_file_path, speaker_name]``` -# You can define your custom sample loader returning the list of samples. -# Or define your custom formatter and pass it to the `load_tts_samples`. -# Check `TTS.tts.datasets.load_tts_samples` for more details. -train_samples, eval_samples = load_tts_samples( - dataset_config, - eval_split=True, - eval_split_max_size=config.eval_split_max_size, - eval_split_size=config.eval_split_size, -) - -# init speaker manager for multi-speaker training -# it maps speaker-id to speaker-name in the model and data-loader -speaker_manager = SpeakerManager() -speaker_manager.set_ids_from_data(train_samples + eval_samples, parse_key="speaker_name") -config.model_args.num_speakers = speaker_manager.num_speakers - -# init model -model = ForwardTTS(config, ap, tokenizer, speaker_manager) - -# INITIALIZE THE TRAINER -# Trainer provides a generic API to train all the 🐸TTS models with all its perks like mixed-precision training, -# distributed training, etc. -trainer = Trainer( - TrainerArgs(), config, output_path, model=model, train_samples=train_samples, eval_samples=eval_samples -) - -# AND... 3,2,1... 🚀 -trainer.fit() diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Util/Padding.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Util/Padding.py deleted file mode 100644 index da69e55987227357a55f8e1b57fae5f7eb8cac74..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Util/Padding.py +++ /dev/null @@ -1,108 +0,0 @@ -# -# Util/Padding.py : Functions to manage padding -# -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -__all__ = [ 'pad', 'unpad' ] - -from Crypto.Util.py3compat import * - - -def pad(data_to_pad, block_size, style='pkcs7'): - """Apply standard padding. - - Args: - data_to_pad (byte string): - The data that needs to be padded. - block_size (integer): - The block boundary to use for padding. The output length is guaranteed - to be a multiple of :data:`block_size`. - style (string): - Padding algorithm. It can be *'pkcs7'* (default), *'iso7816'* or *'x923'*. - - Return: - byte string : the original data with the appropriate padding added at the end. - """ - - padding_len = block_size-len(data_to_pad)%block_size - if style == 'pkcs7': - padding = bchr(padding_len)*padding_len - elif style == 'x923': - padding = bchr(0)*(padding_len-1) + bchr(padding_len) - elif style == 'iso7816': - padding = bchr(128) + bchr(0)*(padding_len-1) - else: - raise ValueError("Unknown padding style") - return data_to_pad + padding - - -def unpad(padded_data, block_size, style='pkcs7'): - """Remove standard padding. - - Args: - padded_data (byte string): - A piece of data with padding that needs to be stripped. - block_size (integer): - The block boundary to use for padding. The input length - must be a multiple of :data:`block_size`. - style (string): - Padding algorithm. It can be *'pkcs7'* (default), *'iso7816'* or *'x923'*. - Return: - byte string : data without padding. - Raises: - ValueError: if the padding is incorrect. 
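-
-        Example (illustrative sketch, PKCS#7 padding with a 16-byte block):
-
-            >>> from Crypto.Util.Padding import pad, unpad
-            >>> padded = pad(b'hello', 16)  # appends eleven 0x0b bytes
-            >>> unpad(padded, 16)
-            b'hello'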
- """ - - pdata_len = len(padded_data) - if pdata_len == 0: - raise ValueError("Zero-length input cannot be unpadded") - if pdata_len % block_size: - raise ValueError("Input data is not padded") - if style in ('pkcs7', 'x923'): - padding_len = bord(padded_data[-1]) - if padding_len<1 or padding_len>min(block_size, pdata_len): - raise ValueError("Padding is incorrect.") - if style == 'pkcs7': - if padded_data[-padding_len:]!=bchr(padding_len)*padding_len: - raise ValueError("PKCS#7 padding is incorrect.") - else: - if padded_data[-padding_len:-1]!=bchr(0)*(padding_len-1): - raise ValueError("ANSI X.923 padding is incorrect.") - elif style == 'iso7816': - padding_len = pdata_len - padded_data.rfind(bchr(128)) - if padding_len<1 or padding_len>min(block_size, pdata_len): - raise ValueError("Padding is incorrect.") - if padding_len>1 and padded_data[1-padding_len:]!=bchr(0)*(padding_len-1): - raise ValueError("ISO 7816-4 padding is incorrect.") - else: - raise ValueError("Unknown padding style") - return padded_data[:-padding_len] - diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/us_population_over_time.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/us_population_over_time.py deleted file mode 100644 index c53ecce1a609480f4de3ebade3e5aa858046b810..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/us_population_over_time.py +++ /dev/null @@ -1,36 +0,0 @@ -""" -US Population Over Time -======================= -This chart visualizes the age distribution of the US population over time. -It uses a slider widget that is bound to the year to visualize the age -distribution over time. -""" -# category: case studies -import altair as alt -from vega_datasets import data - -source = data.population.url - -pink_blue = alt.Scale(domain=('Male', 'Female'), - range=["steelblue", "salmon"]) - -slider = alt.binding_range(min=1900, max=2000, step=10) -select_year = alt.selection_single(name="year", fields=['year'], - bind=slider, init={'year': 2000}) - -alt.Chart(source).mark_bar().encode( - x=alt.X('sex:N', title=None), - y=alt.Y('people:Q', scale=alt.Scale(domain=(0, 12000000))), - color=alt.Color('sex:N', scale=pink_blue), - column='age:O' -).properties( - width=20 -).add_selection( - select_year -).transform_calculate( - "sex", alt.expr.if_(alt.datum.sex == 1, "Male", "Female") -).transform_filter( - select_year -).configure_facet( - spacing=8 -) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/tests/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/numel_dataset.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/numel_dataset.py deleted file mode 100644 index ac86dfd2f1d89055de909656d61d6aca85523f00..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/numel_dataset.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -from . 
import BaseWrapperDataset - - -class NumelDataset(BaseWrapperDataset): - def __init__(self, dataset, reduce=False): - super().__init__(dataset) - self.reduce = reduce - - def __getitem__(self, index): - item = self.dataset[index] - if torch.is_tensor(item): - return torch.numel(item) - else: - return np.size(item) - - def __len__(self): - return len(self.dataset) - - def collater(self, samples): - if self.reduce: - return sum(samples) - else: - return torch.tensor(samples) diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Daryl.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Daryl.html deleted file mode 100644 index 56d82f8ca51cb900d2614c4d5e0cb34c24f4d49e..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Daryl.html +++ /dev/null @@ -1,134 +0,0 @@ - - - - Daryl - - - - -
-

Daryl

- -
-
How did you hear about SM?
  • Googling data-related side hustles and stumbled on the site
  • Eager to explore mentorship

Career?
  • DS at Apple (3 months)
  • Previously DS at Uber
  • Previously at Forbes and Viacom
  • Fell into DS by chance, majored in econ, 
    • taught himself to code
    • and learned his interests were akin to DS
    • went back to grad school for his MS in DS
    • technically ramping up was a challenge


Mentorship exp?
  • was mentoring groups of interns and grad students through a college outreach program
  • At Forbes, industry outreach - graduate capstone program - helping them build a real-world application
  • 4-5 students /semester for 4 semesters
  • ended in a pitch competition and some mock interviews; assembled a team and guided them through the whole project

Beginner mistakes + how can you help?
  • getting that first opportunity to be taken seriously
  • 1st job is hard + 1st internship is the hardest
  • personal initiative to assemble a portfolio
  • having something relevant on your resume
  • As a mentor:
    • help them fill that experience gap (working on interesting side-projects)
    • building up the confidence to interview
    • moral support

Questions for SM?
  • What do mentors typically cover from their mentees?
  • Do you have a trial period?
  • What is the biggest cohort?
  • What kind of support does SM offer?

-
-
-
- -
- - - \ No newline at end of file diff --git a/spaces/aubmindlab/Arabic-NLP/backend/sa_utils.py b/spaces/aubmindlab/Arabic-NLP/backend/sa_utils.py deleted file mode 100644 index ef59c71891d7e63677c23eb726be2a91bba61c28..0000000000000000000000000000000000000000 --- a/spaces/aubmindlab/Arabic-NLP/backend/sa_utils.py +++ /dev/null @@ -1,510 +0,0 @@ -import re -from contextlib import contextmanager - -import numpy as np -import torch -import torch.nn.functional as F -from fuzzysearch import find_near_matches -from pyarabic import araby -from torch import nn -from transformers import AutoTokenizer, BertModel, BertPreTrainedModel, pipeline -from transformers.modeling_outputs import SequenceClassifierOutput - -from .preprocess import ArabertPreprocessor, url_regexes, user_mention_regex - -multiple_char_pattern = re.compile(r"(.)\1{2,}", re.DOTALL) - -# ASAD-NEW_AraBERT_PREP-Balanced -class NewArabicPreprocessorBalanced(ArabertPreprocessor): - def __init__( - self, - model_name: str, - keep_emojis: bool = False, - remove_html_markup: bool = True, - replace_urls_emails_mentions: bool = True, - strip_tashkeel: bool = True, - strip_tatweel: bool = True, - insert_white_spaces: bool = True, - remove_non_digit_repetition: bool = True, - replace_slash_with_dash: bool = None, - map_hindi_numbers_to_arabic: bool = None, - apply_farasa_segmentation: bool = None, - ): - if "UBC-NLP" in model_name or "CAMeL-Lab" in model_name: - keep_emojis = True - remove_non_digit_repetition = True - super().__init__( - model_name=model_name, - keep_emojis=keep_emojis, - remove_html_markup=remove_html_markup, - replace_urls_emails_mentions=replace_urls_emails_mentions, - strip_tashkeel=strip_tashkeel, - strip_tatweel=strip_tatweel, - insert_white_spaces=insert_white_spaces, - remove_non_digit_repetition=remove_non_digit_repetition, - replace_slash_with_dash=replace_slash_with_dash, - map_hindi_numbers_to_arabic=map_hindi_numbers_to_arabic, - apply_farasa_segmentation=apply_farasa_segmentation, - ) - self.true_model_name = model_name - - def preprocess(self, text): - if "UBC-NLP" in self.true_model_name: - return self.ubc_prep(text) - - def ubc_prep(self, text): - text = re.sub("\s", " ", text) - text = text.replace("\\n", " ") - text = text.replace("\\r", " ") - text = araby.strip_tashkeel(text) - text = araby.strip_tatweel(text) - # replace all possible URLs - for reg in url_regexes: - text = re.sub(reg, " URL ", text) - text = re.sub("(URL\s*)+", " URL ", text) - # replace mentions with USER - text = re.sub(user_mention_regex, " USER ", text) - text = re.sub("(USER\s*)+", " USER ", text) - # replace hashtags with HASHTAG - # text = re.sub(r"#[\w\d]+", " HASH TAG ", text) - text = text.replace("#", " HASH ") - text = text.replace("_", " ") - text = " ".join(text.split()) - # text = re.sub("\B\\[Uu]\w+", "", text) - text = text.replace("\\U0001f97a", "🥺") - text = text.replace("\\U0001f928", "🤨") - text = text.replace("\\U0001f9d8", "😀") - text = text.replace("\\U0001f975", "😥") - text = text.replace("\\U0001f92f", "😲") - text = text.replace("\\U0001f92d", "🤭") - text = text.replace("\\U0001f9d1", "😐") - text = text.replace("\\U000e0067", "") - text = text.replace("\\U000e006e", "") - text = text.replace("\\U0001f90d", "♥") - text = text.replace("\\U0001f973", "🎉") - text = text.replace("\\U0001fa79", "") - text = text.replace("\\U0001f92b", "🤐") - text = text.replace("\\U0001f9da", "🦋") - text = text.replace("\\U0001f90e", "♥") - text = text.replace("\\U0001f9d0", "🧐") - text = text.replace("\\U0001f9cf", "") - text = 
text.replace("\\U0001f92c", "😠") - text = text.replace("\\U0001f9f8", "😸") - text = text.replace("\\U0001f9b6", "💩") - text = text.replace("\\U0001f932", "🤲") - text = text.replace("\\U0001f9e1", "🧡") - text = text.replace("\\U0001f974", "☹") - text = text.replace("\\U0001f91f", "") - text = text.replace("\\U0001f9fb", "💩") - text = text.replace("\\U0001f92a", "🤪") - text = text.replace("\\U0001f9fc", "") - text = text.replace("\\U000e0065", "") - text = text.replace("\\U0001f92e", "💩") - text = text.replace("\\U000e007f", "") - text = text.replace("\\U0001f970", "🥰") - text = text.replace("\\U0001f929", "🤩") - text = text.replace("\\U0001f6f9", "") - text = text.replace("🤍", "♥") - text = text.replace("🦠", "😷") - text = text.replace("🤢", "مقرف") - text = text.replace("🤮", "مقرف") - text = text.replace("🕠", "⌚") - text = text.replace("🤬", "😠") - text = text.replace("🤧", "😷") - text = text.replace("🥳", "🎉") - text = text.replace("🥵", "🔥") - text = text.replace("🥴", "☹") - text = text.replace("🤫", "🤐") - text = text.replace("🤥", "كذاب") - text = text.replace("\\u200d", " ") - text = text.replace("u200d", " ") - text = text.replace("\\u200c", " ") - text = text.replace("u200c", " ") - text = text.replace('"', "'") - text = text.replace("\\xa0", "") - text = text.replace("\\u2066", " ") - text = re.sub("\B\\\[Uu]\w+", "", text) - text = super(NewArabicPreprocessorBalanced, self).preprocess(text) - - text = " ".join(text.split()) - return text - - -"""CNNMarbertArabicPreprocessor""" -# ASAD-CNN_MARBERT -class CNNMarbertArabicPreprocessor(ArabertPreprocessor): - def __init__( - self, - model_name, - keep_emojis=False, - remove_html_markup=True, - replace_urls_emails_mentions=True, - remove_elongations=True, - ): - if "UBC-NLP" in model_name or "CAMeL-Lab" in model_name: - keep_emojis = True - remove_elongations = False - super().__init__( - model_name, - keep_emojis, - remove_html_markup, - replace_urls_emails_mentions, - remove_elongations, - ) - self.true_model_name = model_name - - def preprocess(self, text): - if "UBC-NLP" in self.true_model_name: - return self.ubc_prep(text) - - def ubc_prep(self, text): - text = re.sub("\s", " ", text) - text = text.replace("\\n", " ") - text = araby.strip_tashkeel(text) - text = araby.strip_tatweel(text) - # replace all possible URLs - for reg in url_regexes: - text = re.sub(reg, " URL ", text) - text = re.sub("(URL\s*)+", " URL ", text) - # replace mentions with USER - text = re.sub(user_mention_regex, " USER ", text) - text = re.sub("(USER\s*)+", " USER ", text) - # replace hashtags with HASHTAG - # text = re.sub(r"#[\w\d]+", " HASH TAG ", text) - text = text.replace("#", " HASH ") - text = text.replace("_", " ") - text = " ".join(text.split()) - text = super(CNNMarbertArabicPreprocessor, self).preprocess(text) - text = text.replace("\u200d", " ") - text = text.replace("u200d", " ") - text = text.replace("\u200c", " ") - text = text.replace("u200c", " ") - text = text.replace('"', "'") - # text = re.sub('[\d\.]+', ' NUM ', text) - # text = re.sub('(NUM\s*)+', ' NUM ', text) - text = multiple_char_pattern.sub(r"\1\1", text) - text = " ".join(text.split()) - return text - - -"""Trial5ArabicPreprocessor""" - - -class Trial5ArabicPreprocessor(ArabertPreprocessor): - def __init__( - self, - model_name, - keep_emojis=False, - remove_html_markup=True, - replace_urls_emails_mentions=True, - ): - if "UBC-NLP" in model_name: - keep_emojis = True - super().__init__( - model_name, keep_emojis, remove_html_markup, replace_urls_emails_mentions - ) - 
self.true_model_name = model_name - - def preprocess(self, text): - if "UBC-NLP" in self.true_model_name: - return self.ubc_prep(text) - - def ubc_prep(self, text): - text = re.sub("\s", " ", text) - text = text.replace("\\n", " ") - text = araby.strip_tashkeel(text) - text = araby.strip_tatweel(text) - # replace all possible URLs - for reg in url_regexes: - text = re.sub(reg, " URL ", text) - # replace mentions with USER - text = re.sub(user_mention_regex, " USER ", text) - # replace hashtags with HASHTAG - # text = re.sub(r"#[\w\d]+", " HASH TAG ", text) - text = text.replace("#", " HASH TAG ") - text = text.replace("_", " ") - text = " ".join(text.split()) - text = super(Trial5ArabicPreprocessor, self).preprocess(text) - # text = text.replace("السلام عليكم"," ") - # text = text.replace(find_near_matches("السلام عليكم",text,max_deletions=3,max_l_dist=3)[0].matched," ") - return text - - -"""SarcasmArabicPreprocessor""" - - -class SarcasmArabicPreprocessor(ArabertPreprocessor): - def __init__( - self, - model_name, - keep_emojis=False, - remove_html_markup=True, - replace_urls_emails_mentions=True, - ): - if "UBC-NLP" in model_name: - keep_emojis = True - super().__init__( - model_name, keep_emojis, remove_html_markup, replace_urls_emails_mentions - ) - self.true_model_name = model_name - - def preprocess(self, text): - if "UBC-NLP" in self.true_model_name: - return self.ubc_prep(text) - else: - return super(SarcasmArabicPreprocessor, self).preprocess(text) - - def ubc_prep(self, text): - text = re.sub("\s", " ", text) - text = text.replace("\\n", " ") - text = araby.strip_tashkeel(text) - text = araby.strip_tatweel(text) - # replace all possible URLs - for reg in url_regexes: - text = re.sub(reg, " URL ", text) - # replace mentions with USER - text = re.sub(user_mention_regex, " USER ", text) - # replace hashtags with HASHTAG - # text = re.sub(r"#[\w\d]+", " HASH TAG ", text) - text = text.replace("#", " HASH TAG ") - text = text.replace("_", " ") - text = text.replace('"', " ") - text = " ".join(text.split()) - text = super(SarcasmArabicPreprocessor, self).preprocess(text) - return text - - -"""NoAOAArabicPreprocessor""" - - -class NoAOAArabicPreprocessor(ArabertPreprocessor): - def __init__( - self, - model_name, - keep_emojis=False, - remove_html_markup=True, - replace_urls_emails_mentions=True, - ): - if "UBC-NLP" in model_name: - keep_emojis = True - super().__init__( - model_name, keep_emojis, remove_html_markup, replace_urls_emails_mentions - ) - self.true_model_name = model_name - - def preprocess(self, text): - if "UBC-NLP" in self.true_model_name: - return self.ubc_prep(text) - else: - return super(NoAOAArabicPreprocessor, self).preprocess(text) - - def ubc_prep(self, text): - text = re.sub("\s", " ", text) - text = text.replace("\\n", " ") - text = araby.strip_tashkeel(text) - text = araby.strip_tatweel(text) - # replace all possible URLs - for reg in url_regexes: - text = re.sub(reg, " URL ", text) - # replace mentions with USER - text = re.sub(user_mention_regex, " USER ", text) - # replace hashtags with HASHTAG - # text = re.sub(r"#[\w\d]+", " HASH TAG ", text) - text = text.replace("#", " HASH TAG ") - text = text.replace("_", " ") - text = " ".join(text.split()) - text = super(NoAOAArabicPreprocessor, self).preprocess(text) - text = text.replace("السلام عليكم", " ") - text = text.replace("ورحمة الله وبركاته", " ") - matched = find_near_matches("السلام عليكم", text, max_deletions=3, max_l_dist=3) - if len(matched) > 0: - text = text.replace(matched[0].matched, " ") - 
matched = find_near_matches( - "ورحمة الله وبركاته", text, max_deletions=3, max_l_dist=3 - ) - if len(matched) > 0: - text = text.replace(matched[0].matched, " ") - return text - - -class CnnBertForSequenceClassification(BertPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.config = config - - self.bert = BertModel(config) - - filter_sizes = [1, 2, 3, 4, 5] - num_filters = 32 - self.convs1 = nn.ModuleList( - [nn.Conv2d(4, num_filters, (K, config.hidden_size)) for K in filter_sizes] - ) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - self.classifier = nn.Linear(len(filter_sizes) * num_filters, config.num_labels) - - self.init_weights() - - def forward( - self, - input_ids=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - labels=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - - return_dict = ( - return_dict if return_dict is not None else self.config.use_return_dict - ) - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - x = outputs[2][-4:] - - x = torch.stack(x, dim=1) - x = [F.relu(conv(x)).squeeze(3) for conv in self.convs1] - x = [F.max_pool1d(i, i.size(2)).squeeze(2) for i in x] - x = torch.cat(x, 1) - x = self.dropout(x) - logits = self.classifier(x) - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and ( - labels.dtype == torch.long or labels.dtype == torch.int - ): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = nn.MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = nn.CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = nn.BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return SequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=None, - attentions=outputs.attentions, - ) - - -class CNNTextClassificationPipeline: - def __init__(self, model_path, device, return_all_scores=False): - self.model_path = model_path - self.model = CnnBertForSequenceClassification.from_pretrained(self.model_path) - # Special handling - self.device = torch.device("cpu" if device < 0 else f"cuda:{device}") - if self.device.type == "cuda": - self.model = self.model.to(self.device) - self.tokenizer = AutoTokenizer.from_pretrained(model_path) - self.return_all_scores = return_all_scores - - @contextmanager - def device_placement(self): - """ - Context Manager allowing tensor allocation on the user-specified device in framework agnostic way. 
- Returns: - Context manager - Examples:: - # Explicitly ask for tensor allocation on CUDA device :0 - pipe = pipeline(..., device=0) - with pipe.device_placement(): - # Every framework specific tensor allocation will be done on the request device - output = pipe(...) - """ - - if self.device.type == "cuda": - torch.cuda.set_device(self.device) - - yield - - def ensure_tensor_on_device(self, **inputs): - """ - Ensure PyTorch tensors are on the specified device. - Args: - inputs (keyword arguments that should be :obj:`torch.Tensor`): The tensors to place on :obj:`self.device`. - Return: - :obj:`Dict[str, torch.Tensor]`: The same as :obj:`inputs` but on the proper device. - """ - return { - name: tensor.to(self.device) if isinstance(tensor, torch.Tensor) else tensor - for name, tensor in inputs.items() - } - - def __call__(self, text): - """ - Classify the text(s) given as inputs. - Args: - args (:obj:`str` or :obj:`List[str]`): - One or several texts (or one list of prompts) to classify. - Return: - A list or a list of list of :obj:`dict`: Each result comes as list of dictionaries with the following keys: - - **label** (:obj:`str`) -- The label predicted. - - **score** (:obj:`float`) -- The corresponding probability. - If ``self.return_all_scores=True``, one such dictionary is returned per label. - """ - # outputs = super().__call__(*args, **kwargs) - inputs = self.tokenizer.batch_encode_plus( - text, - add_special_tokens=True, - max_length=64, - padding=True, - truncation="longest_first", - return_tensors="pt", - ) - - with torch.no_grad(): - inputs = self.ensure_tensor_on_device(**inputs) - predictions = self.model(**inputs)[0].cpu() - - predictions = predictions.numpy() - - if self.model.config.num_labels == 1: - scores = 1.0 / (1.0 + np.exp(-predictions)) - else: - scores = np.exp(predictions) / np.exp(predictions).sum(-1, keepdims=True) - if self.return_all_scores: - return [ - [ - {"label": self.model.config.id2label[i], "score": score.item()} - for i, score in enumerate(item) - ] - for item in scores - ] - else: - return [ - {"label": self.inv_label_map[item.argmax()], "score": item.max().item()} - for item in scores - ] diff --git a/spaces/awacke1/AI-Standard-Operating-Procedures/app.py b/spaces/awacke1/AI-Standard-Operating-Procedures/app.py deleted file mode 100644 index 29ac58899e1bcf76822dbeefaae7a12c5c3c639a..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AI-Standard-Operating-Procedures/app.py +++ /dev/null @@ -1,436 +0,0 @@ -import streamlit as st -#import graphviz as gv -#import pillow as pil -#from graphviz import Source -#from PIL import Image -#import io - -st.set_page_config(layout="wide") - -st.markdown(""" - -## Standard Operating Procedures -| SOP No. 
| Standard Operating Procedure | Description | Top Ten Keywords | Wikipedia Link | SOP Icon | -|---------|------------------------------|-------------|-----------------|----------------|---------| -| 1 | SOP-01: Risk Assessment | Identifying, evaluating, and prioritizing compliance risks | risk, assessment, evaluate, prioritize, compliance, identify, analysis, management, mitigation, control | https://en.wikipedia.org/wiki/Risk_assessment | 🌡️ | -| 2 | SOP-02: Policy Development | Creating clear and concise compliance policies and procedures | policy, development, create, clear, concise, compliance, procedure, regulation, standard, guideline | https://en.wikipedia.org/wiki/Policy | 📜 | -| 3 | SOP-03: Training | Providing regular compliance training to employees | training, compliance, regular, employee, development, program, education, workshop, seminar, course | https://en.wikipedia.org/wiki/Training | 🎓 | -| 4 | SOP-04: Monitoring | Conducting periodic compliance audits and monitoring activities | monitoring, periodic, compliance, audit, review, assessment, evaluation, inspection, surveillance, oversight | https://en.wikipedia.org/wiki/Monitoring_and_evaluation | 👀 | -| 5 | SOP-05: Reporting | Establishing a process for reporting and addressing compliance issues | reporting, process, establish, compliance, issue, address, record, communication, notification, investigation | https://en.wikipedia.org/wiki/Reporting | 📊 | -| 6 | SOP-06: Incident Management | Handling compliance incidents and implementing corrective actions | incident, management, compliance, handle, implement, corrective, action, investigation, response, resolution | https://en.wikipedia.org/wiki/Incident_management | 🚨 | -| 7 | SOP-07: Recordkeeping | Maintaining accurate and up-to-date compliance records and documentation | recordkeeping, maintain, accurate, up-to-date, compliance, documentation, archive, storage, filing, record | https://en.wikipedia.org/wiki/Record_keeping | 📁 | - -st.graphviz_chart(''' -digraph { - // Nodes - A [label="SOP-01: Risk Assessment 🎯"] - B [label="Risk Context 📚"] - C [label="Evaluating Risks 📊"] - D [label="Prioritizing Risks ⚖️"] - E [label="Compliance Risk ⚠️"] - F [label="Analysis Role 🔍"] - G [label="Risk Management 💼"] - - // Edges - A -> B - A -> C - A -> D - A -> E - A -> F - A -> G - } -''') - -1. What is the purpose of SOP-01: Risk Assessment? -- The purpose of SOP-01: Risk Assessment is to identify, evaluate, and prioritize compliance risks. - -2. What does the term “risk” refer to in the context of risk assessment? -- In the context of risk assessment, the term “risk” refers to the potential for an event or situation to have a negative impact on an organization or project. - -3. What is the process for evaluating risks? -- The process for evaluating risks typically involves identifying the potential risks, analyzing their likelihood and potential impact, and prioritizing them based on their severity. - -4. How do you prioritize risks in a risk assessment? -- Risks can be prioritized in a risk assessment by considering their potential impact, likelihood of occurrence, and the organization’s ability to mitigate or control them. - -5. What is compliance risk? -- Compliance risk refers to the risk associated with non-compliance with laws, regulations, or internal policies and procedures. - -6. What is the role of analysis in risk assessment? 
-- Analysis plays a crucial role in risk assessment by helping to identify potential risks, evaluate their impact and likelihood, and develop strategies for mitigating or controlling them. - -7. What is risk management? -- Risk management is the process of identifying, assessing, and prioritizing risks, and developing strategies to mitigate or control them. - -8. What is risk mitigation? -- Risk mitigation refers to the process of minimizing or preventing the negative impact of potential risks. - -9. What is risk control? -- Risk control refers to the measures taken to manage or reduce the likelihood and severity of potential risks. - -10. Why is risk assessment important? -- Risk assessment is important because it helps organizations to identify and manage potential risks, leading to better decision-making, improved performance, and reduced negative impacts. - - -st.graphviz_chart(''' -digraph { - H [label="SOP-02: Policy Development 📝"] - I [label="Policy Definition 📚"] - J [label="Policy Process 🔄"] - K [label="Clear Policies 💡"] - H -> I - H -> J - H -> K - } -''') - -1. What is the purpose of SOP-02: Policy Development? -- The purpose of SOP-02: Policy Development is to create clear and concise compliance policies and procedures. - -2. What is a policy? -- A policy is a set of guidelines or principles that are developed to guide decision-making and behavior within an organization. - -3. What is the process for policy development? -- The process for policy development typically involves identifying the need for the policy, researching and gathering information, drafting the policy, obtaining feedback and approval, and implementing the policy. - -4. Why is it important for policies to be clear and concise? -- It is important for policies to be clear and concise so that they can be easily understood and followed by all members of the organization. This helps to ensure that everyone is on the same page and that compliance is maintained. - -5. What is compliance? -- Compliance refers to the act of following laws, regulations, or internal policies and procedures. - -6. What is a procedure? -- A procedure is a set of step-by-step instructions or guidelines for how to perform a specific task or activity. - -7. What is a regulation? -- A regulation is a rule or law that is put in place by a government or regulatory body to ensure compliance and standardization. - -8. What is a standard? -- A standard is a set of guidelines or principles that are developed to ensure consistent and high-quality performance or behavior. - -9. What is a guideline? -- A guideline is a set of recommendations or tips that are developed to assist with decision-making or performance. - -10. Why is policy development important? -- Policy development is important because it helps to ensure that an organization is operating in compliance with regulations and standards, while also promoting consistency and clarity in decision-making and behavior. - - -st.graphviz_chart(''' -digraph { - // Nodes - L [label="SOP-03: Training 📚"] - M [label="Training Definition 🧠"] - N [label="Regular Training 🗓️"] - O [label="Providing Training 💼"] - - L -> M - L -> N - L -> O - - } -''') - -1. What is the purpose of SOP-03: Training? -- The purpose of SOP-03: Training is to provide regular compliance training to employees. - -2. What is training? -- Training is the process of developing skills, knowledge, or behavior through education and instruction. - -3. Why is regular compliance training important? 
-- Regular compliance training is important to ensure that employees are aware of, and adhere to, laws, regulations, and company policies and procedures. - -4. What is compliance? -- Compliance refers to the act of following laws, regulations, or internal policies and procedures. - -5. Who is responsible for providing compliance training? -- It is typically the responsibility of the employer or organization to provide compliance training to their employees. - -6. What is employee development? -- Employee development refers to the process of improving an employee’s skills, knowledge, and abilities through training and education programs. - -7. What is a training program? -- A training program is a structured approach to employee development that is designed to improve skills, knowledge, and abilities related to a specific job or task. - -8. What is an education workshop? -- An education workshop is a training session that is designed to provide participants with information and skills related to a specific topic or field. - -9. What is a seminar? -- A seminar is a training event that typically involves an expert speaker or panel discussing a specific topic or issue. - -10. What is a training course? -- A training course is a structured program of learning that is typically designed to improve skills or knowledge related to a specific job or task. - -st.graphviz_chart(''' -digraph { - // Nodes - - P [label="SOP-04: Monitoring 📈"] - Q [label="Monitoring Definition 👁️"] - R [label="Periodic Monitoring ⏳"] - - P -> Q - P -> R - - - } -''') - -1. What is the purpose of SOP-04: Monitoring? -- The purpose of SOP-04: Monitoring is to conduct periodic compliance audits and monitoring activities. - -2. What is monitoring? -- Monitoring is the process of tracking and observing an activity or process to ensure that it is operating as intended. - -3. What does periodic mean in the context of monitoring? -- In the context of monitoring, periodic refers to activities that are conducted at regular intervals, rather than continuously. - -4. What is compliance? -- Compliance refers to the act of following laws, regulations, or internal policies and procedures. - -5. What is an audit? -- An audit is a systematic examination of an organization or process to evaluate compliance, performance, or financial status. - -6. What is a review? -- A review is an evaluation of an organization or process to assess performance or compliance. - -7. What is an assessment? -- An assessment is a process of evaluating the performance, compliance, or quality of an organization or process. - -8. What is an evaluation? -- An evaluation is a systematic process of collecting and analyzing information to assess the effectiveness, efficiency, or relevance of an organization or process. - -9. What is an inspection? -- An inspection is an examination or review of an organization or process to evaluate compliance, performance, or safety. - -10. What is surveillance? -- Surveillance is the act of closely monitoring an activity or process to ensure compliance, safety, or security. - -st.graphviz_chart(''' -digraph { - // Nodes - - S [label="SOP-05: Reporting 📊"] - T [label="Reporting Process 🔄"] - U [label="Compliance Issues 🚩"] - - S -> T - S -> U - - } -''') - -1. What is the purpose of SOP-05: Reporting? -- The purpose of SOP-05: Reporting is to establish a process for reporting and addressing compliance issues. - -2. What is reporting? 
-- Reporting is the process of notifying others about an event or situation, typically for the purpose of documentation or action. - -3. What does the term “process” mean in the context of SOP-05: Reporting? -- In the context of SOP-05: Reporting, “process” refers to the steps and procedures that are established to ensure that compliance issues are identified, reported, and addressed in a timely and effective manner. - -4. What is compliance? -- Compliance refers to the act of following laws, regulations, or internal policies and procedures. - -5. What is a compliance issue? -- A compliance issue is an event or situation that violates laws, regulations, or internal policies and procedures. - -6. What does it mean to address a compliance issue? -- To address a compliance issue means to take appropriate steps to investigate, resolve, and prevent similar issues in the future. - -7. What is a record? -- A record is a document or other form of evidence that is created or maintained for legal, administrative, or business purposes. - -8. What is communication? -- Communication is the exchange of information between individuals or groups, typically through speaking, writing, or other forms of expression. - -9. What is notification? -- Notification is the process of informing individuals or groups about a particular event or situation. - -10. What is an investigation? -- An investigation is a process of gathering information and evidence to uncover the facts about a particular event or situation. - -st.graphviz_chart(''' -digraph { - - V [label="SOP-06: Incident Management 🚨"] - W [label="Incident Definition ❗"] - X [label="Handling Incidents 👩‍🔧"] - Y [label="Corrective Actions 🔧"] - - V -> W - V -> X - V -> Y - - } -''') - - -st.graphviz_chart(''' -digraph { - Z [label="SOP-07: Recordkeeping 🗄️"] - AA [label="Maintaining Records 📋"] - - Z -> AA - } -''') - - -1. What is the purpose of SOP-06: Incident Management? -- The purpose of SOP-06: Incident Management is to handle compliance incidents and implement corrective actions. - -2. What is an incident? -- An incident is an event or situation that is unexpected or disrupts normal operations. - -3. What is management? -- Management refers to the process of planning, organizing, and controlling resources to achieve organizational goals. - -4. What is compliance? -- Compliance refers to the act of following laws, regulations, or internal policies and procedures. - -5. What does it mean to handle an incident? -- To handle an incident means to respond to and manage the incident in a way that minimizes its impact and prevents a recurrence. - -6. What does it mean to implement corrective actions? -- To implement corrective actions means to take steps to address the root cause of an incident and prevent it from happening again. - -7. What is a corrective action? -- A corrective action is a step or process that is taken to address the root cause of an incident and prevent its recurrence. - -8. What is an investigation? -- An investigation is a process of gathering information and evidence to uncover the facts about a particular event or situation. - -9. What is a response? -- A response is the immediate action taken in response to an incident to prevent further harm or damage. - -10. What is a resolution? -- A resolution is a decision or action taken to resolve an incident or issue and to prevent its recurrence. - -1. What is the purpose of SOP-07: Recordkeeping? 
-- The purpose of SOP-07: Recordkeeping is to maintain accurate and up-to-date compliance records and documentation. - -2. What is recordkeeping? -- Recordkeeping is the process of creating, managing, and storing information for legal, administrative, or business purposes. - -3. What does it mean to maintain records? -- To maintain records means to keep records accurate, complete, and up-to-date to ensure that they are reliable and useful when needed. - -4. What does it mean for records to be accurate and up-to-date? -- For records to be accurate and up-to-date means that they reflect the current state of affairs and contain the correct information. - -5. What is compliance? -- Compliance refers to the act of following laws, regulations, or internal policies and procedures. - -6. What is documentation? -- Documentation is information that is recorded and stored for legal, administrative, or business purposes. - -7. What is an archive? -- An archive is a collection of historical records or documents that are preserved for research, reference, or legal purposes. - -8. What is storage? -- Storage is the physical or digital location where records or documents are kept for future reference or use. - -9. What is filing? -- Filing is the process of organizing documents or records into a structured system for easy retrieval and access. - -10. Why is recordkeeping important? -- Recordkeeping is important for maintaining compliance, establishing accountability, facilitating business operations, and preserving historical information/documentation. -""") - - - -# SOP-01: Risk Assessment -st.graphviz_chart(''' -digraph { - A [label="SOP-01: Risk Assessment 🎯"] - B [label="Risk Context 📚"] - C [label="Evaluating Risks 📊"] - D [label="Prioritizing Risks ⚖️"] - E [label="Compliance Risk ⚠️"] - F [label="Analysis Role 🔍"] - G [label="Risk Management 💼"] - - A -> B - A -> C - A -> D - A -> E - A -> F - A -> G -} -''') - -# SOP-02: Policy Development -st.graphviz_chart(''' -digraph { - H [label="SOP-02: Policy Development 📝"] - I [label="Policy Definition 📚"] - J [label="Policy Process 🔄"] - K [label="Clear Policies 💡"] - - H -> I - H -> J - H -> K -} -''') - -# SOP-03: Training -st.graphviz_chart(''' -digraph { - L [label="SOP-03: Training 📚"] - M [label="Training Definition 🧠"] - N [label="Regular Training 🗓️"] - O [label="Providing Training 💼"] - - L -> M - L -> N - L -> O -} -''') - -# SOP-04: Monitoring -st.graphviz_chart(''' -digraph { - P [label="SOP-04: Monitoring 📈"] - Q [label="Monitoring Definition 👁️"] - R [label="Periodic Monitoring ⏳"] - - P -> Q - P -> R -} -''') - -# SOP-05: Reporting -st.graphviz_chart(''' -digraph { - S [label="SOP-05: Reporting 📊"] - T [label="Reporting Process 🔄"] - U [label="Compliance Issues 🚩"] - - S -> T - S -> U -} -''') - -# SOP-06: Incident Management -st.graphviz_chart(''' -digraph { - V [label="SOP-06: Incident Management 🚨"] - W [label="Incident Definition ❗"] - X [label="Handling Incidents 👩‍🔧"] - Y [label="Corrective Actions 🔧"] - - V -> W - V -> X - V -> Y -} -''') - -# SOP-07: Recordkeeping -st.graphviz_chart(''' -digraph { - Z [label="SOP-07: Recordkeeping 🗄️"] - AA [label="Maintaining Records 📋"] - - Z -> AA -} -''') - diff --git a/spaces/awacke1/HTML5-Javascript-3D-Breakout-Game/index.html b/spaces/awacke1/HTML5-Javascript-3D-Breakout-Game/index.html deleted file mode 100644 index d3ffb661299aed01a7754c87e5ce5b0f0b095b08..0000000000000000000000000000000000000000 --- a/spaces/awacke1/HTML5-Javascript-3D-Breakout-Game/index.html +++ /dev/null @@ -1,46 +0,0 
@@ - - - - - - - - - - - - - - - - - - - - - - diff --git a/spaces/awacke1/QandAGenerator/README.md b/spaces/awacke1/QandAGenerator/README.md deleted file mode 100644 index 7258924ff6c68bf35a22896d3a9275148bac8c07..0000000000000000000000000000000000000000 --- a/spaces/awacke1/QandAGenerator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 📖NLP Semantic Role Label QnA❓ -emoji: 📖❓ -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/awacke1/USMLE-Medical-License-Exam-EDA/README.md b/spaces/awacke1/USMLE-Medical-License-Exam-EDA/README.md deleted file mode 100644 index 3e2de7f85cf2a76648ffebf9a4dddb8c7a2f6e76..0000000000000000000000000000000000000000 --- a/spaces/awacke1/USMLE-Medical-License-Exam-EDA/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: USMLE Medical License Exam EDA -emoji: 🎓💡 -colorFrom: yellow -colorTo: yellow -sdk: streamlit -sdk_version: 1.27.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/tonemapping_fragment.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/tonemapping_fragment.glsl.js deleted file mode 100644 index 46944f4ba1bb58144984a4f294c92c070250290f..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/tonemapping_fragment.glsl.js +++ /dev/null @@ -1,7 +0,0 @@ -export default /* glsl */` -#if defined( TONE_MAPPING ) - - gl_FragColor.rgb = toneMapping( gl_FragColor.rgb ); - -#endif -`; diff --git a/spaces/barani/ControlNet/app_scribble_interactive.py b/spaces/barani/ControlNet/app_scribble_interactive.py deleted file mode 100644 index 36663c5a1fa37492bfa717c301d33a6b0b49fff5..0000000000000000000000000000000000000000 --- a/spaces/barani/ControlNet/app_scribble_interactive.py +++ /dev/null @@ -1,112 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr -import numpy as np - -from utils import randomize_seed_fn - - -def create_canvas(w, h): - return np.zeros(shape=(h, w, 3), dtype=np.uint8) + 255 - - -def create_demo(process, max_images=12, default_num_images=3): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - canvas_width = gr.Slider(label='Canvas width', - minimum=256, - maximum=512, - value=512, - step=1) - canvas_height = gr.Slider(label='Canvas height', - minimum=256, - maximum=512, - value=512, - step=1) - create_button = gr.Button('Open drawing canvas!') - image = gr.Image(tool='sketch', brush_radius=10) - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button('Run') - with gr.Accordion('Advanced options', open=False): - num_samples = gr.Slider(label='Number of images', - minimum=1, - maximum=max_images, - value=default_num_images, - step=1) - image_resolution = gr.Slider(label='Image resolution', - minimum=256, - maximum=512, - value=512, - step=256) - num_steps = gr.Slider(label='Number of steps', - minimum=1, - maximum=100, - value=20, - step=1) - guidance_scale = gr.Slider(label='Guidance scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=1000000, - step=1, - value=0, - randomize=True) - randomize_seed = gr.Checkbox(label='Randomize seed', - value=True) - a_prompt = 
gr.Textbox( - label='Additional prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result = gr.Gallery(label='Output', show_label=False).style( - columns=2, object_fit='scale-down') - - create_button.click(fn=create_canvas, - inputs=[canvas_width, canvas_height], - outputs=image, - queue=False) - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - num_steps, - guidance_scale, - seed, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - ).then( - fn=process, - inputs=inputs, - outputs=result, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - ).then( - fn=process, - inputs=inputs, - outputs=result, - ) - return demo - - -if __name__ == '__main__': - from model import Model - model = Model(task_name='scribble') - demo = create_demo(model.process_scribble_interactive) - demo.queue().launch() diff --git a/spaces/better57/CHATGPT/readme/README_en.md b/spaces/better57/CHATGPT/readme/README_en.md deleted file mode 100644 index a906ecb3ebc411f5cdeb33d661266a489a20c3b0..0000000000000000000000000000000000000000 --- a/spaces/better57/CHATGPT/readme/README_en.md +++ /dev/null @@ -1,127 +0,0 @@ -
-简体中文 | English | 日本語
-
-川虎 Chat 🐯 Chuanhu Chat
-
-[Logo]
-
-Lightweight and User-friendly Web-UI for LLMs including ChatGPT/ChatGLM/LLaMA
-
-[Badges: Tests Passing | GitHub Contributors | GitHub pull requests]

- Streaming / Unlimited conversations / Save history / Preset prompts / Chat with files / Web search
- LaTeX rendering / Table rendering / Code highlighting
- Auto dark mode / Adaptive web interface / WeChat-like theme
- Multi-parameters tuning / Multi-API-Key support / Multi-user support
- Compatible with GPT-4 / Local deployment for LLMs -

-Video Tutorial · 2.0 Introduction · 3.0 Introduction & Tutorial || Online trial · One-Click deployment
-
-[Animation Demo]
-
-
-
-## Usage Tips
-
-- To better control ChatGPT, use the System Prompt.
-- To use a Prompt Template, select the Prompt Template Collection file first, and then choose a prompt from the drop-down menu.
-- To try again if the response is unsatisfactory, use the `🔄 Regenerate` button.
-- To start a new line in the input box, press Shift + Enter.
-- To quickly switch between input history, press the ↑ and ↓ keys in the input box.
-- To deploy the program onto a server, change the last line of the program to `demo.launch(server_name="0.0.0.0", server_port=<your port number>)`.
-- To get a public shared link, change the last line of the program to `demo.launch(share=True)`. Please note that the program must be running in order to be accessed via a public link.
-- To use it in Hugging Face Spaces, it is recommended to **Duplicate Space** and run the program in your own Space for a faster and more secure experience.
-
-## Installation
-
-```shell
-git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git
-cd ChuanhuChatGPT
-pip install -r requirements.txt
-```
-
-Then make a copy of `config_example.json`, rename it to `config.json`, and fill in your API key and other settings in the file.
-
-```shell
-python ChuanhuChatbot.py
-```
-
-A browser window will open and you will be able to chat with ChatGPT.
-
-> **Note**
->
-> Please check our [wiki page](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程) for detailed instructions.
-
-## Troubleshooting
-
-When you encounter problems, first try manually pulling the latest changes of this project. The steps are as follows:
-
-1. Download the latest code archive by clicking on `Download ZIP` on the webpage, or
-   ```shell
-   git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f
-   ```
-2. Try installing the dependencies again (as this project may have introduced new dependencies)
-   ```
-   pip install -r requirements.txt
-   ```
-3. Update Gradio
-   ```
-   pip install gradio --upgrade --force-reinstall
-   ```
-
-Generally, you can solve most problems by following these steps; the update commands are also collected into a single sequence below.
-
-If the problem still exists, please refer to this page: [Frequently Asked Questions (FAQ)](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题)
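-For convenience, the three manual-update steps above can be run as one shell sequence. This is a sketch that assumes you are inside the ChuanhuChatGPT checkout directory:
-
-```shell
-# pull the latest changes, then refresh dependencies and Gradio
-git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f
-pip install -r requirements.txt
-pip install gradio --upgrade --force-reinstall
-```
-
-If the problem still exists after updating, consult the FAQ page linked above.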
- -## More Information - -More information could be found in our [wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki): - -- [How to contribute a translation](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/Localization) -- [How to make a contribution](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南) -- [How to cite the project](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可#如何引用该项目) -- [Project changelog](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/更新日志) -- [Project license](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可) - -## Starchart - -[![Star History Chart](https://api.star-history.com/svg?repos=GaiZhenbiao/ChuanhuChatGPT&type=Date)](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date) - -## Contributors - - - - - -## Sponsor - -🐯 If you find this project helpful, feel free to buy me a coke or a cup of coffee~ - -Buy Me A Coffee - -image diff --git a/spaces/bibekyess/bgpt/chat.py b/spaces/bibekyess/bgpt/chat.py deleted file mode 100644 index ce2eb8a4942a3509d125a769ce54c8432a8d064c..0000000000000000000000000000000000000000 --- a/spaces/bibekyess/bgpt/chat.py +++ /dev/null @@ -1,122 +0,0 @@ -import json -import random - -import torch - -from model import NeuralNet -from nltk_utils import bag_of_words, tokenize -from spell_check import correct_typos - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -with open("intents.json") as json_data: - intents = json.load(json_data) - -FILE = "data.pth" -data = torch.load(FILE) - -input_size = data["input_size"] -hidden_size = data["hidden_size"] -output_size = data["output_size"] -all_words = data["all_words"] -tags = data["tags"] -model_state = data["model_state"] - -model = NeuralNet(input_size, hidden_size, output_size).to(device) -model.load_state_dict(model_state) -model.eval() - -bot_name = "BGPT" -# print( -# "Hello, I am B-BOT, personal ChatBOT of Mr. Bibek. Let's chat! (type 'quit' or 'q' to exit)" # NoQA -# ) - -def generate_tag(sentence): - # sentence = input("You: ") - sentence = correct_typos(sentence) - # print(sentence) - if sentence.lower() == "quit" or sentence.lower() == "q": - # Needs to quit - pass - - sentence = tokenize(sentence) - X = bag_of_words(sentence, all_words) - X = X.reshape(1, X.shape[0]) - X = torch.from_numpy(X).to(device) - - output = model(X) - _, predicted = torch.max(output, dim=1) - - tag = tags[predicted.item()] - return tag - -def generate_response(sentence): - # sentence = input("You: ") - sentence = correct_typos(sentence) - # print(sentence) - if sentence.lower() == "quit" or sentence.lower() == "q": - # Needs to quit - pass - - sentence = tokenize(sentence) - X = bag_of_words(sentence, all_words) - X = X.reshape(1, X.shape[0]) - X = torch.from_numpy(X).to(device) - - output = model(X) - _, predicted = torch.max(output, dim=1) - - tag = tags[predicted.item()] - - probs = torch.softmax(output, dim=1) - prob = probs[0][predicted.item()] - if prob.item() > 0.8: - for intent in intents["intents"]: - if tag == intent["tag"]: - return f"{bot_name}: {random.choice(intent['responses'])}" - else: - return ( - f"{bot_name}: Sorry, I didn't understand... Can you be more " - "specific on your question? You can ask about Bibek's skillset, " - "experiences, portfolio, education, achievements " - "and KAIST activities." 
- "These are some sample questions: " - "(I) Tell me about Bibek,\n" - "(II) What skills does he have?,\n" - "(III) What work experience does Bibek have?,\n" - "(IV) What is Bibek's educational background?,\n" - "(V) What awards has he won?,\n" - "(VI) What projects has he completed? &\n" - "(VII) How can I contact Bibek?" - ) - - -# while True: -# # sentence = "do you use credit cards?" -# sentence = input("You: ") -# if sentence.lower() == "quit" or sentence.lower() == "q": -# break - -# sentence = tokenize(sentence) -# X = bag_of_words(sentence, all_words) -# X = X.reshape(1, X.shape[0]) -# X = torch.from_numpy(X).to(device) - -# output = model(X) -# _, predicted = torch.max(output, dim=1) - -# tag = tags[predicted.item()] - -# probs = torch.softmax(output, dim=1) -# prob = probs[0][predicted.item()] -# if prob.item() > 0.8: -# for intent in intents["intents"]: -# if tag == intent["tag"]: -# print(f"{bot_name}: {random.choice(intent['responses'])}") -# else: -# print( -# f"{bot_name}: Sorry, I do not understand... Can you be more " -# "specific on your question? You can ask about Bibek's skillset, " -# "experiences, portfolio, education, achievements " -# "and KAIST activities." -# ) diff --git a/spaces/bioriAsaeru/text-to-voice/ExpressVPN671KeysByDuCkyXA[PATCHED] Download.md b/spaces/bioriAsaeru/text-to-voice/ExpressVPN671KeysByDuCkyXA[PATCHED] Download.md deleted file mode 100644 index 7977db8e78dbf4e4c505b1ba53053645d6f7de67..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/ExpressVPN671KeysByDuCkyXA[PATCHED] Download.md +++ /dev/null @@ -1,12 +0,0 @@ - -


-

-

ExpressVPN671KeysByDuCkyXA download


Download: https://urloso.com/2uyOa1



-


-

-


-


-


-

-
-
\ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Formatodeactadematrimonioenblancoparallenar.md b/spaces/bioriAsaeru/text-to-voice/Formatodeactadematrimonioenblancoparallenar.md deleted file mode 100644 index 70c01368e36faef4accef8723be589b921527f85..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Formatodeactadematrimonioenblancoparallenar.md +++ /dev/null @@ -1,6 +0,0 @@ -

Formato de acta de matrimonio en blanco para llenar (blank marriage certificate form, ready to fill in)


Download File ✫✫✫ https://urloso.com/2uyP3L



-
-
-
-

diff --git a/spaces/bioriAsaeru/text-to-voice/Handjob Swallows Cock Ring.md b/spaces/bioriAsaeru/text-to-voice/Handjob Swallows Cock Ring.md deleted file mode 100644 index 8523e2284fcd485d9779ec3d50c3dae7db36bbbf..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Handjob Swallows Cock Ring.md +++ /dev/null @@ -1,6 +0,0 @@ -

Handjob Swallows Cock Ring


DOWNLOAD · https://urloso.com/2uyO8E



- - aaccfb2cb3
-
-

diff --git a/spaces/bioriAsaeru/text-to-voice/Hitman Blood Money Version 1.2 Repack Mr DJ Download.md b/spaces/bioriAsaeru/text-to-voice/Hitman Blood Money Version 1.2 Repack Mr DJ Download.md deleted file mode 100644 index c9360d136c2126f25a3abeb796b1d01fdff88ad1..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Hitman Blood Money Version 1.2 Repack Mr DJ Download.md +++ /dev/null @@ -1,18 +0,0 @@ -

Hitman Blood Money version 1.2 repack Mr DJ download


Download File ✒ ✒ ✒ https://urloso.com/2uyPjt



-
-Developer: IO Interactive. Title: Hitman Blood Money (added June 16, 2013). Hitman Blood Money is a stealth action game with a stylish, cinematic presentation whose missions can be set in a variety of locations. It was developed by IO Interactive, published by Eidos Entertainment, and was released in Japan and in South Korea (2010). Download Hitman Blood Money version 1.2 from the link above. The game is described as the sequel to the 2006 Hitman Manhunt, likewise a stealth action game in the same style.
-
-
-

diff --git a/spaces/bradley6597/gdrive-illustration-search/style.css b/spaces/bradley6597/gdrive-illustration-search/style.css deleted file mode 100644 index da32a4592eb8d39c714ea28820c63714241aea5d..0000000000000000000000000000000000000000 --- a/spaces/bradley6597/gdrive-illustration-search/style.css +++ /dev/null @@ -1,30 +0,0 @@ -footer{ - display: none !important; -} - -td img{ - background-image: - linear-gradient(45deg, lightgrey 25%, transparent 25%), - linear-gradient(135deg, lightgrey 25%, transparent 25%), - linear-gradient(45deg, transparent 75%, lightgrey 75%), - linear-gradient(135deg, transparent 75%, lightgrey 75%); - - background-size: 20px 20px; - background-position: 0 0, 10px 0, 10px -10px, 0px 10px; -} - -#toTopBtn { - position: fixed; - bottom: 10px; - float: right; - right: 18.5%; - left: 77.25%; - height: 30px; - max-width: 100px; - width: 100%; - font-size: 12px; - border-color: rgba(217,24,120, .5); - background-color: rgba(35,153,249,.5); - padding: .5px; - border-radius: 4px; - } \ No newline at end of file diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/models/__init__.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/models/__init__.py deleted file mode 100644 index be6bfe4b787a132aeaabaed1c3437c9ecd5c656c..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/models/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -""" -Models for EnCodec, AudioGen, MusicGen, as well as the generic LMModel. -""" -# flake8: noqa -from . import builders, loaders -from .encodec import ( - CompressionModel, EncodecModel, DAC, - HFEncodecModel, HFEncodecCompressionModel) -from .audiogen import AudioGen -from .lm import LMModel -from .multibanddiffusion import MultiBandDiffusion -from .musicgen import MusicGen -from .unet import DiffusionUnet diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/test_conv.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/test_conv.py deleted file mode 100644 index 28fbc4f1a0ebaf41b56947b767958ae696e75eec..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/test_conv.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product -import math -import random - -import pytest -import torch -from torch import nn - -from audiocraft.modules import ( - NormConv1d, - NormConvTranspose1d, - StreamableConv1d, - StreamableConvTranspose1d, - pad1d, - unpad1d, -) - - -def test_get_extra_padding_for_conv1d(): - # TODO: Implement me! - pass - - -def test_pad1d_zeros(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='constant', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='constant', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='constant', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='constant', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='constant', value=0.) 
- - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='constant', value=0.) - - -def test_pad1d_reflect(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='reflect', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='reflect', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='reflect', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='reflect', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='reflect', value=0.) - - -def test_unpad1d(): - x = torch.randn(1, 1, 20) - - u1 = unpad1d(x, (5, 5)) - assert u1.shape[-1] == 10 - u2 = unpad1d(x, (0, 5)) - assert u2.shape[-1] == 15 - u3 = unpad1d(x, (5, 0)) - assert u3.shape[-1] == 15 - u4 = unpad1d(x, (0, 0)) - assert u4.shape[-1] == x.shape[-1] - - with pytest.raises(AssertionError): - unpad1d(x, (-1, 0)) - - with pytest.raises(AssertionError): - unpad1d(x, (0, -1)) - - with pytest.raises(AssertionError): - unpad1d(x, (-1, -1)) - - -class TestNormConv1d: - - def test_norm_conv1d_modules(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = int((T - kernel_size) / stride + 1) - wn_conv = NormConv1d(C, 1, kernel_size=4, norm='weight_norm') - gn_conv = NormConv1d(C, 1, kernel_size=4, norm='time_group_norm') - nn_conv = NormConv1d(C, 1, kernel_size=4, norm='none') - - assert isinstance(wn_conv.norm, nn.Identity) - assert isinstance(wn_conv.conv, nn.Conv1d) - - assert isinstance(gn_conv.norm, nn.GroupNorm) - assert isinstance(gn_conv.conv, nn.Conv1d) - - assert isinstance(nn_conv.norm, nn.Identity) - assert isinstance(nn_conv.conv, nn.Conv1d) - - for conv_layer in [wn_conv, gn_conv, nn_conv]: - out = conv_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestNormConvTranspose1d: - - def test_normalizations(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = (T - 1) * stride + (kernel_size - 1) + 1 - - wn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='weight_norm') - gn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='time_group_norm') - nn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='none') - - assert isinstance(wn_convtr.norm, nn.Identity) - assert isinstance(wn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(gn_convtr.norm, nn.GroupNorm) - assert isinstance(gn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(nn_convtr.norm, nn.Identity) - assert isinstance(nn_convtr.convtr, nn.ConvTranspose1d) - - for convtr_layer in [wn_convtr, gn_convtr, nn_convtr]: - out = convtr_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConv1d: - - def get_streamable_conv1d_output_length(self, length, kernel_size, stride, dilation): - # StreamableConv1d internally pads to make sure that the last window is full - padding_total = (kernel_size - 1) * dilation - (stride - 1) - n_frames = (length - kernel_size + 
padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length // stride - - def test_streamable_conv1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - C_out = 1 - - # conv params are [(kernel_size, stride, dilation)] - conv_params = [(4, 1, 1), (4, 2, 1), (3, 1, 3), (10, 5, 1), (3, 2, 3)] - for causal, (kernel_size, stride, dilation) in product([False, True], conv_params): - expected_out_length = self.get_streamable_conv1d_output_length(T, kernel_size, stride, dilation) - sconv = StreamableConv1d(C, C_out, kernel_size=kernel_size, stride=stride, dilation=dilation, causal=causal) - out = sconv(t0) - assert isinstance(out, torch.Tensor) - print(list(out.shape), [N, C_out, expected_out_length]) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConvTranspose1d: - - def get_streamable_convtr1d_output_length(self, length, kernel_size, stride): - padding_total = (kernel_size - stride) - return (length - 1) * stride - padding_total + (kernel_size - 1) + 1 - - def test_streamable_convtr1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out = 1 - - with pytest.raises(AssertionError): - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=False, trim_right_ratio=0.5) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=-1.) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=2) - - # causal params are [(causal, trim_right)] - causal_params = [(False, 1.0), (True, 1.0), (True, 0.5), (True, 0.0)] - # conv params are [(kernel_size, stride)] - conv_params = [(4, 1), (4, 2), (3, 1), (10, 5)] - for ((causal, trim_right_ratio), (kernel_size, stride)) in product(causal_params, conv_params): - expected_out_length = self.get_streamable_convtr1d_output_length(T, kernel_size, stride) - sconvtr = StreamableConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, - causal=causal, trim_right_ratio=trim_right_ratio) - out = sconvtr(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointRend/point_rend/mask_head.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointRend/point_rend/mask_head.py deleted file mode 100644 index 46dd64721578bd45eb208206bbd5e7908cb6a148..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointRend/point_rend/mask_head.py +++ /dev/null @@ -1,435 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
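-#
-# Descriptive note (added for readability): this file implements the PointRend mask
-# head. It first predicts a coarse mask for each box, then iteratively refines the
-# prediction only at adaptively selected, most-uncertain points; see
-# `calculate_uncertainty` and `_subdivision_inference` below.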
-import logging
-import math
-import numpy as np
-from typing import Dict, List, Tuple
-import fvcore.nn.weight_init as weight_init
-import torch
-from torch import Tensor, nn
-from torch.nn import functional as F
-
-from detectron2.config import configurable
-from detectron2.layers import Conv2d, ShapeSpec, cat, interpolate
-from detectron2.modeling import ROI_MASK_HEAD_REGISTRY
-from detectron2.modeling.roi_heads.mask_head import mask_rcnn_inference, mask_rcnn_loss
-from detectron2.structures import Boxes
-
-from .point_features import (
-    generate_regular_grid_point_coords,
-    get_point_coords_wrt_image,
-    get_uncertain_point_coords_on_grid,
-    get_uncertain_point_coords_with_randomness,
-    point_sample,
-    point_sample_fine_grained_features,
-    sample_point_labels,
-)
-from .point_head import build_point_head, roi_mask_point_loss
-
-
-def calculate_uncertainty(logits, classes):
-    """
-    We estimate uncertainty as the L1 distance between 0.0 and the logit prediction in 'logits'
-    for the foreground class in `classes`.
-    Args:
-        logits (Tensor): A tensor of shape (R, C, ...) or (R, 1, ...) for class-specific or
-            class-agnostic, where R is the total number of predicted masks in all images and C is
-            the number of foreground classes. The values are logits.
-        classes (list): A list of length R that contains either the predicted or the ground truth
-            class for each predicted mask.
-    Returns:
-        scores (Tensor): A tensor of shape (R, 1, ...) that contains uncertainty scores with
-            the most uncertain locations having the highest uncertainty score.
-    """
-    if logits.shape[1] == 1:
-        gt_class_logits = logits.clone()
-    else:
-        gt_class_logits = logits[
-            torch.arange(logits.shape[0], device=logits.device), classes
-        ].unsqueeze(1)
-    return -(torch.abs(gt_class_logits))
-
-
-class ConvFCHead(nn.Module):
-    """
-    A mask head with fully connected layers. Given pooled features, it first reduces channels and
-    spatial dimensions with conv layers and then uses FC layers to predict coarse masks analogously
-    to the standard box head.
- """ - - _version = 2 - - @configurable - def __init__( - self, input_shape: ShapeSpec, *, conv_dim: int, fc_dims: List[int], output_shape: Tuple[int] - ): - """ - Args: - conv_dim: the output dimension of the conv layers - fc_dims: a list of N>0 integers representing the output dimensions of N FC layers - output_shape: shape of the output mask prediction - """ - super().__init__() - - # fmt: off - input_channels = input_shape.channels - input_h = input_shape.height - input_w = input_shape.width - self.output_shape = output_shape - # fmt: on - - self.conv_layers = [] - if input_channels > conv_dim: - self.reduce_channel_dim_conv = Conv2d( - input_channels, - conv_dim, - kernel_size=1, - stride=1, - padding=0, - bias=True, - activation=F.relu, - ) - self.conv_layers.append(self.reduce_channel_dim_conv) - - self.reduce_spatial_dim_conv = Conv2d( - conv_dim, conv_dim, kernel_size=2, stride=2, padding=0, bias=True, activation=F.relu - ) - self.conv_layers.append(self.reduce_spatial_dim_conv) - - input_dim = conv_dim * input_h * input_w - input_dim //= 4 - - self.fcs = [] - for k, fc_dim in enumerate(fc_dims): - fc = nn.Linear(input_dim, fc_dim) - self.add_module("fc{}".format(k + 1), fc) - self.fcs.append(fc) - input_dim = fc_dim - - output_dim = int(np.prod(self.output_shape)) - - self.prediction = nn.Linear(fc_dims[-1], output_dim) - # use normal distribution initialization for mask prediction layer - nn.init.normal_(self.prediction.weight, std=0.001) - nn.init.constant_(self.prediction.bias, 0) - - for layer in self.conv_layers: - weight_init.c2_msra_fill(layer) - for layer in self.fcs: - weight_init.c2_xavier_fill(layer) - - @classmethod - def from_config(cls, cfg, input_shape): - output_shape = ( - cfg.MODEL.ROI_HEADS.NUM_CLASSES, - cfg.MODEL.ROI_MASK_HEAD.OUTPUT_SIDE_RESOLUTION, - cfg.MODEL.ROI_MASK_HEAD.OUTPUT_SIDE_RESOLUTION, - ) - fc_dim = cfg.MODEL.ROI_MASK_HEAD.FC_DIM - num_fc = cfg.MODEL.ROI_MASK_HEAD.NUM_FC - ret = dict( - input_shape=input_shape, - conv_dim=cfg.MODEL.ROI_MASK_HEAD.CONV_DIM, - fc_dims=[fc_dim] * num_fc, - output_shape=output_shape, - ) - return ret - - def forward(self, x): - N = x.shape[0] - for layer in self.conv_layers: - x = layer(x) - x = torch.flatten(x, start_dim=1) - for layer in self.fcs: - x = F.relu(layer(x)) - output_shape = [N] + list(self.output_shape) - return self.prediction(x).view(*output_shape) - - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - version = local_metadata.get("version", None) - - if version is None or version < 2: - logger = logging.getLogger(__name__) - logger.warning( - "Weight format of PointRend models have changed! " - "Applying automatic conversion now ..." 
- ) - for k in list(state_dict.keys()): - newk = k - if k.startswith(prefix + "coarse_mask_fc"): - newk = k.replace(prefix + "coarse_mask_fc", prefix + "fc") - if newk != k: - state_dict[newk] = state_dict[k] - del state_dict[k] - - -@ROI_MASK_HEAD_REGISTRY.register() -class PointRendMaskHead(nn.Module): - def __init__(self, cfg, input_shape: Dict[str, ShapeSpec]): - super().__init__() - self._feature_scales = {k: 1.0 / v.stride for k, v in input_shape.items()} - # point head - self._init_point_head(cfg, input_shape) - # coarse mask head - self.roi_pooler_in_features = cfg.MODEL.ROI_MASK_HEAD.IN_FEATURES - self.roi_pooler_size = cfg.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION - self._feature_scales = {k: 1.0 / v.stride for k, v in input_shape.items()} - in_channels = np.sum([input_shape[f].channels for f in self.roi_pooler_in_features]) - self._init_roi_head( - cfg, - ShapeSpec( - channels=in_channels, - width=self.roi_pooler_size, - height=self.roi_pooler_size, - ), - ) - - def _init_roi_head(self, cfg, input_shape): - self.coarse_head = ConvFCHead(cfg, input_shape) - - def _init_point_head(self, cfg, input_shape): - # fmt: off - self.mask_point_on = cfg.MODEL.ROI_MASK_HEAD.POINT_HEAD_ON - if not self.mask_point_on: - return - assert cfg.MODEL.ROI_HEADS.NUM_CLASSES == cfg.MODEL.POINT_HEAD.NUM_CLASSES - self.mask_point_in_features = cfg.MODEL.POINT_HEAD.IN_FEATURES - self.mask_point_train_num_points = cfg.MODEL.POINT_HEAD.TRAIN_NUM_POINTS - self.mask_point_oversample_ratio = cfg.MODEL.POINT_HEAD.OVERSAMPLE_RATIO - self.mask_point_importance_sample_ratio = cfg.MODEL.POINT_HEAD.IMPORTANCE_SAMPLE_RATIO - # next three parameters are use in the adaptive subdivions inference procedure - self.mask_point_subdivision_init_resolution = cfg.MODEL.ROI_MASK_HEAD.OUTPUT_SIDE_RESOLUTION - self.mask_point_subdivision_steps = cfg.MODEL.POINT_HEAD.SUBDIVISION_STEPS - self.mask_point_subdivision_num_points = cfg.MODEL.POINT_HEAD.SUBDIVISION_NUM_POINTS - # fmt: on - - in_channels = int(np.sum([input_shape[f].channels for f in self.mask_point_in_features])) - self.point_head = build_point_head(cfg, ShapeSpec(channels=in_channels, width=1, height=1)) - - # An optimization to skip unused subdivision steps: if after subdivision, all pixels on - # the mask will be selected and recomputed anyway, we should just double our init_resolution - while ( - 4 * self.mask_point_subdivision_init_resolution**2 - <= self.mask_point_subdivision_num_points - ): - self.mask_point_subdivision_init_resolution *= 2 - self.mask_point_subdivision_steps -= 1 - - def forward(self, features, instances): - """ - Args: - features (dict[str, Tensor]): a dict of image-level features - instances (list[Instances]): proposals in training; detected - instances in inference - """ - if self.training: - proposal_boxes = [x.proposal_boxes for x in instances] - coarse_mask = self.coarse_head(self._roi_pooler(features, proposal_boxes)) - losses = {"loss_mask": mask_rcnn_loss(coarse_mask, instances)} - if not self.mask_point_on: - return losses - - point_coords, point_labels = self._sample_train_points(coarse_mask, instances) - point_fine_grained_features = self._point_pooler(features, proposal_boxes, point_coords) - point_logits = self._get_point_logits( - point_fine_grained_features, point_coords, coarse_mask - ) - losses["loss_mask_point"] = roi_mask_point_loss(point_logits, instances, point_labels) - return losses - else: - pred_boxes = [x.pred_boxes for x in instances] - coarse_mask = self.coarse_head(self._roi_pooler(features, pred_boxes)) - return 
self._subdivision_inference(features, coarse_mask, instances) - - def _roi_pooler(self, features: List[Tensor], boxes: List[Boxes]): - """ - Extract per-box feature. This is similar to RoIAlign(sampling_ratio=1) except: - 1. It's implemented by point_sample - 2. It pools features across all levels and concat them, while typically - RoIAlign select one level for every box. However in the config we only use - one level (p2) so there is no difference. - - Returns: - Tensor of shape (R, C, pooler_size, pooler_size) where R is the total number of boxes - """ - features_list = [features[k] for k in self.roi_pooler_in_features] - features_scales = [self._feature_scales[k] for k in self.roi_pooler_in_features] - - num_boxes = sum(x.tensor.size(0) for x in boxes) - output_size = self.roi_pooler_size - point_coords = generate_regular_grid_point_coords(num_boxes, output_size, boxes[0].device) - # For regular grids of points, this function is equivalent to `len(features_list)' calls - # of `ROIAlign` (with `SAMPLING_RATIO=1`), and concat the results. - roi_features, _ = point_sample_fine_grained_features( - features_list, features_scales, boxes, point_coords - ) - return roi_features.view(num_boxes, roi_features.shape[1], output_size, output_size) - - def _sample_train_points(self, coarse_mask, instances): - assert self.training - gt_classes = cat([x.gt_classes for x in instances]) - with torch.no_grad(): - # sample point_coords - point_coords = get_uncertain_point_coords_with_randomness( - coarse_mask, - lambda logits: calculate_uncertainty(logits, gt_classes), - self.mask_point_train_num_points, - self.mask_point_oversample_ratio, - self.mask_point_importance_sample_ratio, - ) - # sample point_labels - proposal_boxes = [x.proposal_boxes for x in instances] - cat_boxes = Boxes.cat(proposal_boxes) - point_coords_wrt_image = get_point_coords_wrt_image(cat_boxes.tensor, point_coords) - point_labels = sample_point_labels(instances, point_coords_wrt_image) - return point_coords, point_labels - - def _point_pooler(self, features, proposal_boxes, point_coords): - point_features_list = [features[k] for k in self.mask_point_in_features] - point_features_scales = [self._feature_scales[k] for k in self.mask_point_in_features] - # sample image-level features - point_fine_grained_features, _ = point_sample_fine_grained_features( - point_features_list, point_features_scales, proposal_boxes, point_coords - ) - return point_fine_grained_features - - def _get_point_logits(self, point_fine_grained_features, point_coords, coarse_mask): - coarse_features = point_sample(coarse_mask, point_coords, align_corners=False) - point_logits = self.point_head(point_fine_grained_features, coarse_features) - return point_logits - - def _subdivision_inference(self, features, mask_representations, instances): - assert not self.training - - pred_boxes = [x.pred_boxes for x in instances] - pred_classes = cat([x.pred_classes for x in instances]) - - mask_logits = None - # +1 here to include an initial step to generate the coarsest mask - # prediction with init_resolution, when mask_logits is None. - # We compute initial mask by sampling on a regular grid. coarse_mask - # can be used as initial mask as well, but it's typically very low-res - # so it will be completely overwritten during subdivision anyway. 
- for _ in range(self.mask_point_subdivision_steps + 1): - if mask_logits is None: - point_coords = generate_regular_grid_point_coords( - pred_classes.size(0), - self.mask_point_subdivision_init_resolution, - pred_boxes[0].device, - ) - else: - mask_logits = interpolate( - mask_logits, scale_factor=2, mode="bilinear", align_corners=False - ) - uncertainty_map = calculate_uncertainty(mask_logits, pred_classes) - point_indices, point_coords = get_uncertain_point_coords_on_grid( - uncertainty_map, self.mask_point_subdivision_num_points - ) - - # Run the point head for every point in point_coords - fine_grained_features = self._point_pooler(features, pred_boxes, point_coords) - point_logits = self._get_point_logits( - fine_grained_features, point_coords, mask_representations - ) - - if mask_logits is None: - # Create initial mask_logits using point_logits on this regular grid - R, C, _ = point_logits.shape - mask_logits = point_logits.reshape( - R, - C, - self.mask_point_subdivision_init_resolution, - self.mask_point_subdivision_init_resolution, - ) - # The subdivision code will fail with the empty list of boxes - if len(pred_classes) == 0: - mask_rcnn_inference(mask_logits, instances) - return instances - else: - # Put point predictions to the right places on the upsampled grid. - R, C, H, W = mask_logits.shape - point_indices = point_indices.unsqueeze(1).expand(-1, C, -1) - mask_logits = ( - mask_logits.reshape(R, C, H * W) - .scatter_(2, point_indices, point_logits) - .view(R, C, H, W) - ) - mask_rcnn_inference(mask_logits, instances) - return instances - - -@ROI_MASK_HEAD_REGISTRY.register() -class ImplicitPointRendMaskHead(PointRendMaskHead): - def __init__(self, cfg, input_shape: Dict[str, ShapeSpec]): - super().__init__(cfg, input_shape) - - def _init_roi_head(self, cfg, input_shape): - assert hasattr(self, "num_params"), "Please initialize point_head first!" 
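-        # Descriptive note (added for readability): in Implicit PointRend this head does
-        # not predict a mask directly. It regresses a per-instance parameter vector of
-        # length `num_params` that parameterizes the implicit point head used below.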
- self.parameter_head = ConvFCHead(cfg, input_shape, output_shape=(self.num_params,)) - self.regularizer = cfg.MODEL.IMPLICIT_POINTREND.PARAMS_L2_REGULARIZER - - def _init_point_head(self, cfg, input_shape): - # fmt: off - self.mask_point_on = True # always on - assert cfg.MODEL.ROI_HEADS.NUM_CLASSES == cfg.MODEL.POINT_HEAD.NUM_CLASSES - self.mask_point_in_features = cfg.MODEL.POINT_HEAD.IN_FEATURES - self.mask_point_train_num_points = cfg.MODEL.POINT_HEAD.TRAIN_NUM_POINTS - # next two parameters are use in the adaptive subdivions inference procedure - self.mask_point_subdivision_steps = cfg.MODEL.POINT_HEAD.SUBDIVISION_STEPS - self.mask_point_subdivision_num_points = cfg.MODEL.POINT_HEAD.SUBDIVISION_NUM_POINTS - # fmt: on - - in_channels = int(np.sum([input_shape[f].channels for f in self.mask_point_in_features])) - self.point_head = build_point_head(cfg, ShapeSpec(channels=in_channels, width=1, height=1)) - self.num_params = self.point_head.num_params - - # inference parameters - self.mask_point_subdivision_init_resolution = int( - math.sqrt(self.mask_point_subdivision_num_points) - ) - assert ( - self.mask_point_subdivision_init_resolution - * self.mask_point_subdivision_init_resolution - == self.mask_point_subdivision_num_points - ) - - def forward(self, features, instances): - """ - Args: - features (dict[str, Tensor]): a dict of image-level features - instances (list[Instances]): proposals in training; detected - instances in inference - """ - if self.training: - proposal_boxes = [x.proposal_boxes for x in instances] - parameters = self.parameter_head(self._roi_pooler(features, proposal_boxes)) - losses = {"loss_l2": self.regularizer * (parameters**2).mean()} - - point_coords, point_labels = self._uniform_sample_train_points(instances) - point_fine_grained_features = self._point_pooler(features, proposal_boxes, point_coords) - point_logits = self._get_point_logits( - point_fine_grained_features, point_coords, parameters - ) - losses["loss_mask_point"] = roi_mask_point_loss(point_logits, instances, point_labels) - return losses - else: - pred_boxes = [x.pred_boxes for x in instances] - parameters = self.parameter_head(self._roi_pooler(features, pred_boxes)) - return self._subdivision_inference(features, parameters, instances) - - def _uniform_sample_train_points(self, instances): - assert self.training - proposal_boxes = [x.proposal_boxes for x in instances] - cat_boxes = Boxes.cat(proposal_boxes) - # uniform sample - point_coords = torch.rand( - len(cat_boxes), self.mask_point_train_num_points, 2, device=cat_boxes.tensor.device - ) - # sample point_labels - point_coords_wrt_image = get_point_coords_wrt_image(cat_boxes.tensor, point_coords) - point_labels = sample_point_labels(instances, point_coords_wrt_image) - return point_coords, point_labels - - def _get_point_logits(self, fine_grained_features, point_coords, parameters): - return self.point_head(fine_grained_features, point_coords, parameters) diff --git a/spaces/camenduru-com/seamless/Build/GZIP.loader.js b/spaces/camenduru-com/seamless/Build/GZIP.loader.js deleted file mode 100644 index 6bcb648e952a40f7347ee111a82cf4cac8a1d15b..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/seamless/Build/GZIP.loader.js +++ /dev/null @@ -1 +0,0 @@ -function createUnityInstance(t,r,d){function i(e,t){if(!i.aborted&&r.showBanner)return"error"==t&&(i.aborted=!0),r.showBanner(e,t);switch(t){case"error":console.error(e);break;case"warning":console.warn(e);break;default:console.log(e)}}function n(e){var 
t=e.reason||e.error,r=t?t.toString():e.message||e.reason||"",n=t&&t.stack?t.stack.toString():"";(r+="\n"+(n=n.startsWith(r)?n.substring(r.length):n).trim())&&c.stackTraceRegExp&&c.stackTraceRegExp.test(r)&&C(r,e.filename||t&&(t.fileName||t.sourceURL)||"",e.lineno||t&&(t.lineNumber||t.line)||0)}function e(e,t,r){var n=e[t];void 0!==n&&n||(console.warn('Config option "'+t+'" is missing or empty. Falling back to default value: "'+r+'". Consider updating your WebGL template to include the missing config option.'),e[t]=r)}d=d||function(){};var o,c={canvas:t,webglContextAttributes:{preserveDrawingBuffer:!1,powerPreference:2},cacheControl:function(e){return e==c.dataUrl?"must-revalidate":"no-store"},streamingAssetsUrl:"StreamingAssets",downloadProgress:{},deinitializers:[],intervals:{},setInterval:function(e,t){e=window.setInterval(e,t);return this.intervals[e]=!0,e},clearInterval:function(e){delete this.intervals[e],window.clearInterval(e)},preRun:[],postRun:[],print:function(e){console.log(e)},printErr:function(e){console.error(e),"string"==typeof e&&-1!=e.indexOf("wasm streaming compile failed")&&(-1!=e.toLowerCase().indexOf("mime")?i('HTTP Response Header "Content-Type" configured incorrectly on the server for file '+c.codeUrl+' , should be "application/wasm". Startup time performance will suffer.',"warning"):i('WebAssembly streaming compilation failed! This can happen for example if "Content-Encoding" HTTP header is incorrectly enabled on the server for file '+c.codeUrl+", but the file is not pre-compressed on disk (or vice versa). Check the Network tab in browser Devtools to debug server header configuration.","warning"))},locateFile:function(e){return"build.wasm"==e?this.codeUrl:e},disabledCanvasEvents:["contextmenu","dragstart"]};for(o in e(r,"companyName","Unity"),e(r,"productName","WebGL Player"),e(r,"productVersion","1.0"),r)c[o]=r[o];c.streamingAssetsUrl=new URL(c.streamingAssetsUrl,document.URL).href;var a=c.disabledCanvasEvents.slice();function s(e){e.preventDefault()}a.forEach(function(e){t.addEventListener(e,s)}),window.addEventListener("error",n),window.addEventListener("unhandledrejection",n),c.deinitializers.push(function(){for(var e in c.disableAccessToMediaDevices(),a.forEach(function(e){t.removeEventListener(e,s)}),window.removeEventListener("error",n),window.removeEventListener("unhandledrejection",n),c.intervals)window.clearInterval(e);c.intervals={}}),c.QuitCleanup=function(){for(var e=0;eIf using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported in Firefox over HTTP connections. '+n+' See https://bugzilla.mozilla.org/show_bug.cgi?id=1670675 for more information.':"Unable to parse "+c.frameworkUrl+'!
If using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported over HTTP connections. Migrate your server to use HTTPS.'),void i(r,"error"))}i("Unable to parse "+c.frameworkUrl+"! The file is corrupt, or compression was misconfigured? (check Content-Encoding HTTP Response Header on web server)","error")}var o=unityFramework;unityFramework=null,s.onload=null,a(o)},s.onerror=function(e){i("Unable to load file "+c.frameworkUrl+"! Check that the file exists on the remote server. (also check browser Console and Devtools Network tab to debug)","error")},document.body.appendChild(s),c.deinitializers.push(function(){document.body.removeChild(s)})}).then(function(e){e(c)});x(r="dataUrl"),e=c.cacheControl(c[r]),t=c.companyName&&c.productName?c.cachedFetch:c.fetchWithProgress,n=c[r],n=/file:\/\//.exec(n)?"same-origin":void 0;var r,e,t,n,o=t(c[r],{method:"GET",companyName:c.companyName,productName:c.productName,control:e,mode:n,onProgress:function(e){x(r,e)}}).then(function(e){return e.parsedBody}).catch(function(e){var t="Failed to download file "+c[r];"file:"==location.protocol?i(t+". Loading web pages via a file:// URL without a web server is not supported by this browser. Please use a local development web server to host Unity content, or use the Unity Build and Run option.","error"):console.error(t)});c.preRun.push(function(){c.addRunDependency("dataUrl"),o.then(function(e){var t=new DataView(e.buffer,e.byteOffset,e.byteLength),r=0,n="UnityWebData1.0\0";if(!String.fromCharCode.apply(null,e.subarray(r,r+n.length))==n)throw"unknown data format";var o=t.getUint32(r+=n.length,!0);for(r+=4;r - - :param radius: Standard deviation of the Gaussian kernel. - """ - - name = "GaussianBlur" - - def __init__(self, radius=2): - self.radius = radius - - def filter(self, image): - return image.gaussian_blur(self.radius) - - -class BoxBlur(MultibandFilter): - """Blurs the image by setting each pixel to the average value of the pixels - in a square box extending radius pixels in each direction. - Supports float radius of arbitrary size. Uses an optimized implementation - which runs in linear time relative to the size of the image - for any radius value. - - :param radius: Size of the box in one direction. Radius 0 does not blur, - returns an identical image. Radius 1 takes 1 pixel - in each direction, i.e. 9 pixels in total. - """ - - name = "BoxBlur" - - def __init__(self, radius): - if radius < 0: - msg = "radius must be >= 0" - raise ValueError(msg) - self.radius = radius - - def filter(self, image): - return image.box_blur(self.radius) - - -class UnsharpMask(MultibandFilter): - """Unsharp mask filter. - - See Wikipedia's entry on `digital unsharp masking`_ for an explanation of - the parameters. - - :param radius: Blur Radius - :param percent: Unsharp strength, in percent - :param threshold: Threshold controls the minimum brightness change that - will be sharpened - - .. 
_digital unsharp masking: https://en.wikipedia.org/wiki/Unsharp_masking#Digital_unsharp_masking - - """ # noqa: E501 - - name = "UnsharpMask" - - def __init__(self, radius=2, percent=150, threshold=3): - self.radius = radius - self.percent = percent - self.threshold = threshold - - def filter(self, image): - return image.unsharp_mask(self.radius, self.percent, self.threshold) - - -class BLUR(BuiltinFilter): - name = "Blur" - # fmt: off - filterargs = (5, 5), 16, 0, ( - 1, 1, 1, 1, 1, - 1, 0, 0, 0, 1, - 1, 0, 0, 0, 1, - 1, 0, 0, 0, 1, - 1, 1, 1, 1, 1, - ) - # fmt: on - - -class CONTOUR(BuiltinFilter): - name = "Contour" - # fmt: off - filterargs = (3, 3), 1, 255, ( - -1, -1, -1, - -1, 8, -1, - -1, -1, -1, - ) - # fmt: on - - -class DETAIL(BuiltinFilter): - name = "Detail" - # fmt: off - filterargs = (3, 3), 6, 0, ( - 0, -1, 0, - -1, 10, -1, - 0, -1, 0, - ) - # fmt: on - - -class EDGE_ENHANCE(BuiltinFilter): - name = "Edge-enhance" - # fmt: off - filterargs = (3, 3), 2, 0, ( - -1, -1, -1, - -1, 10, -1, - -1, -1, -1, - ) - # fmt: on - - -class EDGE_ENHANCE_MORE(BuiltinFilter): - name = "Edge-enhance More" - # fmt: off - filterargs = (3, 3), 1, 0, ( - -1, -1, -1, - -1, 9, -1, - -1, -1, -1, - ) - # fmt: on - - -class EMBOSS(BuiltinFilter): - name = "Emboss" - # fmt: off - filterargs = (3, 3), 1, 128, ( - -1, 0, 0, - 0, 1, 0, - 0, 0, 0, - ) - # fmt: on - - -class FIND_EDGES(BuiltinFilter): - name = "Find Edges" - # fmt: off - filterargs = (3, 3), 1, 0, ( - -1, -1, -1, - -1, 8, -1, - -1, -1, -1, - ) - # fmt: on - - -class SHARPEN(BuiltinFilter): - name = "Sharpen" - # fmt: off - filterargs = (3, 3), 16, 0, ( - -2, -2, -2, - -2, 32, -2, - -2, -2, -2, - ) - # fmt: on - - -class SMOOTH(BuiltinFilter): - name = "Smooth" - # fmt: off - filterargs = (3, 3), 13, 0, ( - 1, 1, 1, - 1, 5, 1, - 1, 1, 1, - ) - # fmt: on - - -class SMOOTH_MORE(BuiltinFilter): - name = "Smooth More" - # fmt: off - filterargs = (5, 5), 100, 0, ( - 1, 1, 1, 1, 1, - 1, 5, 5, 5, 1, - 1, 5, 44, 5, 1, - 1, 5, 5, 5, 1, - 1, 1, 1, 1, 1, - ) - # fmt: on - - -class Color3DLUT(MultibandFilter): - """Three-dimensional color lookup table. - - Transforms 3-channel pixels using the values of the channels as coordinates - in the 3D lookup table and interpolating the nearest elements. - - This method allows you to apply almost any color transformation - in constant time by using pre-calculated decimated tables. - - .. versionadded:: 5.2.0 - - :param size: Size of the table. One int or tuple of (int, int, int). - Minimal size in any dimension is 2, maximum is 65. - :param table: Flat lookup table. A list of ``channels * size**3`` - float elements or a list of ``size**3`` channels-sized - tuples with floats. Channels are changed first, - then first dimension, then second, then third. - Value 0.0 corresponds lowest value of output, 1.0 highest. - :param channels: Number of channels in the table. Could be 3 or 4. - Default is 3. - :param target_mode: A mode for the result image. Should have not less - than ``channels`` channels. Default is ``None``, - which means that mode wouldn't be changed. - """ - - name = "Color 3D LUT" - - def __init__(self, size, table, channels=3, target_mode=None, **kwargs): - if channels not in (3, 4): - msg = "Only 3 or 4 output channels are supported" - raise ValueError(msg) - self.size = size = self._check_size(size) - self.channels = channels - self.mode = target_mode - - # Hidden flag `_copy_table=False` could be used to avoid extra copying - # of the table if the table is specially made for the constructor. 
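-        # (generate() and transform() below pass _copy_table=False for exactly this
-        # reason: the tables they build are fresh lists that no other caller references.)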
-        copy_table = kwargs.get("_copy_table", True)
-        items = size[0] * size[1] * size[2]
-        wrong_size = False
-
-        numpy = None
-        if hasattr(table, "shape"):
-            try:
-                import numpy
-            except ImportError:  # pragma: no cover
-                pass
-
-        if numpy and isinstance(table, numpy.ndarray):
-            if copy_table:
-                table = table.copy()
-
-            if table.shape in [
-                (items * channels,),
-                (items, channels),
-                (size[2], size[1], size[0], channels),
-            ]:
-                table = table.reshape(items * channels)
-            else:
-                wrong_size = True
-
-        else:
-            if copy_table:
-                table = list(table)
-
-            # Convert to a flat list
-            if table and isinstance(table[0], (list, tuple)):
-                table, raw_table = [], table
-                for pixel in raw_table:
-                    if len(pixel) != channels:
-                        msg = (
-                            "The elements of the table should "
-                            f"have a length of {channels}."
-                        )
-                        raise ValueError(msg)
-                    table.extend(pixel)
-
-        if wrong_size or len(table) != items * channels:
-            msg = (
-                "The table should have either channels * size**3 float items "
-                "or size**3 items of channels-sized tuples with floats. "
-                f"Table should be: {channels}x{size[0]}x{size[1]}x{size[2]}. "
-                f"Actual length: {len(table)}"
-            )
-            raise ValueError(msg)
-        self.table = table
-
-    @staticmethod
-    def _check_size(size):
-        try:
-            _, _, _ = size
-        except ValueError as e:
-            msg = "Size should be either an integer or a tuple of three integers."
-            raise ValueError(msg) from e
-        except TypeError:
-            size = (size, size, size)
-        size = [int(x) for x in size]
-        for size_1d in size:
-            if not 2 <= size_1d <= 65:
-                msg = "Size should be in [2, 65] range."
-                raise ValueError(msg)
-        return size
-
-    @classmethod
-    def generate(cls, size, callback, channels=3, target_mode=None):
-        """Generates a new LUT using the provided callback.
-
-        :param size: Size of the table. Passed to the constructor.
-        :param callback: Function with three parameters which correspond
-                         to the three color channels. Will be called ``size**3``
-                         times with values from 0.0 to 1.0 and should return
-                         a tuple with ``channels`` elements.
-        :param channels: The number of channels the callback should return.
-        :param target_mode: Passed to the constructor of the resulting
-                            lookup table.
-        """
-        size_1d, size_2d, size_3d = cls._check_size(size)
-        if channels not in (3, 4):
-            msg = "Only 3 or 4 output channels are supported"
-            raise ValueError(msg)
-
-        table = [0] * (size_1d * size_2d * size_3d * channels)
-        idx_out = 0
-        for b in range(size_3d):
-            for g in range(size_2d):
-                for r in range(size_1d):
-                    table[idx_out : idx_out + channels] = callback(
-                        r / (size_1d - 1), g / (size_2d - 1), b / (size_3d - 1)
-                    )
-                    idx_out += channels
-
-        return cls(
-            (size_1d, size_2d, size_3d),
-            table,
-            channels=channels,
-            target_mode=target_mode,
-            _copy_table=False,
-        )
-
-    def transform(self, callback, with_normals=False, channels=None, target_mode=None):
-        """Transforms the table values using the provided callback and returns
-        a new LUT with altered values.
-
-        :param callback: A function which takes old lookup table values
-                         and returns a new set of values. The number
-                         of arguments the function should take is
-                         ``self.channels`` or ``3 + self.channels``
-                         if the ``with_normals`` flag is set.
-                         Should return a tuple of ``self.channels`` or
-                         ``channels`` elements if it is set.
-        :param with_normals: If true, ``callback`` will be called with
-                             coordinates in the color cube as the first
-                             three arguments. Otherwise, ``callback``
-                             will be called only with actual color values.
-        :param channels: The number of channels in the resulting lookup table.
- :param target_mode: Passed to the constructor of the resulting - lookup table. - """ - if channels not in (None, 3, 4): - msg = "Only 3 or 4 output channels are supported" - raise ValueError(msg) - ch_in = self.channels - ch_out = channels or ch_in - size_1d, size_2d, size_3d = self.size - - table = [0] * (size_1d * size_2d * size_3d * ch_out) - idx_in = 0 - idx_out = 0 - for b in range(size_3d): - for g in range(size_2d): - for r in range(size_1d): - values = self.table[idx_in : idx_in + ch_in] - if with_normals: - values = callback( - r / (size_1d - 1), - g / (size_2d - 1), - b / (size_3d - 1), - *values, - ) - else: - values = callback(*values) - table[idx_out : idx_out + ch_out] = values - idx_in += ch_in - idx_out += ch_out - - return type(self)( - self.size, - table, - channels=ch_out, - target_mode=target_mode or self.mode, - _copy_table=False, - ) - - def __repr__(self): - r = [ - f"{self.__class__.__name__} from {self.table.__class__.__name__}", - "size={:d}x{:d}x{:d}".format(*self.size), - f"channels={self.channels:d}", - ] - if self.mode: - r.append(f"target_mode={self.mode}") - return "<{}>".format(" ".join(r)) - - def filter(self, image): - from . import Image - - return image.color_lut_3d( - self.mode or image.mode, - Image.Resampling.BILINEAR, - self.channels, - self.size[0], - self.size[1], - self.size[2], - self.table, - ) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/new_baselines/mask_rcnn_R_50_FPN_50ep_LSJ.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/new_baselines/mask_rcnn_R_50_FPN_50ep_LSJ.py deleted file mode 100644 index 2ca1ede262cf5c37a3a54778458c74aff1479411..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/new_baselines/mask_rcnn_R_50_FPN_50ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_R_50_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter //= 2 # 100ep -> 50ep - -lr_multiplier.scheduler.milestones = [ - milestone // 2 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/utils/colormap.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/utils/colormap.py deleted file mode 100644 index 14ded1659b40b161358c4aaf9cc84ffe0ffafe64..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/utils/colormap.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -""" -An awesome colormap for really neat visualizations. -Copied from Detectron, and removed gray colors. 
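-Use colormap() for the full palette, or random_color() / random_colors() to sample from it.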
-""" - -import numpy as np -import random - -__all__ = ["colormap", "random_color", "random_colors"] - -# fmt: off -# RGB: -_COLORS = np.array( - [ - 0.000, 0.447, 0.741, - 0.850, 0.325, 0.098, - 0.929, 0.694, 0.125, - 0.494, 0.184, 0.556, - 0.466, 0.674, 0.188, - 0.301, 0.745, 0.933, - 0.635, 0.078, 0.184, - 0.300, 0.300, 0.300, - 0.600, 0.600, 0.600, - 1.000, 0.000, 0.000, - 1.000, 0.500, 0.000, - 0.749, 0.749, 0.000, - 0.000, 1.000, 0.000, - 0.000, 0.000, 1.000, - 0.667, 0.000, 1.000, - 0.333, 0.333, 0.000, - 0.333, 0.667, 0.000, - 0.333, 1.000, 0.000, - 0.667, 0.333, 0.000, - 0.667, 0.667, 0.000, - 0.667, 1.000, 0.000, - 1.000, 0.333, 0.000, - 1.000, 0.667, 0.000, - 1.000, 1.000, 0.000, - 0.000, 0.333, 0.500, - 0.000, 0.667, 0.500, - 0.000, 1.000, 0.500, - 0.333, 0.000, 0.500, - 0.333, 0.333, 0.500, - 0.333, 0.667, 0.500, - 0.333, 1.000, 0.500, - 0.667, 0.000, 0.500, - 0.667, 0.333, 0.500, - 0.667, 0.667, 0.500, - 0.667, 1.000, 0.500, - 1.000, 0.000, 0.500, - 1.000, 0.333, 0.500, - 1.000, 0.667, 0.500, - 1.000, 1.000, 0.500, - 0.000, 0.333, 1.000, - 0.000, 0.667, 1.000, - 0.000, 1.000, 1.000, - 0.333, 0.000, 1.000, - 0.333, 0.333, 1.000, - 0.333, 0.667, 1.000, - 0.333, 1.000, 1.000, - 0.667, 0.000, 1.000, - 0.667, 0.333, 1.000, - 0.667, 0.667, 1.000, - 0.667, 1.000, 1.000, - 1.000, 0.000, 1.000, - 1.000, 0.333, 1.000, - 1.000, 0.667, 1.000, - 0.333, 0.000, 0.000, - 0.500, 0.000, 0.000, - 0.667, 0.000, 0.000, - 0.833, 0.000, 0.000, - 1.000, 0.000, 0.000, - 0.000, 0.167, 0.000, - 0.000, 0.333, 0.000, - 0.000, 0.500, 0.000, - 0.000, 0.667, 0.000, - 0.000, 0.833, 0.000, - 0.000, 1.000, 0.000, - 0.000, 0.000, 0.167, - 0.000, 0.000, 0.333, - 0.000, 0.000, 0.500, - 0.000, 0.000, 0.667, - 0.000, 0.000, 0.833, - 0.000, 0.000, 1.000, - 0.000, 0.000, 0.000, - 0.143, 0.143, 0.143, - 0.857, 0.857, 0.857, - 1.000, 1.000, 1.000 - ] -).astype(np.float32).reshape(-1, 3) -# fmt: on - - -def colormap(rgb=False, maximum=255): - """ - Args: - rgb (bool): whether to return RGB colors or BGR colors. - maximum (int): either 255 or 1 - - Returns: - ndarray: a float32 array of Nx3 colors, in range [0, 255] or [0, 1] - """ - assert maximum in [255, 1], maximum - c = _COLORS * maximum - if not rgb: - c = c[:, ::-1] - return c - - -def random_color(rgb=False, maximum=255): - """ - Args: - rgb (bool): whether to return RGB colors or BGR colors. - maximum (int): either 255 or 1 - - Returns: - ndarray: a vector of 3 numbers - """ - idx = np.random.randint(0, len(_COLORS)) - ret = _COLORS[idx] * maximum - if not rgb: - ret = ret[::-1] - return ret - - -def random_colors(N, rgb=False, maximum=255): - """ - Args: - N (int): number of unique colors needed - rgb (bool): whether to return RGB colors or BGR colors. 
- maximum (int): either 255 or 1 - - Returns: - ndarray: a list of random_color - """ - indices = random.sample(range(len(_COLORS)), N) - ret = [_COLORS[i] * maximum for i in indices] - if not rgb: - ret = [x[::-1] for x in ret] - return ret - - -if __name__ == "__main__": - import cv2 - - size = 100 - H, W = 10, 10 - canvas = np.random.rand(H * size, W * size, 3).astype("float32") - for h in range(H): - for w in range(W): - idx = h * W + w - if idx >= len(_COLORS): - break - canvas[h * size : (h + 1) * size, w * size : (w + 1) * size] = _COLORS[idx] - cv2.imshow("a", canvas) - cv2.waitKey(0) diff --git a/spaces/carlostoxtli/ace/style.css b/spaces/carlostoxtli/ace/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/carlostoxtli/ace/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/cenji1109285052/img-to-music/constants.py b/spaces/cenji1109285052/img-to-music/constants.py deleted file mode 100644 index 86863d1b778d4c66f0d8e1e0b699f1bb937c1d50..0000000000000000000000000000000000000000 --- a/spaces/cenji1109285052/img-to-music/constants.py +++ /dev/null @@ -1,9 +0,0 @@ -import numpy as np -import os - -MUBERT_LICENSE = os.environ.get('MUBERT_LICENSE') -MUBERT_TOKEN = os.environ.get('MUBERT_TOKEN') - -MUBERT_MODE = "loop" -MUBERT_TAGS_STRING = 'tribal,action,kids,neo-classic,run 130,pumped,jazz / funk,ethnic,dubtechno,reggae,acid jazz,liquidfunk,funk,witch house,tech house,underground,artists,mystical,disco,sensorium,r&b,agender,psychedelic trance / psytrance,peaceful,run 140,piano,run 160,setting,meditation,christmas,ambient,horror,cinematic,electro house,idm,bass,minimal,underscore,drums,glitchy,beautiful,technology,tribal house,country pop,jazz & funk,documentary,space,classical,valentines,chillstep,experimental,trap,new jack swing,drama,post-rock,tense,corporate,neutral,happy,analog,funky,spiritual,sberzvuk special,chill hop,dramatic,catchy,holidays,fitness 90,optimistic,orchestra,acid techno,energizing,romantic,minimal house,breaks,hyper pop,warm up,dreamy,dark,urban,microfunk,dub,nu disco,vogue,keys,hardcore,aggressive,indie,electro funk,beauty,relaxing,trance,pop,hiphop,soft,acoustic,chillrave / ethno-house,deep techno,angry,dance,fun,dubstep,tropical,latin pop,heroic,world music,inspirational,uplifting,atmosphere,art,epic,advertising,chillout,scary,spooky,slow ballad,saxophone,summer,erotic,jazzy,energy 100,kara mar,xmas,atmospheric,indie pop,hip-hop,yoga,reggaeton,lounge,travel,running,folk,chillrave & ethno-house,detective,darkambient,chill,fantasy,minimal techno,special,night,tropical house,downtempo,lullaby,meditative,upbeat,glitch hop,fitness,neurofunk,sexual,indie rock,future pop,jazz,cyberpunk,melancholic,happy hardcore,family / kids,synths,electric guitar,comedy,psychedelic trance & psytrance,edm,psychedelic rock,calm,zen,bells,podcast,melodic house,ethnic percussion,nature,heavy,bassline,indie dance,techno,drumnbass,synth pop,vaporwave,sad,8-bit,chillgressive,deep,orchestral,futuristic,hardtechno,nostalgic,big room,sci-fi,tutorial,joyful,pads,minimal 170,drill,ethnic 
108,amusing,sleepy ambient,psychill,italo disco,lofi,house,acoustic guitar,bassline house,rock,k-pop,synthwave,deep house,electronica,gabber,nightlife,sport & fitness,road trip,celebration,electro,disco house,electronic' -MUBERT_TAGS = np.array(MUBERT_TAGS_STRING.split(',')) \ No newline at end of file diff --git a/spaces/chasemcdo/hf_localai/examples/localai-webui/README.md b/spaces/chasemcdo/hf_localai/examples/localai-webui/README.md deleted file mode 100644 index 8e36f40a25c425e2d9409baf6015f843e13bb10d..0000000000000000000000000000000000000000 --- a/spaces/chasemcdo/hf_localai/examples/localai-webui/README.md +++ /dev/null @@ -1,26 +0,0 @@ -# localai-webui - -Example of integration with [dhruvgera/localai-frontend](https://github.com/Dhruvgera/LocalAI-frontend). - -![image](https://user-images.githubusercontent.com/42107491/235344183-44b5967d-ba22-4331-804c-8da7004a5d35.png) - -## Setup - -```bash -# Clone LocalAI -git clone https://github.com/go-skynet/LocalAI - -cd LocalAI/examples/localai-webui - -# (optional) Checkout a specific LocalAI tag -# git checkout -b build - -# Download any desired models to models/ in the parent LocalAI project dir -# For example: wget https://gpt4all.io/models/ggml-gpt4all-j.bin - -# start with docker-compose -docker-compose up -d --build -``` - -Open http://localhost:3000 for the Web UI. - diff --git a/spaces/chendl/compositional_test/transformers/examples/pytorch/language-modeling/run_clm_no_trainer.py b/spaces/chendl/compositional_test/transformers/examples/pytorch/language-modeling/run_clm_no_trainer.py deleted file mode 100644 index 3e09215457b0ba7816ff9b2bb1cc3344e913f536..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/pytorch/language-modeling/run_clm_no_trainer.py +++ /dev/null @@ -1,685 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2021 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Fine-tuning the library models for causal language modeling (GPT, GPT-2, CTRL, ...) -on a text file or a dataset without using HuggingFace Trainer. - -Here is the full list of checkpoints on the hub that can be fine-tuned by this script: -https://huggingface.co/models?filter=text-generation -""" -# You can also adapt this script on your own causal language modeling task. Pointers for this are left as comments. 
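-#
-# A typical invocation looks like the following (the model and dataset names here are
-# illustrative examples, not requirements of the script):
-#
-#   python run_clm_no_trainer.py \
-#       --model_name_or_path gpt2 \
-#       --dataset_name wikitext \
-#       --dataset_config_name wikitext-2-raw-v1 \
-#       --output_dir /tmp/test-clm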
- -import argparse -import json -import logging -import math -import os -import random -from itertools import chain -from pathlib import Path - -import datasets -import torch -from accelerate import Accelerator, DistributedType -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from datasets import load_dataset -from huggingface_hub import Repository, create_repo -from torch.utils.data import DataLoader -from tqdm.auto import tqdm - -import transformers -from transformers import ( - CONFIG_MAPPING, - MODEL_MAPPING, - AutoConfig, - AutoModelForCausalLM, - AutoTokenizer, - SchedulerType, - default_data_collator, - get_scheduler, -) -from transformers.utils import check_min_version, get_full_repo_name, send_example_telemetry -from transformers.utils.versions import require_version - - -# Will error if the minimal version of Transformers is not installed. Remove at your own risks. -check_min_version("4.28.0") - -logger = get_logger(__name__) - -require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt") - -MODEL_CONFIG_CLASSES = list(MODEL_MAPPING.keys()) -MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Finetune a transformers model on a causal language modeling task") - parser.add_argument( - "--dataset_name", - type=str, - default=None, - help="The name of the dataset to use (via the datasets library).", - ) - parser.add_argument( - "--dataset_config_name", - type=str, - default=None, - help="The configuration name of the dataset to use (via the datasets library).", - ) - parser.add_argument( - "--train_file", type=str, default=None, help="A csv or a json file containing the training data." - ) - parser.add_argument( - "--validation_file", type=str, default=None, help="A csv or a json file containing the validation data." - ) - parser.add_argument( - "--validation_split_percentage", - default=5, - help="The percentage of the train set used as validation set in case there's no validation split", - ) - parser.add_argument( - "--model_name_or_path", - type=str, - help="Path to pretrained model or model identifier from huggingface.co/models.", - required=False, - ) - parser.add_argument( - "--config_name", - type=str, - default=None, - help="Pretrained config name or path if not the same as model_name", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--use_slow_tokenizer", - action="store_true", - help="If passed, will use a slow tokenizer (not backed by the 🤗 Tokenizers library).", - ) - parser.add_argument( - "--per_device_train_batch_size", - type=int, - default=8, - help="Batch size (per device) for the training dataloader.", - ) - parser.add_argument( - "--per_device_eval_batch_size", - type=int, - default=8, - help="Batch size (per device) for the evaluation dataloader.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-5, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument("--weight_decay", type=float, default=0.0, help="Weight decay to use.") - parser.add_argument("--num_train_epochs", type=int, default=3, help="Total number of training epochs to perform.") - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. 
If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--lr_scheduler_type", - type=SchedulerType, - default="linear", - help="The scheduler type to use.", - choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"], - ) - parser.add_argument( - "--num_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument("--output_dir", type=str, default=None, help="Where to store the final model.") - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--model_type", - type=str, - default=None, - help="Model type to use if training from scratch.", - choices=MODEL_TYPES, - ) - parser.add_argument( - "--block_size", - type=int, - default=None, - help=( - "Optional input sequence length after tokenization. The training dataset will be truncated in block of" - " this size for training. Default to the model max input length for single sentence inputs (take into" - " account special tokens)." - ), - ) - parser.add_argument( - "--preprocessing_num_workers", - type=int, - default=None, - help="The number of processes to use for the preprocessing.", - ) - parser.add_argument( - "--overwrite_cache", action="store_true", help="Overwrite the cached training and evaluation sets" - ) - parser.add_argument( - "--no_keep_linebreaks", action="store_true", help="Do not keep line breaks when using TXT files." - ) - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument( - "--hub_model_id", type=str, help="The name of the repository to keep in sync with the local `output_dir`." - ) - parser.add_argument("--hub_token", type=str, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--checkpointing_steps", - type=str, - default=None, - help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.", - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help="If the training should continue from a checkpoint folder.", - ) - parser.add_argument( - "--with_tracking", - action="store_true", - help="Whether to enable experiment trackers for logging.", - ) - parser.add_argument( - "--report_to", - type=str, - default="all", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`,' - ' `"wandb"`, `"comet_ml"` and `"clearml"`. Use `"all"` (default) to report to all integrations.' - "Only applicable when `--with_tracking` is passed." - ), - ) - parser.add_argument( - "--low_cpu_mem_usage", - action="store_true", - help=( - "It is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded." - "If passed, LLM loading time and RAM consumption will be benefited." - ), - ) - args = parser.parse_args() - - # Sanity checks - if args.dataset_name is None and args.train_file is None and args.validation_file is None: - raise ValueError("Need either a dataset name or a training/validation file.") - else: - if args.train_file is not None: - extension = args.train_file.split(".")[-1] - assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, json or txt file." 
- if args.validation_file is not None: - extension = args.validation_file.split(".")[-1] - assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, json or txt file." - - if args.push_to_hub: - assert args.output_dir is not None, "Need an `output_dir` to create a repo when `--push_to_hub` is passed." - - return args - - -def main(): - args = parse_args() - - # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The - # information sent is the one passed as arguments along with your Python/PyTorch versions. - send_example_telemetry("run_clm_no_trainer", args) - - # Initialize the accelerator. We will let the accelerator handle device placement for us in this example. - # If we're using tracking, we also need to initialize it here and it will by default pick up all supported trackers - # in the environment - accelerator_log_kwargs = {} - - if args.with_tracking: - accelerator_log_kwargs["log_with"] = args.report_to - accelerator_log_kwargs["logging_dir"] = args.output_dir - - accelerator = Accelerator(gradient_accumulation_steps=args.gradient_accumulation_steps, **accelerator_log_kwargs) - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - create_repo(repo_name, exist_ok=True, token=args.hub_token) - repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - accelerator.wait_for_everyone() - - # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below) - # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/ - # (the dataset will be downloaded automatically from the datasets Hub). - # - # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called - # 'text' is found. You can easily tweak this behavior (see below). - # - # In distributed training, the load_dataset function guarantee that only one local process can concurrently - # download the dataset. - if args.dataset_name is not None: - # Downloading and loading a dataset from the hub. 
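-        # For example, load_dataset("wikitext", "wikitext-2-raw-v1") yields a DatasetDict
-        # keyed by split; the slicing below only kicks in when the hub dataset ships no
-        # "validation" split. (The dataset name here is illustrative.)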
- raw_datasets = load_dataset(args.dataset_name, args.dataset_config_name) - if "validation" not in raw_datasets.keys(): - raw_datasets["validation"] = load_dataset( - args.dataset_name, - args.dataset_config_name, - split=f"train[:{args.validation_split_percentage}%]", - ) - raw_datasets["train"] = load_dataset( - args.dataset_name, - args.dataset_config_name, - split=f"train[{args.validation_split_percentage}%:]", - ) - else: - data_files = {} - dataset_args = {} - if args.train_file is not None: - data_files["train"] = args.train_file - if args.validation_file is not None: - data_files["validation"] = args.validation_file - extension = args.train_file.split(".")[-1] - if extension == "txt": - extension = "text" - dataset_args["keep_linebreaks"] = not args.no_keep_linebreaks - raw_datasets = load_dataset(extension, data_files=data_files, **dataset_args) - # If no validation data is there, validation_split_percentage will be used to divide the dataset. - if "validation" not in raw_datasets.keys(): - raw_datasets["validation"] = load_dataset( - extension, - data_files=data_files, - split=f"train[:{args.validation_split_percentage}%]", - **dataset_args, - ) - raw_datasets["train"] = load_dataset( - extension, - data_files=data_files, - split=f"train[{args.validation_split_percentage}%:]", - **dataset_args, - ) - - # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at - # https://huggingface.co/docs/datasets/loading_datasets.html. - - # Load pretrained model and tokenizer - # - # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently - # download model & vocab. - if args.config_name: - config = AutoConfig.from_pretrained(args.config_name) - elif args.model_name_or_path: - config = AutoConfig.from_pretrained(args.model_name_or_path) - else: - config = CONFIG_MAPPING[args.model_type]() - logger.warning("You are instantiating a new config instance from scratch.") - - if args.tokenizer_name: - tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, use_fast=not args.use_slow_tokenizer) - elif args.model_name_or_path: - tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path, use_fast=not args.use_slow_tokenizer) - else: - raise ValueError( - "You are instantiating a new tokenizer from scratch. This is not supported by this script." - "You can do it from another script, save it, and load it from here, using --tokenizer_name." - ) - - if args.model_name_or_path: - model = AutoModelForCausalLM.from_pretrained( - args.model_name_or_path, - from_tf=bool(".ckpt" in args.model_name_or_path), - config=config, - low_cpu_mem_usage=args.low_cpu_mem_usage, - ) - else: - logger.info("Training new model from scratch") - model = AutoModelForCausalLM.from_config(config) - - # We resize the embeddings only when necessary to avoid index errors. If you are creating a model from scratch - # on a small vocab and want a smaller embedding size, remove this test. - embedding_size = model.get_input_embeddings().weight.shape[0] - if len(tokenizer) > embedding_size: - model.resize_token_embeddings(len(tokenizer)) - - # Preprocessing the datasets. - # First we tokenize all the texts. 
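-    # tokenize_function below maps each text column entry to token ids, schematically
-    # tokenizer("...") -> {"input_ids": [...], "attention_mask": [...]}; chunking into
-    # block_size windows is deferred to group_texts further down.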
- column_names = raw_datasets["train"].column_names - text_column_name = "text" if "text" in column_names else column_names[0] - - def tokenize_function(examples): - return tokenizer(examples[text_column_name]) - - with accelerator.main_process_first(): - tokenized_datasets = raw_datasets.map( - tokenize_function, - batched=True, - num_proc=args.preprocessing_num_workers, - remove_columns=column_names, - load_from_cache_file=not args.overwrite_cache, - desc="Running tokenizer on dataset", - ) - - if args.block_size is None: - block_size = tokenizer.model_max_length - if block_size > 1024: - logger.warning( - "The chosen tokenizer supports a `model_max_length` that is longer than the default `block_size` value" - " of 1024. If you would like to use a longer `block_size` up to `tokenizer.model_max_length` you can" - " override this default with `--block_size xxx`." - ) - block_size = 1024 - else: - if args.block_size > tokenizer.model_max_length: - logger.warning( - f"The block_size passed ({args.block_size}) is larger than the maximum length for the model" - f"({tokenizer.model_max_length}). Using block_size={tokenizer.model_max_length}." - ) - block_size = min(args.block_size, tokenizer.model_max_length) - - # Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size. - def group_texts(examples): - # Concatenate all texts. - concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()} - total_length = len(concatenated_examples[list(examples.keys())[0]]) - # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can - # customize this part to your needs. - if total_length >= block_size: - total_length = (total_length // block_size) * block_size - # Split by chunks of max_len. - result = { - k: [t[i : i + block_size] for i in range(0, total_length, block_size)] - for k, t in concatenated_examples.items() - } - result["labels"] = result["input_ids"].copy() - return result - - # Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a remainder - # for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value might be slower - # to preprocess. - # - # To speed up this part, we use multiprocessing. See the documentation of the map method for more information: - # https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map - - with accelerator.main_process_first(): - lm_datasets = tokenized_datasets.map( - group_texts, - batched=True, - num_proc=args.preprocessing_num_workers, - load_from_cache_file=not args.overwrite_cache, - desc=f"Grouping texts in chunks of {block_size}", - ) - - train_dataset = lm_datasets["train"] - eval_dataset = lm_datasets["validation"] - - # Log a few random samples from the training set: - for index in random.sample(range(len(train_dataset)), 3): - logger.info(f"Sample {index} of the training set: {train_dataset[index]}.") - - # DataLoaders creation: - train_dataloader = DataLoader( - train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=args.per_device_train_batch_size - ) - eval_dataloader = DataLoader( - eval_dataset, collate_fn=default_data_collator, batch_size=args.per_device_eval_batch_size - ) - - # Optimizer - # Split weights in two groups, one with weight decay and the other not. 
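-    # Decaying biases and LayerNorm weights is generally counterproductive; they are
-    # low-dimensional and not a meaningful source of overfitting, so they get 0.0 below.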
- no_decay = ["bias", "layer_norm.weight"] - optimizer_grouped_parameters = [ - { - "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], - "weight_decay": args.weight_decay, - }, - { - "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], - "weight_decay": 0.0, - }, - ] - optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=args.learning_rate) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - name=args.lr_scheduler_type, - optimizer=optimizer, - num_warmup_steps=args.num_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - # Prepare everything with our `accelerator`. - model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare( - model, optimizer, train_dataloader, eval_dataloader, lr_scheduler - ) - - # On TPU, the tie weights in our model have been disconnected, so we need to restore the ties. - if accelerator.distributed_type == DistributedType.TPU: - model.tie_weights() - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # Figure out how many steps we should save the Accelerator states - checkpointing_steps = args.checkpointing_steps - if checkpointing_steps is not None and checkpointing_steps.isdigit(): - checkpointing_steps = int(checkpointing_steps) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if args.with_tracking: - experiment_config = vars(args) - # TensorBoard cannot log Enums, need the raw value - experiment_config["lr_scheduler_type"] = experiment_config["lr_scheduler_type"].value - accelerator.init_trackers("clm_no_trainer", experiment_config) - - # Train! - total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - # Only show the progress bar once on each machine. 
-    progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
-    completed_steps = 0
-    starting_epoch = 0
-
-    # Potentially load in the weights and states from a previous save
-    if args.resume_from_checkpoint:
-        if args.resume_from_checkpoint is not None and args.resume_from_checkpoint != "":
-            accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}")
-            accelerator.load_state(args.resume_from_checkpoint)
-            path = os.path.basename(args.resume_from_checkpoint)
-        else:
-            # Get the most recent checkpoint
-            dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()]
-            dirs.sort(key=os.path.getctime)
-            path = dirs[-1]  # Sorts folders by date modified, most recent checkpoint is the last
-        # Extract `epoch_{i}` or `step_{i}`
-        training_difference = os.path.splitext(path)[0]
-
-        if "epoch" in training_difference:
-            starting_epoch = int(training_difference.replace("epoch_", "")) + 1
-            resume_step = None
-        else:
-            # need to multiply `gradient_accumulation_steps` to reflect real steps
-            resume_step = int(training_difference.replace("step_", "")) * args.gradient_accumulation_steps
-            starting_epoch = resume_step // len(train_dataloader)
-            resume_step -= starting_epoch * len(train_dataloader)
-
-        # update the progress_bar when loading from a checkpoint
-        progress_bar.update(starting_epoch * num_update_steps_per_epoch)
-        completed_steps = starting_epoch * num_update_steps_per_epoch
-
-    for epoch in range(starting_epoch, args.num_train_epochs):
-        model.train()
-        if args.with_tracking:
-            total_loss = 0
-        for step, batch in enumerate(train_dataloader):
-            # We need to skip steps until we reach the resumed step
-            if args.resume_from_checkpoint and epoch == starting_epoch:
-                if resume_step is not None and step < resume_step:
-                    if step % args.gradient_accumulation_steps == 0:
-                        progress_bar.update(1)
-                        completed_steps += 1
-                    continue
-
-            with accelerator.accumulate(model):
-                outputs = model(**batch)
-                loss = outputs.loss
-                # We keep track of the loss at each epoch
-                if args.with_tracking:
-                    total_loss += loss.detach().float()
-                accelerator.backward(loss)
-                optimizer.step()
-                lr_scheduler.step()
-                optimizer.zero_grad()
-
-            # Checks if the accelerator has performed an optimization step behind the scenes
-            if accelerator.sync_gradients:
-                progress_bar.update(1)
-                completed_steps += 1
-
-            if isinstance(checkpointing_steps, int):
-                if completed_steps % checkpointing_steps == 0:
-                    output_dir = f"step_{completed_steps}"
-                    if args.output_dir is not None:
-                        output_dir = os.path.join(args.output_dir, output_dir)
-                    accelerator.save_state(output_dir)
-            if completed_steps >= args.max_train_steps:
-                break
-
-        model.eval()
-        losses = []
-        for step, batch in enumerate(eval_dataloader):
-            with torch.no_grad():
-                outputs = model(**batch)
-
-            loss = outputs.loss
-            losses.append(accelerator.gather_for_metrics(loss.repeat(args.per_device_eval_batch_size)))
-
-        losses = torch.cat(losses)
-        try:
-            eval_loss = torch.mean(losses)
-            perplexity = math.exp(eval_loss)
-        except OverflowError:
-            perplexity = float("inf")
-
-        logger.info(f"epoch {epoch}: perplexity: {perplexity} eval_loss: {eval_loss}")
-
-        if args.with_tracking:
-            accelerator.log(
-                {
-                    "perplexity": perplexity,
-                    "eval_loss": eval_loss,
-                    "train_loss": total_loss.item() / len(train_dataloader),
-                    "epoch": epoch,
-                    "step": completed_steps,
-                },
-                step=completed_steps,
-            )
-
-        if args.push_to_hub and epoch < args.num_train_epochs - 1:
-            accelerator.wait_for_everyone()
-            unwrapped_model = accelerator.unwrap_model(model)
-            unwrapped_model.save_pretrained(
-                args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
-            )
-            if accelerator.is_main_process:
-                tokenizer.save_pretrained(args.output_dir)
-                repo.push_to_hub(
-                    commit_message=f"Training in progress epoch {epoch}", blocking=False, auto_lfs_prune=True
-                )
-
-        if args.checkpointing_steps == "epoch":
-            output_dir = f"epoch_{epoch}"
-            if args.output_dir is not None:
-                output_dir = os.path.join(args.output_dir, output_dir)
-            accelerator.save_state(output_dir)
-
-    if args.with_tracking:
-        accelerator.end_training()
-
-    if args.output_dir is not None:
-        accelerator.wait_for_everyone()
-        unwrapped_model = accelerator.unwrap_model(model)
-        unwrapped_model.save_pretrained(
-            args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
-        )
-        if accelerator.is_main_process:
-            tokenizer.save_pretrained(args.output_dir)
-            if args.push_to_hub:
-                repo.push_to_hub(commit_message="End of training", auto_lfs_prune=True)
-
-            with open(os.path.join(args.output_dir, "all_results.json"), "w") as f:
-                json.dump({"perplexity": perplexity}, f)
-
-
-if __name__ == "__main__":
-    main()
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/adversarial/README.md b/spaces/chendl/compositional_test/transformers/examples/research_projects/adversarial/README.md
deleted file mode 100644
index 3e331a05f4534067ca371ab44832ef4a86dc67f4..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/adversarial/README.md
+++ /dev/null
@@ -1,38 +0,0 @@
-## Adversarial evaluation of model performance
-
-Here is an example of evaluating a model using adversarial evaluation of natural language inference with the Heuristic Analysis for NLI Systems (HANS) dataset [McCoy et al., 2019](https://arxiv.org/abs/1902.01007). The example was graciously provided by [Nafise Sadat Moosavi](https://github.com/ns-moosavi).
-
-The HANS dataset can be downloaded from [this location](https://github.com/tommccoy1/hans).
-
-This is an example of using run_hans.py:
-
-```bash
-export HANS_DIR=path-to-hans
-export MODEL_TYPE=type-of-the-model-e.g.-bert-roberta-xlnet-etc
-export MODEL_PATH=path-to-the-model-directory-that-is-trained-on-NLI-e.g.-by-using-run_glue.py
-
-python run_hans.py \
-        --task_name hans \
-        --model_type $MODEL_TYPE \
-        --do_eval \
-        --data_dir $HANS_DIR \
-        --model_name_or_path $MODEL_PATH \
-        --max_seq_length 128 \
-        --output_dir $MODEL_PATH
-```
-
-This will create the hans_predictions.txt file in MODEL_PATH, which can then be evaluated using hans/evaluate_heur_output.py from the HANS dataset.
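-
-For example (an illustrative invocation; check the HANS repository for the script's exact interface):
-
-```bash
-python $HANS_DIR/evaluate_heur_output.py $MODEL_PATH/hans_predictions.txt
-```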
-
-The results of a BERT-base model trained on MNLI with batch size 8 and random seed 42, evaluated on the HANS dataset, are as follows:
-
-```bash
-Heuristic entailed results:
-lexical_overlap: 0.9702
-subsequence: 0.9942
-constituent: 0.9962
-
-Heuristic non-entailed results:
-lexical_overlap: 0.199
-subsequence: 0.0396
-constituent: 0.118
-```
diff --git a/spaces/chenman/Meina-MeinaMix/README.md b/spaces/chenman/Meina-MeinaMix/README.md
deleted file mode 100644
index a151cff5f5f23422ae5cab26a627100f330375ae..0000000000000000000000000000000000000000
--- a/spaces/chenman/Meina-MeinaMix/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Meina MeinaMix
-emoji: 👁
-colorFrom: purple
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: creativeml-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/chrisjay/simple-mnist-classification/app.py b/spaces/chrisjay/simple-mnist-classification/app.py
deleted file mode 100644
index 1233f13f3d6a1742a593a161314281085e09fa8d..0000000000000000000000000000000000000000
--- a/spaces/chrisjay/simple-mnist-classification/app.py
+++ /dev/null
@@ -1,196 +0,0 @@
-import os
-import torch
-import gradio as gr
-import torchvision
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.optim as optim
-
-# This is just to show an interface where one draws a number and gets a prediction.
-
-n_epochs = 10
-batch_size_train = 128
-batch_size_test = 1000
-learning_rate = 0.01
-momentum = 0.5
-log_interval = 10
-random_seed = 1
-TRAIN_CUTOFF = 10
-MODEL_PATH = 'weights'
-os.makedirs(MODEL_PATH, exist_ok=True)
-METRIC_PATH = os.path.join(MODEL_PATH, 'metrics.json')
-MODEL_WEIGHTS_PATH = os.path.join(MODEL_PATH, 'mnist_model.pth')
-OPTIMIZER_PATH = os.path.join(MODEL_PATH, 'optimizer.pth')
-REPOSITORY_DIR = "data"
-LOCAL_DIR = 'data_local'
-
-
-
-
-HF_TOKEN = os.getenv("HF_TOKEN")
-MODEL_REPO = 'mnist-adversarial-model'
-HF_DATASET = "mnist-adversarial-dataset"
-DATASET_REPO_URL = f"https://huggingface.co/datasets/chrisjay/{HF_DATASET}"
-MODEL_REPO_URL = f"https://huggingface.co/model/chrisjay/{MODEL_REPO}"
-
-
-torch.backends.cudnn.enabled = False
-torch.manual_seed(random_seed)
-
-
-
-TRAIN_TRANSFORM = torchvision.transforms.Compose([
-                               torchvision.transforms.ToTensor(),
-                               torchvision.transforms.Normalize(
-                                 (0.1307,), (0.3081,))
-                             ])
-
-
-
-# Source: https://nextjournal.com/gkoehler/pytorch-mnist
-class MNIST_Model(nn.Module):
-    def __init__(self):
-        super(MNIST_Model, self).__init__()
-        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
-        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
-        self.conv2_drop = nn.Dropout2d()
-        self.fc1 = nn.Linear(320, 50)
-        self.fc2 = nn.Linear(50, 10)
-
-    def forward(self, x):
-        x = F.relu(F.max_pool2d(self.conv1(x), 2))
-        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
-        x = x.view(-1, 320)
-        x = F.relu(self.fc1(x))
-        x = F.dropout(x, training=self.training)
-        x = self.fc2(x)
-        return F.log_softmax(x, dim=1)
-
-train_loader = torch.utils.data.DataLoader(
-    torchvision.datasets.MNIST('files/', train=True, download=True,
-                               transform=torchvision.transforms.Compose([
-                                 torchvision.transforms.ToTensor(),
-                                 torchvision.transforms.Normalize(
-                                   mean=(0.1307,), std=(0.3081,))
-                               ])),
-    batch_size=batch_size_train, shuffle=True)
-
-test_loader = torch.utils.data.DataLoader(
-    torchvision.datasets.MNIST('files/', train=False, download=True,
-                               transform=torchvision.transforms.Compose([
-                                 torchvision.transforms.ToTensor(),
-                                 torchvision.transforms.Normalize(
-                                   (0.1307,), (0.3081,))
-                               ])),
-    batch_size=batch_size_test, shuffle=True)
-
-def train(epoch, network, optimizer, train_loader):
-
-    train_losses = []
-    network.train()
-    for batch_idx, (data, target) in enumerate(train_loader):
-        optimizer.zero_grad()
-        output = network(data)
-        loss = F.nll_loss(output, target)
-        loss.backward()
-        optimizer.step()
-        if batch_idx % log_interval == 0:
-            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
-                epoch, batch_idx * len(data), len(train_loader.dataset),
-                100. * batch_idx / len(train_loader), loss.item()))
-            train_losses.append(loss.item())
-
-    torch.save(network.state_dict(), MODEL_WEIGHTS_PATH)
-    torch.save(optimizer.state_dict(), OPTIMIZER_PATH)
-
-def test():
-    test_losses = []
-    network.eval()
-    test_loss = 0
-    correct = 0
-    with torch.no_grad():
-        for data, target in test_loader:
-            output = network(data)
-            # sum the per-sample losses here; divided by the dataset size below
-            test_loss += F.nll_loss(output, target, reduction='sum').item()
-            pred = output.data.max(1, keepdim=True)[1]
-            correct += pred.eq(target.data.view_as(pred)).sum()
-    test_loss /= len(test_loader.dataset)
-    test_losses.append(test_loss)
-    acc = 100. * correct / len(test_loader.dataset)
-    acc = acc.item()
-    test_metric = '〽Current test metric -> Avg. loss: `{:.4f}`, Accuracy: `{:.0f}%`\n'.format(
-        test_loss, acc)
-    print(test_metric)
-    return test_metric, acc
-
-
-
-random_seed = 1
-torch.backends.cudnn.enabled = False
-torch.manual_seed(random_seed)
-
-network = MNIST_Model()  # Initialize the model with random weights
-optimizer = optim.SGD(network.parameters(), lr=learning_rate,
-                      momentum=momentum)
-
-
-model_state_dict = MODEL_WEIGHTS_PATH
-optimizer_state_dict = OPTIMIZER_PATH
-if os.path.exists(model_state_dict) and os.path.exists(optimizer_state_dict):
-    network_state_dict = torch.load(model_state_dict)
-    network.load_state_dict(network_state_dict)
-
-    optimizer_state_dict = torch.load(optimizer_state_dict)
-    optimizer.load_state_dict(optimizer_state_dict)
-# Train
-
-#for epoch in range(n_epochs):
-
-#    train(epoch, network, optimizer, train_loader)
-#    test()
-
-
-def image_classifier(inp):
-    """
-    It takes an image as input and returns a dictionary of class labels and their corresponding
-    confidence scores.
-
-    :param inp: the image to be classified
-    :return: A dictionary of the class index and the confidence value.
-    """
- """ - input_image = torchvision.transforms.ToTensor()(inp).unsqueeze(0) - with torch.no_grad(): - - prediction = torch.nn.functional.softmax(network(input_image)[0], dim=0) - #pred_number = prediction.data.max(1, keepdim=True)[1] - sorted_prediction = torch.sort(prediction,descending=True) - confidences={} - for s,v in zip(sorted_prediction.indices.numpy().tolist(),sorted_prediction.values.numpy().tolist()): - confidences.update({s:v}) - return confidences - - - - -def main(): - block = gr.Blocks() - - with block: - - with gr.Row(): - - - image_input =gr.inputs.Image(source="canvas",shape=(28,28),invert_colors=True,image_mode="L",type="pil") - label_output = gr.outputs.Label(num_top_classes=10) - - image_input.change(image_classifier,inputs = [image_input],outputs=[label_output]) - - - - block.launch() - - - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/christhegamechanger/background_swapping/libs.py b/spaces/christhegamechanger/background_swapping/libs.py deleted file mode 100644 index 0bcb0f9c02c2914c336bac33c3058d6577c73f85..0000000000000000000000000000000000000000 --- a/spaces/christhegamechanger/background_swapping/libs.py +++ /dev/null @@ -1,10 +0,0 @@ -import torch -import json -import os -import numpy as np -import cv2 -import tensorflow as tf -import streamlit as st -import time -from PIL import Image -from tensorflow.keras.utils import CustomObjectScope diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/httputil.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/httputil.py deleted file mode 100644 index 9bb8e26508aa675aa5639816db4c1dcb47d281e9..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/httputil.py +++ /dev/null @@ -1,226 +0,0 @@ -import atexit -import http -import logging -import multiprocessing -import os -import sys -import socket -import time -from typing import Dict, Any, Optional - -import certifi -import lz4.frame -import urllib3 -import zstandard -from urllib3.poolmanager import PoolManager, ProxyManager -from urllib3.response import HTTPResponse - -from clickhouse_connect.driver.exceptions import ProgrammingError -from clickhouse_connect import common - -logger = logging.getLogger(__name__) - -# We disable this warning. 
Verify must explicitly set to false, so we assume the user knows what they're doing -urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) - -# Increase this number just to be safe when ClickHouse is returning progress headers -http.client._MAXHEADERS = 10000 # pylint: disable=protected-access - -DEFAULT_KEEP_INTERVAL = 30 -DEFAULT_KEEP_COUNT = 3 -DEFAULT_KEEP_IDLE = 30 - -SOCKET_TCP = socket.IPPROTO_TCP - -core_socket_options = [ - (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1), - (SOCKET_TCP, socket.TCP_NODELAY, 1), - (socket.SOL_SOCKET, socket.SO_SNDBUF, 1024 * 256), - (socket.SOL_SOCKET, socket.SO_SNDBUF, 1024 * 256) -] - -logging.getLogger('urllib3').setLevel(logging.WARNING) -_proxy_managers = {} -all_managers = {} - - -@atexit.register -def close_managers(): - for manager in all_managers: - manager.clear() - - -# pylint: disable=no-member,too-many-arguments,too-many-branches -def get_pool_manager_options(keep_interval: int = DEFAULT_KEEP_INTERVAL, - keep_count: int = DEFAULT_KEEP_COUNT, - keep_idle: int = DEFAULT_KEEP_IDLE, - ca_cert: str = None, - verify: bool = True, - client_cert: str = None, - client_cert_key: str = None, - **options) -> Dict[str, Any]: - socket_options = core_socket_options.copy() - if getattr(socket, 'TCP_KEEPINTVL', None) is not None: - socket_options.append((SOCKET_TCP, socket.TCP_KEEPINTVL, keep_interval)) - if getattr(socket, 'TCP_KEEPCNT', None) is not None: - socket_options.append((SOCKET_TCP, socket.TCP_KEEPCNT, keep_count)) - if getattr(socket, 'TCP_KEEPIDLE', None) is not None: - socket_options.append((SOCKET_TCP, socket.TCP_KEEPIDLE, keep_idle)) - if sys.platform == 'darwin': - socket_options.append((SOCKET_TCP, getattr(socket, 'TCP_KEEPALIVE', 0x10), keep_interval)) - options['maxsize'] = options.get('maxsize', 8) - options['retries'] = options.get('retries', 1) - if ca_cert == 'certifi': - ca_cert = certifi.where() - options['cert_reqs'] = 'CERT_REQUIRED' if verify else 'CERT_NONE' - if ca_cert: - options['ca_certs'] = ca_cert - if client_cert: - options['cert_file'] = client_cert - if client_cert_key: - options['key_file'] = client_cert_key - options['socket_options'] = socket_options - options['block'] = options.get('block', False) - return options - - -def get_pool_manager(keep_interval: int = DEFAULT_KEEP_INTERVAL, - keep_count: int = DEFAULT_KEEP_COUNT, - keep_idle: int = DEFAULT_KEEP_IDLE, - ca_cert: str = None, - verify: bool = True, - client_cert: str = None, - client_cert_key: str = None, - http_proxy: str = None, - https_proxy: str = None, - **options): - options = get_pool_manager_options(keep_interval, - keep_count, - keep_idle, - ca_cert, - verify, - client_cert, - client_cert_key, - **options) - if http_proxy: - if https_proxy: - raise ProgrammingError('Only one of http_proxy or https_proxy should be specified') - if not http_proxy.startswith('http'): - http_proxy = f'http://{http_proxy}' - manager = ProxyManager(http_proxy, **options) - elif https_proxy: - if not https_proxy.startswith('http'): - https_proxy = f'https://{https_proxy}' - manager = ProxyManager(https_proxy, **options) - else: - manager = PoolManager(**options) - all_managers[manager] = int(time.time()) - return manager - - -def check_conn_reset(manager: PoolManager): - reset_seconds = common.get_setting('max_connection_age') - if reset_seconds: - last_reset = all_managers.get(manager, 0) - now = int(time.time()) - if last_reset < now - reset_seconds: - logger.debug('connection reset') - manager.clear() - all_managers[manager] = now - - -def 
get_proxy_manager(host: str, http_proxy): - key = f'{host}__{http_proxy}' - if key in _proxy_managers: - return _proxy_managers[key] - proxy_manager = get_pool_manager(http_proxy=http_proxy) - _proxy_managers[key] = proxy_manager - return proxy_manager - - -def get_response_data(response: HTTPResponse) -> bytes: - encoding = response.headers.get('content-encoding', None) - if encoding == 'zstd': - try: - zstd_decom = zstandard.ZstdDecompressor() - return zstd_decom.stream_reader(response.data).read() - except zstandard.ZstdError: - pass - if encoding == 'lz4': - lz4_decom = lz4.frame.LZ4FrameDecompressor() - return lz4_decom.decompress(response.data, len(response.data)) - return response.data - - -def check_env_proxy(scheme: str, host: str, port: int) -> Optional[str]: - env_var = f'{scheme}_proxy'.lower() - proxy = os.environ.get(env_var) - if not proxy: - proxy = os.environ.get(env_var.upper()) - if not proxy: - return None - no_proxy = os.environ.get('no_proxy') - if not no_proxy: - no_proxy = os.environ.get('NO_PROXY') - if not no_proxy: - return proxy - if no_proxy == '*': - return None # Wildcard no proxy means don't actually proxy anything - host = host.lower() - for name in no_proxy.split(','): - name = name.strip() - if name: - name = name.lstrip('.').lower() - if name in (host, f'{host}:{port}'): - return None # Host or host/port matches - if host.endswith('.' + name): - return None # Domain matches - return proxy - - -_default_pool_manager = get_pool_manager() - - -def default_pool_manager(): - if multiprocessing.current_process().name == 'MainProcess': - return _default_pool_manager - # PoolManagers don't seem to be safe for some multiprocessing environments, always return a new one - return get_pool_manager() - - -class ResponseSource: - def __init__(self, response: HTTPResponse, chunk_size: int = 1024 * 1024): - self.response = response - compression = response.headers.get('content-encoding') - if compression == 'zstd': - zstd_decom = zstandard.ZstdDecompressor().decompressobj() - - def decompress(): - while True: - chunk = response.read(chunk_size, decode_content=False) - if not chunk: - break - yield zstd_decom.decompress(chunk) - - self.gen = decompress() - elif compression == 'lz4': - lz4_decom = lz4.frame.LZ4FrameDecompressor() - - def decompress(): - while lz4_decom.needs_input: - data = self.response.read(chunk_size, decode_content=False) - if lz4_decom.unused_data: - data = lz4_decom.unused_data + data - if not data: - return - chunk = lz4_decom.decompress(data) - if chunk: - yield chunk - - self.gen = decompress() - else: - self.gen = response.stream(amt=chunk_size, decode_content=True) - - def close(self): - self.response.drain_conn() - self.response.close() diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/cu2qu/benchmark.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/cu2qu/benchmark.py deleted file mode 100644 index 2ab1e966b1745b868518f46087cc562e11026822..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/cu2qu/benchmark.py +++ /dev/null @@ -1,55 +0,0 @@ -"""Benchmark the cu2qu algorithm performance.""" - -from .cu2qu import * -import random -import timeit - -MAX_ERR = 0.05 - - -def generate_curve(): - return [ - tuple(float(random.randint(0, 2048)) for coord in range(2)) - for point in range(4) - ] - - -def setup_curve_to_quadratic(): - return generate_curve(), MAX_ERR - - -def 
setup_curves_to_quadratic(): - num_curves = 3 - return ([generate_curve() for curve in range(num_curves)], [MAX_ERR] * num_curves) - - -def run_benchmark(module, function, setup_suffix="", repeat=5, number=1000): - setup_func = "setup_" + function - if setup_suffix: - print("%s with %s:" % (function, setup_suffix), end="") - setup_func += "_" + setup_suffix - else: - print("%s:" % function, end="") - - def wrapper(function, setup_func): - function = globals()[function] - setup_func = globals()[setup_func] - - def wrapped(): - return function(*setup_func()) - - return wrapped - - results = timeit.repeat(wrapper(function, setup_func), repeat=repeat, number=number) - print("\t%5.1fus" % (min(results) * 1000000.0 / number)) - - -def main(): - """Benchmark the cu2qu algorithm performance.""" - run_benchmark("cu2qu", "curve_to_quadratic") - run_benchmark("cu2qu", "curves_to_quadratic") - - -if __name__ == "__main__": - random.seed(1) - main() diff --git a/spaces/cihyFjudo/fairness-paper-search/Download [UPD] Game Pc Dewasa.md b/spaces/cihyFjudo/fairness-paper-search/Download [UPD] Game Pc Dewasa.md deleted file mode 100644 index 123ab23f665d45788cf469adc392d3ad2b7b747b..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Download [UPD] Game Pc Dewasa.md +++ /dev/null @@ -1,46 +0,0 @@ - -

People love free Steam games, no doubt. But what many people hate is downloading so many parts and trying to install them on their own. This is why we are the only site that pre-installs every game for you. We have many categories like shooters, action, racing, simulators and even VR games! We strive to satisfy our users and ask for nothing in return. We revolutionized the downloading scene and will continue being your #1 site for free games.

-

download game pc dewasa


Download Zip === https://tinurli.com/2uwhWh



-

Game wik wik is the term for a genre of adult games that can be one of the most effective ways to kill boredom. Of course, adults-only games may only be played by those aged 18 and over.

-

The appeal of wik wik games lies in their unrestricted storylines, uncensored adult scenes, and the fact that quite a few of them are interactive. One example is Summertime Saga, which is regarded as a haram game in Indonesia.

-

College Brawl is an adventure game wrapped in adult elements, developed by Supercell. The game is not available on official platforms, as it is considered far too vulgar in its content.

-

-

Throughout the game, you will be served a variety of sexual animations. As a result, this adult Android game is strictly for those aged 18 and over. Interestingly, it also offers fighting gameplay.

-

Pocket Girl is a uniquely viral Android game, or app, developed by PFC Ventures around a life-simulation concept. Players interact with a beautiful female character who looks like a real human.

-

In this game, players are given the freedom to issue commands to the character, such as dancing, changing her outfit, or sweeping the floor. This unique interactive feature makes it more engaging than other life-simulation games.

-

So, does this game, also known as Poco Girl, count as a wik wik game too? Although players are free to give commands, the game still observes good moral values.

-

Jikage Rising is an adult simulation game set in the village of Konoha, featuring many characters from the Naruto anime series. It was made by a third-party developer, Smiling Dog.

-

In this game you are invited to live the life of a ninja in the village of Konoha. However, Jikage Rising MOD APK is not the fighting game that ninja- or action-themed titles usually deliver.

-

Camp With Mom APK is an 18+ adults-only adventure game about a mother and her child. It uses a narrative-interactive concept that does not require the player to be actively involved.

-

Kunoichi Trainer is an adult wik wik Android game set in the all-time favorite anime, "Naruto". Given its explicit Japanese-anime content, the Play Store removed it.

-

The reason is that RapePlay is a game that requires the player to sexually assault the three girls in it. Needless to say, anyone underage is strictly forbidden from playing this adult game.

-

7 Sins is an adult wik wik game in the life-simulation genre. As the player, you will be kept busy interacting with people, with interactions built around the seven deadly sins.

-

This game was made by an of-age Ganyu fan around a dating-game concept in which you can control the character Ganyu however you like. As the player, you are free to pick the storyline and let your imagination run without limits.

-

Rise of Eros is the newest adults-only Android game released by EROLABS, a game developer from China. It carries an 18+ label, so it may only be played by gamers who are of age.

-

This adult wik wik game tells a story of war and love between gods, goddesses, and humans. The war begins when Inase tries to bring his lover back to life through an ancient god named Eros.

-

Treasure of Nadia is a game whose signature gameplay is its treasure-hunting missions. Beyond that, players are left free to do all sorts of things in it.

-

That is because Treasure of Nadia uses a narrative-interactive concept that lets players decide the storyline for themselves. The game features sexy women and "hot scenes", which is why it can only be played by gamers aged 18 and over.

-

Evil Life is a simulation game whose gameplay and storyline are anything but monotonous. This wik wik game for adult gamers offers plenty of female characters in all sorts of professions, such as bartender, teacher, gym trainer, and more.

-

Lovecraft Locker is a simulation game with a "dating sim" concept, set in a school attended only by female students. The "main" task of the game is to build "romantic" relationships with the girls at the school.

-

However, Lovecraft Locker is "different" from other dating sims such as Summertime Saga. Here you play an octopus monster hiding inside a locker rather than a male character.

-

If you have ever played or heard of Tentacle Locker APK, Lovecraft Locker is a similar game, close to it in both gameplay and visuals. As the octopus monster, you have to "win the hearts" of the female students so that they step into the locker.

-

Haram games remain a hot topic to this day. All the more so since, back in 2019, the Aceh branch of the Indonesian Ulema Council (MUI) issued a fatwa on several games deemed haram to play.

-

So before you play the best Android or iPhone games from the official stores, it is worth looking over the list of MUI-declared haram games here. Jaka has also lined up several games that escaped the fatwa but are "forbidden" anyway because they contain pornographic elements. Curious? Check them out below!

-

The best wik wik games can be recognized by the content or storyline they offer. Usually, wik wik or adult games come with interesting stories that do not merely lean on vulgar visuals.

-

Based on the recommendations Jaka has gathered above, Summertime Saga can be the top pick for this kind of game. Compared with other wik wik games, Summertime Saga has a story much closer to everyday life.

-

Jaka's pick of the best wik wik game rests on the story, graphics, and the complete set of features found in Summertime Saga. On top of that, the game can be obtained from a safe, non-third-party source even though it is not available on Google Play or the App Store.

-

Those are the best wik wik games of 2023 that you can play for fresh entertainment with engaging storylines. Each of these titles comes with its own unique and varied gameplay.

-

Keep in mind that the games above may only be played by those who are at least 18 years old. The adult or vulgar content they contain is not for just anyone!

-

House Party is a highly popular 3D comedy dating game for Windows, developed and published by the American studio Eek! Games. The game sold about 30,000 copies in the first week of its launch and reached 300,000 total sales over its first year.

-

The game is fun and simple to play. However, it is not suitable for kids, as it contains strong sexual elements. If you are interested in knowing more about this game, give this article a read.

-

House Party is a 3D comedy adventure dating sim developed and published by Eek! Games. The game was launched in 2017 and quickly became popular among players. However, it is only available on the Windows platform.

-

Right after the launch, the game faced many controversies. A year after its release, the makers were forced to take it down from Steam following many complaints about the game's content.

-

At present, the game is available in two different versions. The first is the base game, available on Steam, in which all the explicit content is hidden behind black censor bars. The other is the uncensored Adult version.

-

This is an interesting dating game in which the player character is invited to a house party, where the majority of the gameplay takes place. You are given 25 story opportunities to choose from, and the game progresses based on the option you select. Be warned, though, that many of the quests negatively affect other characters. You also cannot complete every opportunity, as many of the decisions contradict one another.

-

A game without good gameplay is like a body without a soul. If you have been searching for an addictive dating sim, your search ends here. House Party is one of the best dating games you will find out there.

-

The main catch of the game is that you cannot form a romantic relationship with all the characters. However, you can interact with all of them. You are also required to complete different quests for them. The game offers you 25 different story options to choose from.

-

Be reminded that the game has strong sexual elements and themes. Streaming such content on Twitch or any other platform can get you blocked; in that case, you should play the censored version of the game.

-

House Party is an interesting dating game best suited to adults. If you are looking for something different, you should try it out. For those who are yet to play it, here is a brief overview of the features you will get in the game.

-

HentaiGamer has a large collection of sex games, porn games and adult games which you can download and play for free. We do not host any files on our servers; hentaigamer.org contains only links to other sites. If you have any legal issues, please contact the appropriate media file owners or host sites.

-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Khallas 3 the movie eng sub download The hidden gem of Indian cinema.md b/spaces/cihyFjudo/fairness-paper-search/Khallas 3 the movie eng sub download The hidden gem of Indian cinema.md deleted file mode 100644 index 6b6ffc5618aa6f0c769e750577182caaac44fe2d..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Khallas 3 the movie eng sub download The hidden gem of Indian cinema.md +++ /dev/null @@ -1,24 +0,0 @@ -
-

Koppikar was born in Mahim, Bombay (now Mumbai), into a Konkani family.[1] She has one younger brother. She graduated in Life Sciences from Ramnarain Ruia College in Mumbai. While at college, she appeared in a photoshoot for the Indian photographer Gautam Rajadhyaksha. The shoot led to work in advertising as a model, notably for L'Oréal, Rexona, Camay, Tips & Toes and Coca-Cola. Koppikar competed in the 1995 Miss India contest, winning the Miss Talent Crown.[2] Her modelling work gave her an introduction to the film industry and to her first film appearance, in the Telugu movie W/o V. Vara Prasad in 1997.[2]

-

Khallas 3 the movie eng sub download


Download Zip https://tinurli.com/2uwj42



-

The 1998 Hindi film Ek Tha Dil Ek Thi Dhadkan, directed by Shahrukh Sultan, is often stated to be Koppikar's first film, but there is no evidence the project was ever released.[2] Her career must therefore be said to have begun with the 1997 Telugu film W/o V. Vara Prasad, in which she appeared in a song with actor Vineeth. Her first Tamil movie was Kadhal Kavidhai, co-starring Prashanth, for which she won the Filmfare Best Female Debut Award. Her next Tamil movie was En Swasa Kaatre (1998) opposite Arvind Swamy, directed by K. S. Ravi with music by A. R. Rahman, followed by a cameo appearance in Praveen Gandhi's Jodi, starring Prashanth and Simran. In 1999, Koppikar starred in the gangland movie Nenjinile, starring Vijay and directed by S. A. Chandrasekhar.[3]

-

After brief appearances in Bollywood's Fiza and Rahul, she came back to the south in 2001 for Sundar C's Tamil project Kaathal Solla Vanthen, but the movie never took off.[4] Koppikar's last two films down south were the Telugu comedy Prematho Raa, starring Venkatesh and Simran and directed by Uday Shankar, and Narasimha, starring Vijayakanth.[5]

-

In 2002, Koppikar appeared in an item number in Ram Gopal Verma's underworld movie Company, starring Ajay Devgan, Vivek Oberoi and Manisha Koirala. The chartbusting number, choreographed by Ganesh Hegde, earned her the title of Khallas Girl. Another notable item number, Ishq Samundar, in Sanjay Gupta's Reservoir Dogs remake Kaante, starring Amitabh Bachchan, Sanjay Dutt and Sunil Shetty, raised her profile further. She also won the Stardust Award for the Most Exciting New Face for her Khallas number.[7]

-

Koppikar acted in five films in 2003. In Dil Ka Rishta, she starred opposite Arjun Rampal and Aishwarya Rai. Prawaal Raman's portmanteau film Darna Mana Hai saw her opposite Aftab Shivdasani. Chandraprakash Dwivedi cast her alongside Urmila Matondkar and Manoj Bajpai in the critically acclaimed Pinjar, which went on to win the National Film Award for Best Feature Film on National Integration.[8] A brief role in J. P. Dutta's war movie LOC Kargil paired her opposite Sunil Shetty again. And in Harry Baweja's Qayamat: City Under Threat, she played one of three terrorists fighting off co-stars Ajay Devgan and Sunil Shetty, a role that earned her a Filmfare nomination in the Best Villain category.[9]

-

-

Masaan Movie 2012 Torrent 720p
GSG Transform Plugin FULL VERSION download
(pthc-pedo) Arina, Nelia and Nastia, Toys, les and pee
LC Technology Solid State Doctor 3.1.0.8 Keygen-TSZ free download
Snehithudu Vijay Full Movie Telugu 195
Rush In Dual Audio Eng Hindi
Advanced Post Types Order Nulled
xforce keygen 64-bit Inventor 2014 crack
Multilizer 2013 Pdf Translator Full Crack
Synthetik Studio Artist 4.05 Portable

-

baixar inazuma eleven strikers de ps2 iso
kunci jawaban buku pr intan pariwara geografi kelas x | updated
Batch-Add Windows Firewall Blacklist by Charles de Havilland setup free
Awara Paagal Deewana full movie in hindi download 720p movie
solidworks 2013 sp3 0 full multilanguage integrated x86 x64
XOXO Droplets Full Version Extension Free Download [full version]
soal matematika smp semester 1 kelas vii dan penyelesaian
dead or alive nude modgolkes
navifirm plus 2 7 cracked windshield
crack AutoCAD Mechanical 2018 crack

-

Socha Na Tha 720p movies
Virtual City 2 Paradise Resort - Full PreCracked - Foxy Games pc game
download Aa Gaye Munde U.K. De hd 720p full movie in hindi
KATYA SANTOS WET WILD KINKY COME SHAG ME 2004 DVDRIP SpEnSeR
Aermod View Crack.epub
imperial glory crack serial key
SNESSuperNintendoGamesCollection765ROMSSnes9x153rar
eJay Dance 6 Reloaded eng crack
Desktop Icon Toy v4.6 Keygen download
Crack Tipard Iphone To Pc Transfer

-

LEGO.Marvel.Super.Heroes.2.Update.v1.0.0.17365-CODEX version download
Download Program Kerja Uks Smp
Jak And Daxter Pc Game 14
Pengurusan Wad Dan Penjagaan Keselamatan Pesakit Pdf 18
b r automation studio download crack for gta
x hdl 4.2 5 crack
password.txt 1.4kb
MMA Love Never Dies 720p torrent
toontrack ezdrummer serial number keygen
durusul lughah gontor pdf free

-

the Guzaarish 720p movies
free activation key for tally erp 9.0 crack
building construction s p arora s p bindra pdf free download rar
moviestorm free download with crack
all episodes of beyblade season 1 cartoon in hindi
Petite Ella Model Agency Free Nn teen models Pics Forum Ls M
contracted movie download in hindi dubbed
a grande aventura 3o ano lingua portuguesa pdf download
Outsourced hd 720p movie download
AutoCAD 2013 Portable Cracked YHaz rarAutoCAD 2013 Portable Cracked YHaz rar

-

Sex clips gays hub, orgasm sex games...
Maker key download program free
Tulsa's Historic Greenwood District download book
small tits hairy bush
Harold Ashby
leesboek met veel sex
MU invite les 33 mineurs chiliens!
Robert Planel Trumpet Concerto Pdf Free
Azkend 2: The World Beneath Download For Pc [serial Number]l
Saloon A Plein Regime Nu

-

Product Key For Windows 7 Ultimate My Id 00426 Oem 9141204 13000
Nandhipurathu Nayagi Novel.pdf
Fce Use Of English 2 Student Book.82
gunday full songs hd 1080p blu ray
Atj2259c Usb Driver
ZONE OF THE ENDERS THE 2nd RUNNER : M RS : trainer download
Dead Rising 3 Apocalypse Edition crack activation code
Download Ppjoy Joystick Driver 0.8.4.6 -
Simatic s7 200 plc password crack
Lesson 5 Homework Practice Surface Area Of Pyramids Answers

-

tamil hd movies download 1080p Baba
shaun t t25 free download full workout
Garam Masala hindi 720p download
Chhota Bheem - Himalayan Adventure movie in tamil hd 1080p
descargar smaart live 7 full crack kid
singapore nursing board exam questions sample
B.O.B - The Adventures Of Bobby Ray [New Album].zip
jolly days kannada movie hd download
newstar jimmy tonik nude 58
badal movie songs hd 1080p blu ray torrent

-

Pixar Short Films Collection Vol.1 720p BluRay x264-DON
6 Temporada Bob Esponja 11
Gay Preteen Russian Flowers 2 Blue Orchid 2000 Boys 12 14 Yo Fuck
step up 2 full movie in hindi dubbed free download
Kelly Payne Collection-First Time Spankings 2
doraemon ringtone mp3 free download in hindi
crack optiflasher
mcdonalds bsm exam answers paper zip
renault javitasi kezikonyv 38
free pdf download kamasutra

-

Jab Tak Hai Jaan full movie hd 720p download
The Founding Of A Republic (2009) 1080p BluRay x264-Japhson
Sharp Wireless LAN adapter Wn8522b Driver
Naam Gum Jayega full movie download hd kickass
abacre restaurant point of sale crack
ayah perkosa anak kandung video porn xxx
eJay Dance 6 Reloaded eng crack
Esoft Ditec Online Exam Papers Free Download
derecho mercantil 7ma edicion pdf 14
300 spartan hd utorrent torrent

-

Integra payroll master 2009 crack
mathrubhumi malayalam calendar 1994 with stars
Adobe Photoshop Lightroom Classic CC 2018 v10.0.1.13 utorrent
leap office 2000 download free crack for windows
Fluenz Version F2 - French 1 [Cracked] 64 bit
fifa street 2012 password.rar
nikon total station dtm-322 software download
Vod 3967 71avi
download film Ishqedarriyaan sub indonesia movie
Visualsvn Server License Key 116

-

81865gme download to my
Device Driver Manager Debian
The Social Network Hindi Movie Free Download Mp4l
Orthographes Menant A La Pornographie
My Chemical Romance Guitar Signatures Shirt
Free full green screen software download
Xforce Keygen 32bits Or 64bits Version Point Layout 2012
Doraemon Cartoon In Hindi Movie Dailymotionl
Sexy Nue Jeunes Filles Dans Un Ranch
[ACTUALITE] Daedalic annonce Unrailed! le rogue-like des chemins de fer

-

Red Giant Trapcode Suite 15.1.2 For Adobe (Windows 64-bit) Serial Key Keygenl
Xforce Keygen 32bits Or 64bits Version Revit 2017 Activation
Parle-moi Cosplay 221 : PakuPaku Ru Cosplay
Car Maintenance 1.8.22 Free Download For Mac
HerunterladenVRED Presenter 2017 Activator 64 Bits
directory services client update windows 98 download
Neram Malayalam Full Movie Hd Download
couples making love sex
hq porn movies for free
Telugu Vyakaranam Pdf Free Downloadl

-

Adobe Audition 1 5 Download For Mac
free adult java movie clips downloads
Love sunflower American flag shirt
Non Woven Tape Market expected to grow at a CAGR of 7.46% with Industry Size, Share and Rising Trend till 2023
Free Audio Converter V.2.3.2 Build 815.epubl
Simgolf Crack
Iphone 5c model a1456 unlock
Download Buku Referensi Untuk Jurusan Teknik Industri
naked photos of katrina
Trust In Me J Lynn Pdf Descargar

-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Parinaam The Result Movie In Hindi Download 3gp The Story of a Mans Quest for Justice.md b/spaces/cihyFjudo/fairness-paper-search/Parinaam The Result Movie In Hindi Download 3gp The Story of a Mans Quest for Justice.md deleted file mode 100644 index 0a0f8f61155aa07d3a0eb5109436d7a81a14600a..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Parinaam The Result Movie In Hindi Download 3gp The Story of a Mans Quest for Justice.md +++ /dev/null @@ -1,6 +0,0 @@ -

Parinaam The Result Movie In Hindi Download 3gp


Download ••• https://tinurli.com/2uwjp9



- -
-
-
-

diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/ffmpeg/video.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/ffmpeg/video.py deleted file mode 100644 index f0c8cb1c4f1778163caec73acaf856f114f15de3..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/ffmpeg/video.py +++ /dev/null @@ -1,302 +0,0 @@ -#!/usr/local/bin/python3 -# module sys - - -import subprocess - - -def ins_img(input_file, img_data, out_file): - try: - if len(img_data) <= 0: - return False - - img_list = [] - img_list_str = " -i " - png_complex = [] - complex_png_str = "," - - for img in img_data: - if len(img["x"]) == 0: - img["x"] = "0" - - if len(img["y"]) == 0: - img["y"] = "0" - img_list.append(img["img"]) - - if len(img["str_time"]) > 0: - if len(img["end_time"]) > 0: - cmp_str = "overlay=x=%s:y=%s:enable='if(gt(t,%s),lt(t,%s))'" % (img["x"], img["y"], img["str_time"], img["end_time"]) - else: - cmp_str = "overlay=x=%s:y=%s:enable='if(gt(t,%s))'" % (img["x"], img["y"], img["str_time"]) - else: - cmp_str = "overlay=x=%s:y=%s" % (img["x"], img["y"]) - - png_complex.append(cmp_str) - - img_str_list = img_list_str.join(img_list) - complex_png_str = complex_png_str.join(png_complex) - - cmd = "ffmpeg -i %s -i %s -filter_complex \"%s\" -y %s" % (input_file, img_str_list, complex_png_str, out_file) - - res = subprocess.call(cmd, shell=True) - - if res != 0: - return False - return True - - except Exception: - return False - - -# 视频添加动图 gif apng -def ins_dynamic_img(input_file, img_data, out_file): - try: - if img_data["img"] == "": - return False - - if img_data["x"] == "": - img_data["x"] = 0 - - if img_data["y"] == "": - img_data["y"] = 0 - - if img_data["str_time"] != "": - if img_data["end_time"] != "": - comp = "overlay=x=%s:y=%s:shortest=1:enable='if(gt(t,%s), lt(t,%s))'" % (img_data["x"], img_data["y"], - img_data["str_time"], - img_data["end_time"]) - else: - comp = "overlay=x=%s:y=%s:shortest=1:enable='if(gt(t,%s)'" % (img_data["x"], img_data["y"], - img_data["str_time"]) - else: - comp = "overlay=x=%s:y=%s:shortest=1" - - cmd = "ffmpeg -i %s -ignore_loop 0 -i %s -filter_complex \"%s\" -y %s" % (input_file, img_data["img"], comp, - out_file) - - res = subprocess.call(cmd, shell=True) - - if res != 0: - return False - return True - except Exception: - return False - - -# 视频静音 分离音频流 - -def separate_audio(input_file, out_file): - try: - cmd = "ffmpeg -y -i %s -vcodec copy -an %s" % (input_file, out_file) - res = subprocess.call(cmd, shell=True) - - if res != 0: - return False - return True - except Exception: - return False - - -# 视频静音 使用静音帧 为视频静音 -def video_ins_mute_audio(input_file, mute_mp3_file, out_file): - try: - cmd = "ffmpeg -y -i %s -filter_complex '[1:0]apad' -shortest %s" % (input_file, mute_mp3_file, out_file) - res = subprocess.call(cmd, shell=True) - - if res != 0: - return False - return True - except Exception: - return False - - -# 视频设置分辨率 及 码率 -def trans_code(input_file, width, height, rate, out_file): - try: - cmd = "ffmpeg -y -i %s -s %sx%s -b %sk -acodec copy %s" % (input_file, width, height, rate, out_file) - res = subprocess.call(cmd, shell=True) - - if res != 0: - return False - return True - except Exception: - return False - - -# 视频添加弹幕 -def ins_barrage(input_file, barrage, out_file): - try: - if len(barrage) == 0: - return False - - bag = [] - bag_str = ", " - vf_str = "" - - for val in barrage: - if val["fontsize"] == "": - val["fontsize"] = 40 - - if val["fontcolor"] == 
"": - val["fontcolor"] = "white" - - if val["y"] == "": - val["y"] = "100" - - if val["str_time"] == "": - val["str_time"] = 0 - else: - val["str_time"] = int(val["str_time"]) - - if val["speet"] == "": - val["speet"] = 150 - else: - val["speet"] = int(val["speet"]) - - txt = "drawtext=text='%s':fontcolor=%s:fontsize=%s:fontfile=%s:y=%s:x=w-(t-%d)*%d:enable='gte(t,%d)'" % ( - val["context"], - val["fontcolor"], - val["fontsize"], - val["fontfile"], - val["y"], - val["str_time"], - val["speet"], - val["str_time"] - ) - bag.append(txt) - - vf_str = bag_str.join(bag) - - cmd = "ffmpeg -y -i %s -vf \"%s\" %s" % (input_file, vf_str, out_file) - res = subprocess.call(cmd, shell=True) - - if res != 0: - return False - return True - except Exception: - return False - - -# 调整视频速率 speed 小于 1 减速,大于 1 加速 1 等速 -def playback_speed(input_file, speed, out_file): - try: - if speed == "": - speed = "1" - cmd = "ffmpeg -y -i %s -filter_complex \"setpts=PTS/%s\" %s" % (input_file, speed, out_file) - res = subprocess.call(cmd, shell=True) - - if res != 0: - return False - return True - - except Exception: - return False - - -# 视频倒放 ( 视频 + 音频 ) -def a_v_reverse(input_file, out_file): - try: - cmd = "ffmpeg -y -i %s -vf vf reverse -af areverse %s " % (input_file, out_file) - res = subprocess.call(cmd, shell=True) - - if res != 0: - return False - return True - except Exception: - return False - - -# 视频倒放 (视频) -def v_reverse(input_file, out_file): - try: - cmd = "ffmpeg -y -i %s -vf vf reverse %s " % (input_file, out_file) - res = subprocess.call(cmd, shell=True) - - if res != 0: - return False - return True - except Exception: - return False - - -# 视频截取 截取 duration 时长的视频 从 str_second 开始截取 -def v_intercept(input_file, str_second, duration, out_file): - try: - cmd = "ffmpeg -y -i %s -ss %s -t %s -f mp4 %s" % (input_file, str_second, duration, out_file) - res = subprocess.call(cmd, shell=True) - - if res != 0: - return False - return True - except Exception: - return False - - -# 视频合并 严格模式 文件协议合并 -def strict_v_merge(input_file, out_file): - try: - cmd = "ffmpeg -y -f concat -safe 0 -i %s -acodec copy %s" % (input_file, out_file) - res = subprocess.call(cmd, shell=True) - - if res != 0: - return False - return True - except Exception: - return False - - -# 视频合并 有损模式 input_file_list = ["1.mp4", "2.ts", "3.flv"] -def damage_v_merge(input_file_list, out_file): - try: - if len(input_file_list) < 2: - return False - - video = [] - video_n = len(input_file_list) - video_str = " -i " - - comp_list = [] - comp_str = " " - i = 0 - for val in input_file_list: - video.append(val) - v_str = "[%s:a][%s:v]" % (i, i) - comp_list.append(v_str) - - i += 1 - - video_list = video_str.join(video) - com_list_str = comp_str.join(comp_list) - - cmd = "ffmpeg -y -i %s -filter_complex \"%s concat=n=%d:v=1:a=1\" -vcodec h264_nvenc %s" % ( - video_list, - com_list_str, - video_n, - out_file - ) - - res = subprocess.call(cmd, shell=True) - - if res != 0: - return False - return True - except Exception: - return False - - -# 视频转 图片 -def video_trans_img(input_file, out_path, img_prefix, category="png"): - try: - out_path = out_path.rstrip("/") - img = img_prefix + "_%d" - - out_img = "%s/%s.%s" % (out_path, img, category) - cmd = "ffmpeg -i %s -f image2 %s" % (input_file, out_img) - - res = subprocess.call(cmd, shell=True) - - if res != 0: - return False - return True - except Exception: - return False diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_g_a_s_p.py 
b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_g_a_s_p.py deleted file mode 100644 index 10c32a87f4b2cbedac5e346c6f5d578cb7a6b65d..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_g_a_s_p.py +++ /dev/null @@ -1,55 +0,0 @@ -from fontTools.misc.textTools import safeEval -from . import DefaultTable -import struct - - -GASP_SYMMETRIC_GRIDFIT = 0x0004 -GASP_SYMMETRIC_SMOOTHING = 0x0008 -GASP_DOGRAY = 0x0002 -GASP_GRIDFIT = 0x0001 - - -class table__g_a_s_p(DefaultTable.DefaultTable): - def decompile(self, data, ttFont): - self.version, numRanges = struct.unpack(">HH", data[:4]) - assert 0 <= self.version <= 1, "unknown 'gasp' format: %s" % self.version - data = data[4:] - self.gaspRange = {} - for i in range(numRanges): - rangeMaxPPEM, rangeGaspBehavior = struct.unpack(">HH", data[:4]) - self.gaspRange[int(rangeMaxPPEM)] = int(rangeGaspBehavior) - data = data[4:] - assert not data, "too much data" - - def compile(self, ttFont): - version = 0 # ignore self.version - numRanges = len(self.gaspRange) - data = b"" - items = sorted(self.gaspRange.items()) - for rangeMaxPPEM, rangeGaspBehavior in items: - data = data + struct.pack(">HH", rangeMaxPPEM, rangeGaspBehavior) - if rangeGaspBehavior & ~(GASP_GRIDFIT | GASP_DOGRAY): - version = 1 - data = struct.pack(">HH", version, numRanges) + data - return data - - def toXML(self, writer, ttFont): - items = sorted(self.gaspRange.items()) - for rangeMaxPPEM, rangeGaspBehavior in items: - writer.simpletag( - "gaspRange", - [ - ("rangeMaxPPEM", rangeMaxPPEM), - ("rangeGaspBehavior", rangeGaspBehavior), - ], - ) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name != "gaspRange": - return - if not hasattr(self, "gaspRange"): - self.gaspRange = {} - self.gaspRange[safeEval(attrs["rangeMaxPPEM"])] = safeEval( - attrs["rangeGaspBehavior"] - ) diff --git a/spaces/cncn102/bingo1/src/components/providers.tsx b/spaces/cncn102/bingo1/src/components/providers.tsx deleted file mode 100644 index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000 --- a/spaces/cncn102/bingo1/src/components/providers.tsx +++ /dev/null @@ -1,15 +0,0 @@ -'use client' - -import * as React from 'react' -import { ThemeProvider as NextThemesProvider } from 'next-themes' -import { ThemeProviderProps } from 'next-themes/dist/types' - -import { TooltipProvider } from '@/components/ui/tooltip' - -export function Providers({ children, ...props }: ThemeProviderProps) { - return ( - - {children} - - ) -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/adpcmenc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/adpcmenc.c deleted file mode 100644 index 63afffc58f78ad7dc70aa54a9e601dc6d580fa40..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/adpcmenc.c +++ /dev/null @@ -1,1033 +0,0 @@ -/* - * Copyright (c) 2001-2003 The FFmpeg project - * - * first version by Francois Revol (revol@free.fr) - * fringe ADPCM codecs (e.g., DK3, DK4, Westwood) - * by Mike Melanson (melanson@pcisys.net) - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. 
- * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "config_components.h" - -#include "libavutil/opt.h" - -#include "avcodec.h" -#include "put_bits.h" -#include "bytestream.h" -#include "adpcm.h" -#include "adpcm_data.h" -#include "codec_internal.h" -#include "encode.h" - -/** - * @file - * ADPCM encoders - * See ADPCM decoder reference documents for codec information. - */ - -#define CASE_0(codec_id, ...) -#define CASE_1(codec_id, ...) \ - case codec_id: \ - { __VA_ARGS__ } \ - break; -#define CASE_2(enabled, codec_id, ...) \ - CASE_ ## enabled(codec_id, __VA_ARGS__) -#define CASE_3(config, codec_id, ...) \ - CASE_2(config, codec_id, __VA_ARGS__) -#define CASE(codec, ...) \ - CASE_3(CONFIG_ ## codec ## _ENCODER, AV_CODEC_ID_ ## codec, __VA_ARGS__) - -typedef struct TrellisPath { - int nibble; - int prev; -} TrellisPath; - -typedef struct TrellisNode { - uint32_t ssd; - int path; - int sample1; - int sample2; - int step; -} TrellisNode; - -typedef struct ADPCMEncodeContext { - AVClass *class; - int block_size; - - ADPCMChannelStatus status[6]; - TrellisPath *paths; - TrellisNode *node_buf; - TrellisNode **nodep_buf; - uint8_t *trellis_hash; -} ADPCMEncodeContext; - -#define FREEZE_INTERVAL 128 - -static av_cold int adpcm_encode_init(AVCodecContext *avctx) -{ - ADPCMEncodeContext *s = avctx->priv_data; - int channels = avctx->ch_layout.nb_channels; - - /* - * AMV's block size has to match that of the corresponding video - * stream. Relax the POT requirement. - */ - if (avctx->codec->id != AV_CODEC_ID_ADPCM_IMA_AMV && - (s->block_size & (s->block_size - 1))) { - av_log(avctx, AV_LOG_ERROR, "block size must be power of 2\n"); - return AVERROR(EINVAL); - } - - if (avctx->trellis) { - int frontier, max_paths; - - if ((unsigned)avctx->trellis > 16U) { - av_log(avctx, AV_LOG_ERROR, "invalid trellis size\n"); - return AVERROR(EINVAL); - } - - if (avctx->codec->id == AV_CODEC_ID_ADPCM_IMA_SSI || - avctx->codec->id == AV_CODEC_ID_ADPCM_IMA_APM || - avctx->codec->id == AV_CODEC_ID_ADPCM_ARGO || - avctx->codec->id == AV_CODEC_ID_ADPCM_IMA_WS) { - /* - * The current trellis implementation doesn't work for extended - * runs of samples without periodic resets. Disallow it. - */ - av_log(avctx, AV_LOG_ERROR, "trellis not supported\n"); - return AVERROR_PATCHWELCOME; - } - - frontier = 1 << avctx->trellis; - max_paths = frontier * FREEZE_INTERVAL; - if (!FF_ALLOC_TYPED_ARRAY(s->paths, max_paths) || - !FF_ALLOC_TYPED_ARRAY(s->node_buf, 2 * frontier) || - !FF_ALLOC_TYPED_ARRAY(s->nodep_buf, 2 * frontier) || - !FF_ALLOC_TYPED_ARRAY(s->trellis_hash, 65536)) - return AVERROR(ENOMEM); - } - - avctx->bits_per_coded_sample = av_get_bits_per_sample(avctx->codec->id); - - switch (avctx->codec->id) { - CASE(ADPCM_IMA_WAV, - /* each 16 bits sample gives one nibble - and we have 4 bytes per channel overhead */ - avctx->frame_size = (s->block_size - 4 * channels) * 8 / - (4 * channels) + 1; - /* seems frame_size isn't taken into account... 
- have to buffer the samples :-( */ - avctx->block_align = s->block_size; - avctx->bits_per_coded_sample = 4; - ) /* End of CASE */ - CASE(ADPCM_IMA_QT, - avctx->frame_size = 64; - avctx->block_align = 34 * channels; - ) /* End of CASE */ - CASE(ADPCM_MS, - uint8_t *extradata; - /* each 16 bits sample gives one nibble - and we have 7 bytes per channel overhead */ - avctx->frame_size = (s->block_size - 7 * channels) * 2 / channels + 2; - avctx->bits_per_coded_sample = 4; - avctx->block_align = s->block_size; - if (!(avctx->extradata = av_malloc(32 + AV_INPUT_BUFFER_PADDING_SIZE))) - return AVERROR(ENOMEM); - avctx->extradata_size = 32; - extradata = avctx->extradata; - bytestream_put_le16(&extradata, avctx->frame_size); - bytestream_put_le16(&extradata, 7); /* wNumCoef */ - for (int i = 0; i < 7; i++) { - bytestream_put_le16(&extradata, ff_adpcm_AdaptCoeff1[i] * 4); - bytestream_put_le16(&extradata, ff_adpcm_AdaptCoeff2[i] * 4); - } - ) /* End of CASE */ - CASE(ADPCM_YAMAHA, - avctx->frame_size = s->block_size * 2 / channels; - avctx->block_align = s->block_size; - ) /* End of CASE */ - CASE(ADPCM_SWF, - if (avctx->sample_rate != 11025 && - avctx->sample_rate != 22050 && - avctx->sample_rate != 44100) { - av_log(avctx, AV_LOG_ERROR, "Sample rate must be 11025, " - "22050 or 44100\n"); - return AVERROR(EINVAL); - } - avctx->frame_size = 4096; /* Hardcoded according to the SWF spec. */ - avctx->block_align = (2 + channels * (22 + 4 * (avctx->frame_size - 1)) + 7) / 8; - ) /* End of CASE */ - case AV_CODEC_ID_ADPCM_IMA_SSI: - case AV_CODEC_ID_ADPCM_IMA_ALP: - avctx->frame_size = s->block_size * 2 / channels; - avctx->block_align = s->block_size; - break; - CASE(ADPCM_IMA_AMV, - if (avctx->sample_rate != 22050) { - av_log(avctx, AV_LOG_ERROR, "Sample rate must be 22050\n"); - return AVERROR(EINVAL); - } - - if (channels != 1) { - av_log(avctx, AV_LOG_ERROR, "Only mono is supported\n"); - return AVERROR(EINVAL); - } - - avctx->frame_size = s->block_size; - avctx->block_align = 8 + (FFALIGN(avctx->frame_size, 2) / 2); - ) /* End of CASE */ - CASE(ADPCM_IMA_APM, - avctx->frame_size = s->block_size * 2 / channels; - avctx->block_align = s->block_size; - - if (!(avctx->extradata = av_mallocz(28 + AV_INPUT_BUFFER_PADDING_SIZE))) - return AVERROR(ENOMEM); - avctx->extradata_size = 28; - ) /* End of CASE */ - CASE(ADPCM_ARGO, - avctx->frame_size = 32; - avctx->block_align = 17 * channels; - ) /* End of CASE */ - CASE(ADPCM_IMA_WS, - /* each 16 bits sample gives one nibble */ - avctx->frame_size = s->block_size * 2 / channels; - avctx->block_align = s->block_size; - ) /* End of CASE */ - default: - return AVERROR(EINVAL); - } - - return 0; -} - -static av_cold int adpcm_encode_close(AVCodecContext *avctx) -{ - ADPCMEncodeContext *s = avctx->priv_data; - av_freep(&s->paths); - av_freep(&s->node_buf); - av_freep(&s->nodep_buf); - av_freep(&s->trellis_hash); - - return 0; -} - - -static inline uint8_t adpcm_ima_compress_sample(ADPCMChannelStatus *c, - int16_t sample) -{ - int delta = sample - c->prev_sample; - int nibble = FFMIN(7, abs(delta) * 4 / - ff_adpcm_step_table[c->step_index]) + (delta < 0) * 8; - c->prev_sample += ((ff_adpcm_step_table[c->step_index] * - ff_adpcm_yamaha_difflookup[nibble]) / 8); - c->prev_sample = av_clip_int16(c->prev_sample); - c->step_index = av_clip(c->step_index + ff_adpcm_index_table[nibble], 0, 88); - return nibble; -} - -static inline uint8_t adpcm_ima_alp_compress_sample(ADPCMChannelStatus *c, int16_t sample) -{ - const int delta = sample - c->prev_sample; - const int 
step = ff_adpcm_step_table[c->step_index]; - const int sign = (delta < 0) * 8; - - int nibble = FFMIN(abs(delta) * 4 / step, 7); - int diff = (step * nibble) >> 2; - if (sign) - diff = -diff; - - nibble = sign | nibble; - - c->prev_sample += diff; - c->prev_sample = av_clip_int16(c->prev_sample); - c->step_index = av_clip(c->step_index + ff_adpcm_index_table[nibble], 0, 88); - return nibble; -} - -static inline uint8_t adpcm_ima_qt_compress_sample(ADPCMChannelStatus *c, - int16_t sample) -{ - int delta = sample - c->prev_sample; - int diff, step = ff_adpcm_step_table[c->step_index]; - int nibble = 8*(delta < 0); - - delta= abs(delta); - diff = delta + (step >> 3); - - if (delta >= step) { - nibble |= 4; - delta -= step; - } - step >>= 1; - if (delta >= step) { - nibble |= 2; - delta -= step; - } - step >>= 1; - if (delta >= step) { - nibble |= 1; - delta -= step; - } - diff -= delta; - - if (nibble & 8) - c->prev_sample -= diff; - else - c->prev_sample += diff; - - c->prev_sample = av_clip_int16(c->prev_sample); - c->step_index = av_clip(c->step_index + ff_adpcm_index_table[nibble], 0, 88); - - return nibble; -} - -static inline uint8_t adpcm_ms_compress_sample(ADPCMChannelStatus *c, - int16_t sample) -{ - int predictor, nibble, bias; - - predictor = (((c->sample1) * (c->coeff1)) + - (( c->sample2) * (c->coeff2))) / 64; - - nibble = sample - predictor; - if (nibble >= 0) - bias = c->idelta / 2; - else - bias = -c->idelta / 2; - - nibble = (nibble + bias) / c->idelta; - nibble = av_clip_intp2(nibble, 3) & 0x0F; - - predictor += ((nibble & 0x08) ? (nibble - 0x10) : nibble) * c->idelta; - - c->sample2 = c->sample1; - c->sample1 = av_clip_int16(predictor); - - c->idelta = (ff_adpcm_AdaptationTable[nibble] * c->idelta) >> 8; - if (c->idelta < 16) - c->idelta = 16; - - return nibble; -} - -static inline uint8_t adpcm_yamaha_compress_sample(ADPCMChannelStatus *c, - int16_t sample) -{ - int nibble, delta; - - if (!c->step) { - c->predictor = 0; - c->step = 127; - } - - delta = sample - c->predictor; - - nibble = FFMIN(7, abs(delta) * 4 / c->step) + (delta < 0) * 8; - - c->predictor += ((c->step * ff_adpcm_yamaha_difflookup[nibble]) / 8); - c->predictor = av_clip_int16(c->predictor); - c->step = (c->step * ff_adpcm_yamaha_indexscale[nibble]) >> 8; - c->step = av_clip(c->step, 127, 24576); - - return nibble; -} - -static void adpcm_compress_trellis(AVCodecContext *avctx, - const int16_t *samples, uint8_t *dst, - ADPCMChannelStatus *c, int n, int stride) -{ - //FIXME 6% faster if frontier is a compile-time constant - ADPCMEncodeContext *s = avctx->priv_data; - const int frontier = 1 << avctx->trellis; - const int version = avctx->codec->id; - TrellisPath *paths = s->paths, *p; - TrellisNode *node_buf = s->node_buf; - TrellisNode **nodep_buf = s->nodep_buf; - TrellisNode **nodes = nodep_buf; // nodes[] is always sorted by .ssd - TrellisNode **nodes_next = nodep_buf + frontier; - int pathn = 0, froze = -1, i, j, k, generation = 0; - uint8_t *hash = s->trellis_hash; - memset(hash, 0xff, 65536 * sizeof(*hash)); - - memset(nodep_buf, 0, 2 * frontier * sizeof(*nodep_buf)); - nodes[0] = node_buf + frontier; - nodes[0]->ssd = 0; - nodes[0]->path = 0; - nodes[0]->step = c->step_index; - nodes[0]->sample1 = c->sample1; - nodes[0]->sample2 = c->sample2; - if (version == AV_CODEC_ID_ADPCM_IMA_WAV || - version == AV_CODEC_ID_ADPCM_IMA_QT || - version == AV_CODEC_ID_ADPCM_IMA_AMV || - version == AV_CODEC_ID_ADPCM_SWF) - nodes[0]->sample1 = c->prev_sample; - if (version == AV_CODEC_ID_ADPCM_MS) - nodes[0]->step = 
c->idelta; - if (version == AV_CODEC_ID_ADPCM_YAMAHA) { - if (c->step == 0) { - nodes[0]->step = 127; - nodes[0]->sample1 = 0; - } else { - nodes[0]->step = c->step; - nodes[0]->sample1 = c->predictor; - } - } - - for (i = 0; i < n; i++) { - TrellisNode *t = node_buf + frontier*(i&1); - TrellisNode **u; - int sample = samples[i * stride]; - int heap_pos = 0; - memset(nodes_next, 0, frontier * sizeof(TrellisNode*)); - for (j = 0; j < frontier && nodes[j]; j++) { - // higher j have higher ssd already, so they're likely - // to yield a suboptimal next sample too - const int range = (j < frontier / 2) ? 1 : 0; - const int step = nodes[j]->step; - int nidx; - if (version == AV_CODEC_ID_ADPCM_MS) { - const int predictor = ((nodes[j]->sample1 * c->coeff1) + - (nodes[j]->sample2 * c->coeff2)) / 64; - const int div = (sample - predictor) / step; - const int nmin = av_clip(div-range, -8, 6); - const int nmax = av_clip(div+range, -7, 7); - for (nidx = nmin; nidx <= nmax; nidx++) { - const int nibble = nidx & 0xf; - int dec_sample = predictor + nidx * step; -#define STORE_NODE(NAME, STEP_INDEX)\ - int d;\ - uint32_t ssd;\ - int pos;\ - TrellisNode *u;\ - uint8_t *h;\ - dec_sample = av_clip_int16(dec_sample);\ - d = sample - dec_sample;\ - ssd = nodes[j]->ssd + d*(unsigned)d;\ - /* Check for wraparound, skip such samples completely. \ - * Note, changing ssd to a 64 bit variable would be \ - * simpler, avoiding this check, but it's slower on \ - * x86 32 bit at the moment. */\ - if (ssd < nodes[j]->ssd)\ - goto next_##NAME;\ - /* Collapse any two states with the same previous sample value. \ - * One could also distinguish states by step and by 2nd to last - * sample, but the effects of that are negligible. - * Since nodes in the previous generation are iterated - * through a heap, they're roughly ordered from better to - * worse, but not strictly ordered. Therefore, an earlier - * node with the same sample value is better in most cases - * (and thus the current is skipped), but not strictly - * in all cases. Only skipping samples where ssd >= - * ssd of the earlier node with the same sample gives - * slightly worse quality, though, for some reason. */ \ - h = &hash[(uint16_t) dec_sample];\ - if (*h == generation)\ - goto next_##NAME;\ - if (heap_pos < frontier) {\ - pos = heap_pos++;\ - } else {\ - /* Try to replace one of the leaf nodes with the new \ - * one, but try a different slot each time. */\ - pos = (frontier >> 1) +\ - (heap_pos & ((frontier >> 1) - 1));\ - if (ssd > nodes_next[pos]->ssd)\ - goto next_##NAME;\ - heap_pos++;\ - }\ - *h = generation;\ - u = nodes_next[pos];\ - if (!u) {\ - av_assert1(pathn < FREEZE_INTERVAL << avctx->trellis);\ - u = t++;\ - nodes_next[pos] = u;\ - u->path = pathn++;\ - }\ - u->ssd = ssd;\ - u->step = STEP_INDEX;\ - u->sample2 = nodes[j]->sample1;\ - u->sample1 = dec_sample;\ - paths[u->path].nibble = nibble;\ - paths[u->path].prev = nodes[j]->path;\ - /* Sift the newly inserted node up in the heap to \ - * restore the heap property. 
*/\ - while (pos > 0) {\ - int parent = (pos - 1) >> 1;\ - if (nodes_next[parent]->ssd <= ssd)\ - break;\ - FFSWAP(TrellisNode*, nodes_next[parent], nodes_next[pos]);\ - pos = parent;\ - }\ - next_##NAME:; - STORE_NODE(ms, FFMAX(16, - (ff_adpcm_AdaptationTable[nibble] * step) >> 8)); - } - } else if (version == AV_CODEC_ID_ADPCM_IMA_WAV || - version == AV_CODEC_ID_ADPCM_IMA_QT || - version == AV_CODEC_ID_ADPCM_IMA_AMV || - version == AV_CODEC_ID_ADPCM_SWF) { -#define LOOP_NODES(NAME, STEP_TABLE, STEP_INDEX)\ - const int predictor = nodes[j]->sample1;\ - const int div = (sample - predictor) * 4 / STEP_TABLE;\ - int nmin = av_clip(div - range, -7, 6);\ - int nmax = av_clip(div + range, -6, 7);\ - if (nmin <= 0)\ - nmin--; /* distinguish -0 from +0 */\ - if (nmax < 0)\ - nmax--;\ - for (nidx = nmin; nidx <= nmax; nidx++) {\ - const int nibble = nidx < 0 ? 7 - nidx : nidx;\ - int dec_sample = predictor +\ - (STEP_TABLE *\ - ff_adpcm_yamaha_difflookup[nibble]) / 8;\ - STORE_NODE(NAME, STEP_INDEX);\ - } - LOOP_NODES(ima, ff_adpcm_step_table[step], - av_clip(step + ff_adpcm_index_table[nibble], 0, 88)); - } else { //AV_CODEC_ID_ADPCM_YAMAHA - LOOP_NODES(yamaha, step, - av_clip((step * ff_adpcm_yamaha_indexscale[nibble]) >> 8, - 127, 24576)); -#undef LOOP_NODES -#undef STORE_NODE - } - } - - u = nodes; - nodes = nodes_next; - nodes_next = u; - - generation++; - if (generation == 255) { - memset(hash, 0xff, 65536 * sizeof(*hash)); - generation = 0; - } - - // prevent overflow - if (nodes[0]->ssd > (1 << 28)) { - for (j = 1; j < frontier && nodes[j]; j++) - nodes[j]->ssd -= nodes[0]->ssd; - nodes[0]->ssd = 0; - } - - // merge old paths to save memory - if (i == froze + FREEZE_INTERVAL) { - p = &paths[nodes[0]->path]; - for (k = i; k > froze; k--) { - dst[k] = p->nibble; - p = &paths[p->prev]; - } - froze = i; - pathn = 0; - // other nodes might use paths that don't coincide with the frozen one. - // checking which nodes do so is too slow, so just kill them all. - // this also slightly improves quality, but I don't know why. - memset(nodes + 1, 0, (frontier - 1) * sizeof(TrellisNode*)); - } - } - - p = &paths[nodes[0]->path]; - for (i = n - 1; i > froze; i--) { - dst[i] = p->nibble; - p = &paths[p->prev]; - } - - c->predictor = nodes[0]->sample1; - c->sample1 = nodes[0]->sample1; - c->sample2 = nodes[0]->sample2; - c->step_index = nodes[0]->step; - c->step = nodes[0]->step; - c->idelta = nodes[0]->step; -} - -#if CONFIG_ADPCM_ARGO_ENCODER -static inline int adpcm_argo_compress_nibble(const ADPCMChannelStatus *cs, int16_t s, - int shift, int flag) -{ - int nibble; - - if (flag) - nibble = 4 * s - 8 * cs->sample1 + 4 * cs->sample2; - else - nibble = 4 * s - 4 * cs->sample1; - - return (nibble >> shift) & 0x0F; -} - -static int64_t adpcm_argo_compress_block(ADPCMChannelStatus *cs, PutBitContext *pb, - const int16_t *samples, int nsamples, - int shift, int flag) -{ - int64_t error = 0; - - if (pb) { - put_bits(pb, 4, shift - 2); - put_bits(pb, 1, 0); - put_bits(pb, 1, !!flag); - put_bits(pb, 2, 0); - } - - for (int n = 0; n < nsamples; n++) { - /* Compress the nibble, then expand it to see how much precision we've lost. 
*/ - int nibble = adpcm_argo_compress_nibble(cs, samples[n], shift, flag); - int16_t sample = ff_adpcm_argo_expand_nibble(cs, nibble, shift, flag); - - error += abs(samples[n] - sample); - - if (pb) - put_bits(pb, 4, nibble); - } - - return error; -} -#endif - -static int adpcm_encode_frame(AVCodecContext *avctx, AVPacket *avpkt, - const AVFrame *frame, int *got_packet_ptr) -{ - int st, pkt_size, ret; - const int16_t *samples; - const int16_t *const *samples_p; - uint8_t *dst; - ADPCMEncodeContext *c = avctx->priv_data; - int channels = avctx->ch_layout.nb_channels; - - samples = (const int16_t *)frame->data[0]; - samples_p = (const int16_t *const *)frame->extended_data; - st = channels == 2; - - if (avctx->codec_id == AV_CODEC_ID_ADPCM_IMA_SSI || - avctx->codec_id == AV_CODEC_ID_ADPCM_IMA_ALP || - avctx->codec_id == AV_CODEC_ID_ADPCM_IMA_APM || - avctx->codec_id == AV_CODEC_ID_ADPCM_IMA_WS) - pkt_size = (frame->nb_samples * channels + 1) / 2; - else - pkt_size = avctx->block_align; - if ((ret = ff_get_encode_buffer(avctx, avpkt, pkt_size, 0)) < 0) - return ret; - dst = avpkt->data; - - switch(avctx->codec->id) { - CASE(ADPCM_IMA_WAV, - int blocks = (frame->nb_samples - 1) / 8; - - for (int ch = 0; ch < channels; ch++) { - ADPCMChannelStatus *status = &c->status[ch]; - status->prev_sample = samples_p[ch][0]; - /* status->step_index = 0; - XXX: not sure how to init the state machine */ - bytestream_put_le16(&dst, status->prev_sample); - *dst++ = status->step_index; - *dst++ = 0; /* unknown */ - } - - /* stereo: 4 bytes (8 samples) for left, 4 bytes for right */ - if (avctx->trellis > 0) { - uint8_t *buf; - if (!FF_ALLOC_TYPED_ARRAY(buf, channels * blocks * 8)) - return AVERROR(ENOMEM); - for (int ch = 0; ch < channels; ch++) { - adpcm_compress_trellis(avctx, &samples_p[ch][1], - buf + ch * blocks * 8, &c->status[ch], - blocks * 8, 1); - } - for (int i = 0; i < blocks; i++) { - for (int ch = 0; ch < channels; ch++) { - uint8_t *buf1 = buf + ch * blocks * 8 + i * 8; - for (int j = 0; j < 8; j += 2) - *dst++ = buf1[j] | (buf1[j + 1] << 4); - } - } - av_free(buf); - } else { - for (int i = 0; i < blocks; i++) { - for (int ch = 0; ch < channels; ch++) { - ADPCMChannelStatus *status = &c->status[ch]; - const int16_t *smp = &samples_p[ch][1 + i * 8]; - for (int j = 0; j < 8; j += 2) { - uint8_t v = adpcm_ima_compress_sample(status, smp[j ]); - v |= adpcm_ima_compress_sample(status, smp[j + 1]) << 4; - *dst++ = v; - } - } - } - } - ) /* End of CASE */ - CASE(ADPCM_IMA_QT, - PutBitContext pb; - init_put_bits(&pb, dst, pkt_size); - - for (int ch = 0; ch < channels; ch++) { - ADPCMChannelStatus *status = &c->status[ch]; - put_bits(&pb, 9, (status->prev_sample & 0xFFFF) >> 7); - put_bits(&pb, 7, status->step_index); - if (avctx->trellis > 0) { - uint8_t buf[64]; - adpcm_compress_trellis(avctx, &samples_p[ch][0], buf, status, - 64, 1); - for (int i = 0; i < 64; i++) - put_bits(&pb, 4, buf[i ^ 1]); - status->prev_sample = status->predictor; - } else { - for (int i = 0; i < 64; i += 2) { - int t1, t2; - t1 = adpcm_ima_qt_compress_sample(status, samples_p[ch][i ]); - t2 = adpcm_ima_qt_compress_sample(status, samples_p[ch][i + 1]); - put_bits(&pb, 4, t2); - put_bits(&pb, 4, t1); - } - } - } - - flush_put_bits(&pb); - ) /* End of CASE */ - CASE(ADPCM_IMA_SSI, - PutBitContext pb; - init_put_bits(&pb, dst, pkt_size); - - av_assert0(avctx->trellis == 0); - - for (int i = 0; i < frame->nb_samples; i++) { - for (int ch = 0; ch < channels; ch++) { - put_bits(&pb, 4, adpcm_ima_qt_compress_sample(c->status + ch, 
*samples++)); - } - } - - flush_put_bits(&pb); - ) /* End of CASE */ - CASE(ADPCM_IMA_ALP, - PutBitContext pb; - init_put_bits(&pb, dst, pkt_size); - - av_assert0(avctx->trellis == 0); - - for (int n = frame->nb_samples / 2; n > 0; n--) { - for (int ch = 0; ch < channels; ch++) { - put_bits(&pb, 4, adpcm_ima_alp_compress_sample(c->status + ch, *samples++)); - put_bits(&pb, 4, adpcm_ima_alp_compress_sample(c->status + ch, samples[st])); - } - samples += channels; - } - - flush_put_bits(&pb); - ) /* End of CASE */ - CASE(ADPCM_SWF, - const int n = frame->nb_samples - 1; - PutBitContext pb; - init_put_bits(&pb, dst, pkt_size); - - /* NB: This is safe as we don't have AV_CODEC_CAP_SMALL_LAST_FRAME. */ - av_assert0(n == 4095); - - // store AdpcmCodeSize - put_bits(&pb, 2, 2); // set 4-bit flash adpcm format - - // init the encoder state - for (int i = 0; i < channels; i++) { - // clip step so it fits 6 bits - c->status[i].step_index = av_clip_uintp2(c->status[i].step_index, 6); - put_sbits(&pb, 16, samples[i]); - put_bits(&pb, 6, c->status[i].step_index); - c->status[i].prev_sample = samples[i]; - } - - if (avctx->trellis > 0) { - uint8_t buf[8190 /* = 2 * n */]; - adpcm_compress_trellis(avctx, samples + channels, buf, - &c->status[0], n, channels); - if (channels == 2) - adpcm_compress_trellis(avctx, samples + channels + 1, - buf + n, &c->status[1], n, - channels); - for (int i = 0; i < n; i++) { - put_bits(&pb, 4, buf[i]); - if (channels == 2) - put_bits(&pb, 4, buf[n + i]); - } - } else { - for (int i = 1; i < frame->nb_samples; i++) { - put_bits(&pb, 4, adpcm_ima_compress_sample(&c->status[0], - samples[channels * i])); - if (channels == 2) - put_bits(&pb, 4, adpcm_ima_compress_sample(&c->status[1], - samples[2 * i + 1])); - } - } - flush_put_bits(&pb); - ) /* End of CASE */ - CASE(ADPCM_MS, - for (int i = 0; i < channels; i++) { - int predictor = 0; - *dst++ = predictor; - c->status[i].coeff1 = ff_adpcm_AdaptCoeff1[predictor]; - c->status[i].coeff2 = ff_adpcm_AdaptCoeff2[predictor]; - } - for (int i = 0; i < channels; i++) { - if (c->status[i].idelta < 16) - c->status[i].idelta = 16; - bytestream_put_le16(&dst, c->status[i].idelta); - } - for (int i = 0; i < channels; i++) - c->status[i].sample2= *samples++; - for (int i = 0; i < channels; i++) { - c->status[i].sample1 = *samples++; - bytestream_put_le16(&dst, c->status[i].sample1); - } - for (int i = 0; i < channels; i++) - bytestream_put_le16(&dst, c->status[i].sample2); - - if (avctx->trellis > 0) { - const int n = avctx->block_align - 7 * channels; - uint8_t *buf = av_malloc(2 * n); - if (!buf) - return AVERROR(ENOMEM); - if (channels == 1) { - adpcm_compress_trellis(avctx, samples, buf, &c->status[0], n, - channels); - for (int i = 0; i < n; i += 2) - *dst++ = (buf[i] << 4) | buf[i + 1]; - } else { - adpcm_compress_trellis(avctx, samples, buf, - &c->status[0], n, channels); - adpcm_compress_trellis(avctx, samples + 1, buf + n, - &c->status[1], n, channels); - for (int i = 0; i < n; i++) - *dst++ = (buf[i] << 4) | buf[n + i]; - } - av_free(buf); - } else { - for (int i = 7 * channels; i < avctx->block_align; i++) { - int nibble; - nibble = adpcm_ms_compress_sample(&c->status[ 0], *samples++) << 4; - nibble |= adpcm_ms_compress_sample(&c->status[st], *samples++); - *dst++ = nibble; - } - } - ) /* End of CASE */ - CASE(ADPCM_YAMAHA, - int n = frame->nb_samples / 2; - if (avctx->trellis > 0) { - uint8_t *buf = av_malloc(2 * n * 2); - if (!buf) - return AVERROR(ENOMEM); - n *= 2; - if (channels == 1) { - adpcm_compress_trellis(avctx, 
samples, buf, &c->status[0], n, - channels); - for (int i = 0; i < n; i += 2) - *dst++ = buf[i] | (buf[i + 1] << 4); - } else { - adpcm_compress_trellis(avctx, samples, buf, - &c->status[0], n, channels); - adpcm_compress_trellis(avctx, samples + 1, buf + n, - &c->status[1], n, channels); - for (int i = 0; i < n; i++) - *dst++ = buf[i] | (buf[n + i] << 4); - } - av_free(buf); - } else - for (n *= channels; n > 0; n--) { - int nibble; - nibble = adpcm_yamaha_compress_sample(&c->status[ 0], *samples++); - nibble |= adpcm_yamaha_compress_sample(&c->status[st], *samples++) << 4; - *dst++ = nibble; - } - ) /* End of CASE */ - CASE(ADPCM_IMA_APM, - PutBitContext pb; - init_put_bits(&pb, dst, pkt_size); - - av_assert0(avctx->trellis == 0); - - for (int n = frame->nb_samples / 2; n > 0; n--) { - for (int ch = 0; ch < channels; ch++) { - put_bits(&pb, 4, adpcm_ima_qt_compress_sample(c->status + ch, *samples++)); - put_bits(&pb, 4, adpcm_ima_qt_compress_sample(c->status + ch, samples[st])); - } - samples += channels; - } - - flush_put_bits(&pb); - ) /* End of CASE */ - CASE(ADPCM_IMA_AMV, - av_assert0(channels == 1); - - c->status[0].prev_sample = *samples; - bytestream_put_le16(&dst, c->status[0].prev_sample); - bytestream_put_byte(&dst, c->status[0].step_index); - bytestream_put_byte(&dst, 0); - bytestream_put_le32(&dst, avctx->frame_size); - - if (avctx->trellis > 0) { - const int n = frame->nb_samples >> 1; - uint8_t *buf = av_malloc(2 * n); - - if (!buf) - return AVERROR(ENOMEM); - - adpcm_compress_trellis(avctx, samples, buf, &c->status[0], 2 * n, channels); - for (int i = 0; i < n; i++) - bytestream_put_byte(&dst, (buf[2 * i] << 4) | buf[2 * i + 1]); - - samples += 2 * n; - av_free(buf); - } else for (int n = frame->nb_samples >> 1; n > 0; n--) { - int nibble; - nibble = adpcm_ima_compress_sample(&c->status[0], *samples++) << 4; - nibble |= adpcm_ima_compress_sample(&c->status[0], *samples++) & 0x0F; - bytestream_put_byte(&dst, nibble); - } - - if (avctx->frame_size & 1) { - int nibble = adpcm_ima_compress_sample(&c->status[0], *samples++) << 4; - bytestream_put_byte(&dst, nibble); - } - ) /* End of CASE */ - CASE(ADPCM_ARGO, - PutBitContext pb; - init_put_bits(&pb, dst, pkt_size); - - av_assert0(frame->nb_samples == 32); - - for (int ch = 0; ch < channels; ch++) { - int64_t error = INT64_MAX, tmperr = INT64_MAX; - int shift = 2, flag = 0; - int saved1 = c->status[ch].sample1; - int saved2 = c->status[ch].sample2; - - /* Find the optimal coefficients, bail early if we find a perfect result. */ - for (int s = 2; s < 18 && tmperr != 0; s++) { - for (int f = 0; f < 2 && tmperr != 0; f++) { - c->status[ch].sample1 = saved1; - c->status[ch].sample2 = saved2; - tmperr = adpcm_argo_compress_block(c->status + ch, NULL, samples_p[ch], - frame->nb_samples, s, f); - if (tmperr < error) { - shift = s; - flag = f; - error = tmperr; - } - } - } - - /* Now actually do the encode. 
*/ - c->status[ch].sample1 = saved1; - c->status[ch].sample2 = saved2; - adpcm_argo_compress_block(c->status + ch, &pb, samples_p[ch], - frame->nb_samples, shift, flag); - } - - flush_put_bits(&pb); - ) /* End of CASE */ - CASE(ADPCM_IMA_WS, - PutBitContext pb; - init_put_bits(&pb, dst, pkt_size); - - av_assert0(avctx->trellis == 0); - for (int n = frame->nb_samples / 2; n > 0; n--) { - /* stereo: 1 byte (2 samples) for left, 1 byte for right */ - for (int ch = 0; ch < channels; ch++) { - int t1, t2; - t1 = adpcm_ima_compress_sample(&c->status[ch], *samples++); - t2 = adpcm_ima_compress_sample(&c->status[ch], samples[st]); - put_bits(&pb, 4, t2); - put_bits(&pb, 4, t1); - } - samples += channels; - } - flush_put_bits(&pb); - ) /* End of CASE */ - default: - return AVERROR(EINVAL); - } - - *got_packet_ptr = 1; - return 0; -} - -static const enum AVSampleFormat sample_fmts[] = { - AV_SAMPLE_FMT_S16, AV_SAMPLE_FMT_NONE -}; - -static const enum AVSampleFormat sample_fmts_p[] = { - AV_SAMPLE_FMT_S16P, AV_SAMPLE_FMT_NONE -}; - -static const AVChannelLayout ch_layouts[] = { - AV_CHANNEL_LAYOUT_MONO, - AV_CHANNEL_LAYOUT_STEREO, - { 0 }, -}; - -static const AVOption options[] = { - { - .name = "block_size", - .help = "set the block size", - .offset = offsetof(ADPCMEncodeContext, block_size), - .type = AV_OPT_TYPE_INT, - .default_val = {.i64 = 1024}, - .min = 32, - .max = 8192, /* Is this a reasonable upper limit? */ - .flags = AV_OPT_FLAG_ENCODING_PARAM | AV_OPT_FLAG_AUDIO_PARAM - }, - { NULL } -}; - -static const AVClass adpcm_encoder_class = { - .class_name = "ADPCM encoder", - .item_name = av_default_item_name, - .option = options, - .version = LIBAVUTIL_VERSION_INT, -}; - -#define ADPCM_ENCODER_0(id_, name_, sample_fmts_, capabilities_, long_name_) -#define ADPCM_ENCODER_1(id_, name_, sample_fmts_, capabilities_, long_name_) \ -const FFCodec ff_ ## name_ ## _encoder = { \ - .p.name = #name_, \ - CODEC_LONG_NAME(long_name_), \ - .p.type = AVMEDIA_TYPE_AUDIO, \ - .p.id = id_, \ - .p.sample_fmts = sample_fmts_, \ - .p.ch_layouts = ch_layouts, \ - .p.capabilities = capabilities_ | AV_CODEC_CAP_DR1 | \ - AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, \ - .p.priv_class = &adpcm_encoder_class, \ - .priv_data_size = sizeof(ADPCMEncodeContext), \ - .init = adpcm_encode_init, \ - FF_CODEC_ENCODE_CB(adpcm_encode_frame), \ - .close = adpcm_encode_close, \ - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, \ -}; -#define ADPCM_ENCODER_2(enabled, codec_id, name, sample_fmts, capabilities, long_name) \ - ADPCM_ENCODER_ ## enabled(codec_id, name, sample_fmts, capabilities, long_name) -#define ADPCM_ENCODER_3(config, codec_id, name, sample_fmts, capabilities, long_name) \ - ADPCM_ENCODER_2(config, codec_id, name, sample_fmts, capabilities, long_name) -#define ADPCM_ENCODER(codec, name, sample_fmts, capabilities, long_name) \ - ADPCM_ENCODER_3(CONFIG_ ## codec ## _ENCODER, AV_CODEC_ID_ ## codec, \ - name, sample_fmts, capabilities, long_name) - -ADPCM_ENCODER(ADPCM_ARGO, adpcm_argo, sample_fmts_p, 0, "ADPCM Argonaut Games") -ADPCM_ENCODER(ADPCM_IMA_AMV, adpcm_ima_amv, sample_fmts, 0, "ADPCM IMA AMV") -ADPCM_ENCODER(ADPCM_IMA_APM, adpcm_ima_apm, sample_fmts, AV_CODEC_CAP_SMALL_LAST_FRAME, "ADPCM IMA Ubisoft APM") -ADPCM_ENCODER(ADPCM_IMA_ALP, adpcm_ima_alp, sample_fmts, AV_CODEC_CAP_SMALL_LAST_FRAME, "ADPCM IMA High Voltage Software ALP") -ADPCM_ENCODER(ADPCM_IMA_QT, adpcm_ima_qt, sample_fmts_p, 0, "ADPCM IMA QuickTime") -ADPCM_ENCODER(ADPCM_IMA_SSI, adpcm_ima_ssi, sample_fmts, AV_CODEC_CAP_SMALL_LAST_FRAME, "ADPCM IMA Simon 
& Schuster Interactive") -ADPCM_ENCODER(ADPCM_IMA_WAV, adpcm_ima_wav, sample_fmts_p, 0, "ADPCM IMA WAV") -ADPCM_ENCODER(ADPCM_IMA_WS, adpcm_ima_ws, sample_fmts, AV_CODEC_CAP_SMALL_LAST_FRAME, "ADPCM IMA Westwood") -ADPCM_ENCODER(ADPCM_MS, adpcm_ms, sample_fmts, 0, "ADPCM Microsoft") -ADPCM_ENCODER(ADPCM_SWF, adpcm_swf, sample_fmts, 0, "ADPCM Shockwave Flash") -ADPCM_ENCODER(ADPCM_YAMAHA, adpcm_yamaha, sample_fmts, 0, "ADPCM Yamaha") diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aliaspixdec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aliaspixdec.c deleted file mode 100644 index 45155d79cde0a9b3e93143f4c68d84308aae7237..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aliaspixdec.c +++ /dev/null @@ -1,131 +0,0 @@ -/* - * Alias PIX image decoder - * Copyright (C) 2014 Vittorio Giovara - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/intreadwrite.h" - -#include "avcodec.h" -#include "bytestream.h" -#include "codec_internal.h" -#include "decode.h" - -#define ALIAS_HEADER_SIZE 10 - -static int decode_frame(AVCodecContext *avctx, AVFrame *f, - int *got_frame, AVPacket *avpkt) -{ - GetByteContext gb; - int width, height, ret, bits_pixel, pixel; - uint8_t *out_buf; - uint8_t count; - int x, y; - - bytestream2_init(&gb, avpkt->data, avpkt->size); - - if (bytestream2_get_bytes_left(&gb) < ALIAS_HEADER_SIZE) { - av_log(avctx, AV_LOG_ERROR, "Header too small %d.\n", avpkt->size); - return AVERROR_INVALIDDATA; - } - - width = bytestream2_get_be16u(&gb); - height = bytestream2_get_be16u(&gb); - bytestream2_skipu(&gb, 4); // obsolete X, Y offset - bits_pixel = bytestream2_get_be16u(&gb); - - if (bits_pixel == 24) - avctx->pix_fmt = AV_PIX_FMT_BGR24; - else if (bits_pixel == 8) - avctx->pix_fmt = AV_PIX_FMT_GRAY8; - else { - av_log(avctx, AV_LOG_ERROR, "Invalid pixel format.\n"); - return AVERROR_INVALIDDATA; - } - - ret = ff_set_dimensions(avctx, width, height); - if (ret < 0) - return ret; - - if (bytestream2_get_bytes_left(&gb) < width*height / 255) - return AVERROR_INVALIDDATA; - - ret = ff_get_buffer(avctx, f, 0); - if (ret < 0) - return ret; - - f->pict_type = AV_PICTURE_TYPE_I; - f->key_frame = 1; - - x = 0; - y = 1; - out_buf = f->data[0]; - while (bytestream2_get_bytes_left(&gb) > 0) { - int i; - - /* set buffer at the right position at every new line */ - if (x == avctx->width) { - x = 0; - out_buf = f->data[0] + f->linesize[0] * y++; - if (y > avctx->height) { - av_log(avctx, AV_LOG_ERROR, - "Ended frame decoding with %d bytes left.\n", - bytestream2_get_bytes_left(&gb)); - return AVERROR_INVALIDDATA; - } - } - - /* read packet and copy data */ - count = bytestream2_get_byteu(&gb); - if (!count || x + count > avctx->width) { - av_log(avctx, AV_LOG_ERROR, "Invalid 
run length %d.\n", count); - return AVERROR_INVALIDDATA; - } - - if (avctx->pix_fmt == AV_PIX_FMT_BGR24) { - pixel = bytestream2_get_be24(&gb); - for (i = 0; i < count; i++) { - AV_WB24(out_buf, pixel); - out_buf += 3; - } - } else { // AV_PIX_FMT_GRAY8 - pixel = bytestream2_get_byte(&gb); - for (i = 0; i < count; i++) - *out_buf++ = pixel; - } - - x += i; - } - - if (x != width || y != height) { - av_log(avctx, AV_LOG_ERROR, "Picture stopped at %d,%d.\n", x, y); - return AVERROR_INVALIDDATA; - } - - *got_frame = 1; - return avpkt->size; -} - -const FFCodec ff_alias_pix_decoder = { - .p.name = "alias_pix", - CODEC_LONG_NAME("Alias/Wavefront PIX image"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_ALIAS_PIX, - .p.capabilities = AV_CODEC_CAP_DR1, - FF_CODEC_DECODE_CB(decode_frame), -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/eatgq.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/eatgq.c deleted file mode 100644 index 01e1acd4e42feeb50e2304442e96e04a1b540fca..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/eatgq.c +++ /dev/null @@ -1,262 +0,0 @@ -/* - * Electronic Arts TGQ Video Decoder - * Copyright (c) 2007-2008 Peter Ross - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Electronic Arts TGQ Video Decoder - * @author Peter Ross - * - * Technical details here: - * http://wiki.multimedia.cx/index.php?title=Electronic_Arts_TGQ - */ - -#define BITSTREAM_READER_LE - -#include "libavutil/mem_internal.h" - -#include "aandcttab.h" -#include "avcodec.h" -#include "bytestream.h" -#include "codec_internal.h" -#include "decode.h" -#include "eaidct.h" -#include "get_bits.h" - -typedef struct TgqContext { - AVCodecContext *avctx; - int width, height; - int qtable[64]; - DECLARE_ALIGNED(16, int16_t, block)[6][64]; -} TgqContext; - -static av_cold int tgq_decode_init(AVCodecContext *avctx) -{ - TgqContext *s = avctx->priv_data; - s->avctx = avctx; - avctx->framerate = (AVRational){ 15, 1 }; - avctx->pix_fmt = AV_PIX_FMT_YUV420P; - return 0; -} - -static int tgq_decode_block(TgqContext *s, int16_t block[64], GetBitContext *gb) -{ - const uint8_t *scantable = ff_zigzag_direct; - int i, j, value; - block[0] = get_sbits(gb, 8) * s->qtable[0]; - for (i = 1; i < 64;) { - switch (show_bits(gb, 3)) { - case 4: - if (i >= 63) - return AVERROR_INVALIDDATA; - block[scantable[i++]] = 0; - case 0: - block[scantable[i++]] = 0; - skip_bits(gb, 3); - break; - case 5: - case 1: - skip_bits(gb, 2); - value = get_bits(gb, 6); - if (value > 64 - i) - return AVERROR_INVALIDDATA; - for (j = 0; j < value; j++) - block[scantable[i++]] = 0; - break; - case 6: - skip_bits(gb, 3); - block[scantable[i]] = -s->qtable[scantable[i]]; - i++; - break; - case 2: - skip_bits(gb, 3); - block[scantable[i]] = s->qtable[scantable[i]]; - i++; - break; - case 7: // 111b - case 3: // 011b - skip_bits(gb, 2); - if (show_bits(gb, 6) == 0x3F) { - skip_bits(gb, 6); - block[scantable[i]] = get_sbits(gb, 8) * s->qtable[scantable[i]]; - } else { - block[scantable[i]] = get_sbits(gb, 6) * s->qtable[scantable[i]]; - } - i++; - break; - } - } - block[0] += 128 << 4; - return 0; -} - -static void tgq_idct_put_mb(TgqContext *s, int16_t (*block)[64], AVFrame *frame, - int mb_x, int mb_y) -{ - ptrdiff_t linesize = frame->linesize[0]; - uint8_t *dest_y = frame->data[0] + (mb_y * 16 * linesize) + mb_x * 16; - uint8_t *dest_cb = frame->data[1] + (mb_y * 8 * frame->linesize[1]) + mb_x * 8; - uint8_t *dest_cr = frame->data[2] + (mb_y * 8 * frame->linesize[2]) + mb_x * 8; - - ff_ea_idct_put_c(dest_y , linesize, block[0]); - ff_ea_idct_put_c(dest_y + 8, linesize, block[1]); - ff_ea_idct_put_c(dest_y + 8 * linesize , linesize, block[2]); - ff_ea_idct_put_c(dest_y + 8 * linesize + 8, linesize, block[3]); - if (!(s->avctx->flags & AV_CODEC_FLAG_GRAY)) { - ff_ea_idct_put_c(dest_cb, frame->linesize[1], block[4]); - ff_ea_idct_put_c(dest_cr, frame->linesize[2], block[5]); - } -} - -static inline void tgq_dconly(TgqContext *s, unsigned char *dst, - ptrdiff_t dst_stride, int dc) -{ - int level = av_clip_uint8((dc*s->qtable[0] + 2056) >> 4); - int j; - for (j = 0; j < 8; j++) - memset(dst + j * dst_stride, level, 8); -} - -static void tgq_idct_put_mb_dconly(TgqContext *s, AVFrame *frame, - int mb_x, int mb_y, const int8_t *dc) -{ - ptrdiff_t linesize = frame->linesize[0]; - uint8_t *dest_y = frame->data[0] + (mb_y * 16 * linesize) + mb_x * 16; - uint8_t *dest_cb = frame->data[1] + (mb_y * 8 * frame->linesize[1]) + mb_x * 8; - uint8_t *dest_cr = frame->data[2] + (mb_y * 8 * frame->linesize[2]) + mb_x * 8; - 
tgq_dconly(s, dest_y, linesize, dc[0]); - tgq_dconly(s, dest_y + 8, linesize, dc[1]); - tgq_dconly(s, dest_y + 8 * linesize, linesize, dc[2]); - tgq_dconly(s, dest_y + 8 * linesize + 8, linesize, dc[3]); - if (!(s->avctx->flags & AV_CODEC_FLAG_GRAY)) { - tgq_dconly(s, dest_cb, frame->linesize[1], dc[4]); - tgq_dconly(s, dest_cr, frame->linesize[2], dc[5]); - } -} - -static int tgq_decode_mb(TgqContext *s, GetByteContext *gbyte, - AVFrame *frame, int mb_y, int mb_x) -{ - int mode; - int i; - int8_t dc[6]; - - mode = bytestream2_get_byte(gbyte); - if (mode > 12) { - GetBitContext gb; - int ret = init_get_bits8(&gb, gbyte->buffer, FFMIN(bytestream2_get_bytes_left(gbyte), mode)); - if (ret < 0) - return ret; - - for (i = 0; i < 6; i++) { - int ret = tgq_decode_block(s, s->block[i], &gb); - if (ret < 0) - return ret; - } - tgq_idct_put_mb(s, s->block, frame, mb_x, mb_y); - bytestream2_skip(gbyte, mode); - } else { - if (mode == 3) { - memset(dc, bytestream2_get_byte(gbyte), 4); - dc[4] = bytestream2_get_byte(gbyte); - dc[5] = bytestream2_get_byte(gbyte); - } else if (mode == 6) { - bytestream2_get_buffer(gbyte, dc, 6); - } else if (mode == 12) { - for (i = 0; i < 6; i++) { - dc[i] = bytestream2_get_byte(gbyte); - bytestream2_skip(gbyte, 1); - } - } else { - av_log(s->avctx, AV_LOG_ERROR, "unsupported mb mode %i\n", mode); - return -1; - } - tgq_idct_put_mb_dconly(s, frame, mb_x, mb_y, dc); - } - return 0; -} - -static void tgq_calculate_qtable(TgqContext *s, int quant) -{ - int i, j; - const int a = (14 * (100 - quant)) / 100 + 1; - const int b = (11 * (100 - quant)) / 100 + 4; - for (j = 0; j < 8; j++) - for (i = 0; i < 8; i++) - s->qtable[j * 8 + i] = ((a * (j + i) / (7 + 7) + b) * - ff_inv_aanscales[j * 8 + i]) >> (14 - 4); -} - -static int tgq_decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame, AVPacket *avpkt) -{ - const uint8_t *buf = avpkt->data; - int buf_size = avpkt->size; - TgqContext *s = avctx->priv_data; - GetByteContext gbyte; - int x, y, ret; - int big_endian; - - if (buf_size < 16) { - av_log(avctx, AV_LOG_WARNING, "truncated header\n"); - return AVERROR_INVALIDDATA; - } - big_endian = AV_RL32(&buf[4]) > 0x000FFFFF; - bytestream2_init(&gbyte, buf + 8, buf_size - 8); - if (big_endian) { - s->width = bytestream2_get_be16u(&gbyte); - s->height = bytestream2_get_be16u(&gbyte); - } else { - s->width = bytestream2_get_le16u(&gbyte); - s->height = bytestream2_get_le16u(&gbyte); - } - - ret = ff_set_dimensions(s->avctx, s->width, s->height); - if (ret < 0) - return ret; - - tgq_calculate_qtable(s, bytestream2_get_byteu(&gbyte)); - bytestream2_skipu(&gbyte, 3); - - if ((ret = ff_get_buffer(avctx, frame, 0)) < 0) - return ret; - frame->key_frame = 1; - frame->pict_type = AV_PICTURE_TYPE_I; - - for (y = 0; y < FFALIGN(avctx->height, 16) >> 4; y++) - for (x = 0; x < FFALIGN(avctx->width, 16) >> 4; x++) - if (tgq_decode_mb(s, &gbyte, frame, y, x) < 0) - return AVERROR_INVALIDDATA; - - *got_frame = 1; - - return avpkt->size; -} - -const FFCodec ff_eatgq_decoder = { - .p.name = "eatgq", - CODEC_LONG_NAME("Electronic Arts TGQ video"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_TGQ, - .priv_data_size = sizeof(TgqContext), - .init = tgq_decode_init, - FF_CODEC_DECODE_CB(tgq_decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/mpegvideo_mmi.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/mpegvideo_mmi.c deleted file mode 100644 index 
3d5b5e20ab7b464db47eda327feb17e2e4aa52ab..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/mpegvideo_mmi.c +++ /dev/null @@ -1,508 +0,0 @@ -/* - * Loongson SIMD optimized mpegvideo - * - * Copyright (c) 2015 Loongson Technology Corporation Limited - * Copyright (c) 2015 Zhou Xiaoyong - * Zhang Shuangshuang - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "mpegvideo_mips.h" -#include "libavutil/mips/mmiutils.h" - -void ff_dct_unquantize_h263_intra_mmi(MpegEncContext *s, int16_t *block, - int n, int qscale) -{ - int64_t level, nCoeffs; - double ftmp[6]; - mips_reg addr[1]; - union mmi_intfloat64 qmul_u, qadd_u; - DECLARE_VAR_ALL64; - - qmul_u.i = qscale << 1; - av_assert2(s->block_last_index[n]>=0 || s->h263_aic); - - if (!s->h263_aic) { - if (n<4) - level = block[0] * s->y_dc_scale; - else - level = block[0] * s->c_dc_scale; - qadd_u.i = (qscale-1) | 1; - } else { - qadd_u.i = 0; - level = block[0]; - } - - if(s->ac_pred) - nCoeffs = 63; - else - nCoeffs = s->inter_scantable.raster_end[s->block_last_index[n]]; - - __asm__ volatile ( - "pxor %[ftmp0], %[ftmp0], %[ftmp0] \n\t" - "packsswh %[qmul], %[qmul], %[qmul] \n\t" - "packsswh %[qmul], %[qmul], %[qmul] \n\t" - "packsswh %[qadd], %[qadd], %[qadd] \n\t" - "packsswh %[qadd], %[qadd], %[qadd] \n\t" - "psubh %[ftmp0], %[ftmp0], %[qadd] \n\t" - "pxor %[ftmp5], %[ftmp5], %[ftmp5] \n\t" - ".p2align 4 \n\t" - - "1: \n\t" - PTR_ADDU "%[addr0], %[block], %[nCoeffs] \n\t" - MMI_LDC1(%[ftmp1], %[addr0], 0x00) - MMI_LDC1(%[ftmp2], %[addr0], 0x08) - "mov.d %[ftmp3], %[ftmp1] \n\t" - "mov.d %[ftmp4], %[ftmp2] \n\t" - "pmullh %[ftmp1], %[ftmp1], %[qmul] \n\t" - "pmullh %[ftmp2], %[ftmp2], %[qmul] \n\t" - "pcmpgth %[ftmp3], %[ftmp3], %[ftmp5] \n\t" - "pcmpgth %[ftmp4], %[ftmp4], %[ftmp5] \n\t" - "pxor %[ftmp1], %[ftmp1], %[ftmp3] \n\t" - "pxor %[ftmp2], %[ftmp2], %[ftmp4] \n\t" - "paddh %[ftmp1], %[ftmp1], %[ftmp0] \n\t" - "paddh %[ftmp2], %[ftmp2], %[ftmp0] \n\t" - "pxor %[ftmp3], %[ftmp3], %[ftmp1] \n\t" - "pxor %[ftmp4], %[ftmp4], %[ftmp2] \n\t" - "pcmpeqh %[ftmp1], %[ftmp1], %[ftmp0] \n\t" - "pcmpeqh %[ftmp2], %[ftmp2], %[ftmp0] \n\t" - "pandn %[ftmp1], %[ftmp1], %[ftmp3] \n\t" - "pandn %[ftmp2], %[ftmp2], %[ftmp4] \n\t" - PTR_ADDIU "%[nCoeffs], %[nCoeffs], 0x10 \n\t" - MMI_SDC1(%[ftmp1], %[addr0], 0x00) - MMI_SDC1(%[ftmp2], %[addr0], 0x08) - "blez %[nCoeffs], 1b \n\t" - : [ftmp0]"=&f"(ftmp[0]), [ftmp1]"=&f"(ftmp[1]), - [ftmp2]"=&f"(ftmp[2]), [ftmp3]"=&f"(ftmp[3]), - [ftmp4]"=&f"(ftmp[4]), [ftmp5]"=&f"(ftmp[5]), - RESTRICT_ASM_ALL64 - [addr0]"=&r"(addr[0]) - : [block]"r"((mips_reg)(block+nCoeffs)), - [nCoeffs]"r"((mips_reg)(2*(-nCoeffs))), - [qmul]"f"(qmul_u.f), [qadd]"f"(qadd_u.f) - : "memory" - ); - - block[0] = level; -} - -void 
ff_dct_unquantize_h263_inter_mmi(MpegEncContext *s, int16_t *block, - int n, int qscale) -{ - int64_t nCoeffs; - double ftmp[6]; - mips_reg addr[1]; - union mmi_intfloat64 qmul_u, qadd_u; - DECLARE_VAR_ALL64; - - qmul_u.i = qscale << 1; - qadd_u.i = (qscale - 1) | 1; - av_assert2(s->block_last_index[n]>=0 || s->h263_aic); - nCoeffs = s->inter_scantable.raster_end[s->block_last_index[n]]; - - __asm__ volatile ( - "packsswh %[qmul], %[qmul], %[qmul] \n\t" - "packsswh %[qmul], %[qmul], %[qmul] \n\t" - "pxor %[ftmp0], %[ftmp0], %[ftmp0] \n\t" - "packsswh %[qadd], %[qadd], %[qadd] \n\t" - "packsswh %[qadd], %[qadd], %[qadd] \n\t" - "psubh %[ftmp0], %[ftmp0], %[qadd] \n\t" - "pxor %[ftmp5], %[ftmp5], %[ftmp5] \n\t" - ".p2align 4 \n\t" - "1: \n\t" - PTR_ADDU "%[addr0], %[block], %[nCoeffs] \n\t" - MMI_LDC1(%[ftmp1], %[addr0], 0x00) - MMI_LDC1(%[ftmp2], %[addr0], 0x08) - "mov.d %[ftmp3], %[ftmp1] \n\t" - "mov.d %[ftmp4], %[ftmp2] \n\t" - "pmullh %[ftmp1], %[ftmp1], %[qmul] \n\t" - "pmullh %[ftmp2], %[ftmp2], %[qmul] \n\t" - "pcmpgth %[ftmp3], %[ftmp3], %[ftmp5] \n\t" - "pcmpgth %[ftmp4], %[ftmp4], %[ftmp5] \n\t" - "pxor %[ftmp1], %[ftmp1], %[ftmp3] \n\t" - "pxor %[ftmp2], %[ftmp2], %[ftmp4] \n\t" - "paddh %[ftmp1], %[ftmp1], %[ftmp0] \n\t" - "paddh %[ftmp2], %[ftmp2], %[ftmp0] \n\t" - "pxor %[ftmp3], %[ftmp3], %[ftmp1] \n\t" - "pxor %[ftmp4], %[ftmp4], %[ftmp2] \n\t" - "pcmpeqh %[ftmp1], %[ftmp1], %[ftmp0] \n\t" - "pcmpeqh %[ftmp2], %[ftmp2], %[ftmp0] \n\t" - "pandn %[ftmp1], %[ftmp1], %[ftmp3] \n\t" - "pandn %[ftmp2], %[ftmp2], %[ftmp4] \n\t" - PTR_ADDIU "%[nCoeffs], %[nCoeffs], 0x10 \n\t" - MMI_SDC1(%[ftmp1], %[addr0], 0x00) - MMI_SDC1(%[ftmp2], %[addr0], 0x08) - "blez %[nCoeffs], 1b \n\t" - : [ftmp0]"=&f"(ftmp[0]), [ftmp1]"=&f"(ftmp[1]), - [ftmp2]"=&f"(ftmp[2]), [ftmp3]"=&f"(ftmp[3]), - [ftmp4]"=&f"(ftmp[4]), [ftmp5]"=&f"(ftmp[5]), - RESTRICT_ASM_ALL64 - [addr0]"=&r"(addr[0]) - : [block]"r"((mips_reg)(block+nCoeffs)), - [nCoeffs]"r"((mips_reg)(2*(-nCoeffs))), - [qmul]"f"(qmul_u.f), [qadd]"f"(qadd_u.f) - : "memory" - ); -} - -void ff_dct_unquantize_mpeg1_intra_mmi(MpegEncContext *s, int16_t *block, - int n, int qscale) -{ - int64_t nCoeffs; - const uint16_t *quant_matrix; - int block0; - double ftmp[10]; - uint64_t tmp[1]; - mips_reg addr[1]; - DECLARE_VAR_ALL64; - DECLARE_VAR_ADDRT; - - av_assert2(s->block_last_index[n]>=0); - nCoeffs = s->intra_scantable.raster_end[s->block_last_index[n]] + 1; - - if (n<4) - block0 = block[0] * s->y_dc_scale; - else - block0 = block[0] * s->c_dc_scale; - - /* XXX: only mpeg1 */ - quant_matrix = s->intra_matrix; - - __asm__ volatile ( - "dli %[tmp0], 0x0f \n\t" - "pcmpeqh %[ftmp0], %[ftmp0], %[ftmp0] \n\t" - "dmtc1 %[tmp0], %[ftmp4] \n\t" - "dmtc1 %[qscale], %[ftmp1] \n\t" - "psrlh %[ftmp0], %[ftmp0], %[ftmp4] \n\t" - "packsswh %[ftmp1], %[ftmp1], %[ftmp1] \n\t" - "packsswh %[ftmp1], %[ftmp1], %[ftmp1] \n\t" - "or %[addr0], %[nCoeffs], $0 \n\t" - ".p2align 4 \n\t" - - "1: \n\t" - MMI_LDXC1(%[ftmp2], %[addr0], %[block], 0x00) - MMI_LDXC1(%[ftmp3], %[addr0], %[block], 0x08) - "mov.d %[ftmp4], %[ftmp2] \n\t" - "mov.d %[ftmp5], %[ftmp3] \n\t" - MMI_LDXC1(%[ftmp6], %[addr0], %[quant], 0x00) - MMI_LDXC1(%[ftmp7], %[addr0], %[quant], 0x08) - "pmullh %[ftmp6], %[ftmp6], %[ftmp1] \n\t" - "pmullh %[ftmp7], %[ftmp7], %[ftmp1] \n\t" - "pxor %[ftmp8], %[ftmp8], %[ftmp8] \n\t" - "pxor %[ftmp9], %[ftmp9], %[ftmp9] \n\t" - "pcmpgth %[ftmp8], %[ftmp8], %[ftmp2] \n\t" - "pcmpgth %[ftmp9], %[ftmp9], %[ftmp3] \n\t" - "pxor %[ftmp2], %[ftmp2], %[ftmp8] \n\t" - "pxor %[ftmp3], 
%[ftmp3], %[ftmp9] \n\t" - "psubh %[ftmp2], %[ftmp2], %[ftmp8] \n\t" - "psubh %[ftmp3], %[ftmp3], %[ftmp9] \n\t" - "pmullh %[ftmp2], %[ftmp2], %[ftmp6] \n\t" - "pmullh %[ftmp3], %[ftmp3], %[ftmp7] \n\t" - "pxor %[ftmp6], %[ftmp6], %[ftmp6] \n\t" - "pxor %[ftmp7], %[ftmp7], %[ftmp7] \n\t" - "pcmpeqh %[ftmp6], %[ftmp6], %[ftmp4] \n\t" - "dli %[tmp0], 0x03 \n\t" - "pcmpeqh %[ftmp7], %[ftmp7], %[ftmp5] \n\t" - "dmtc1 %[tmp0], %[ftmp4] \n\t" - "psrah %[ftmp2], %[ftmp2], %[ftmp4] \n\t" - "psrah %[ftmp3], %[ftmp3], %[ftmp4] \n\t" - "psubh %[ftmp2], %[ftmp2], %[ftmp0] \n\t" - "psubh %[ftmp3], %[ftmp3], %[ftmp0] \n\t" - "por %[ftmp2], %[ftmp2], %[ftmp0] \n\t" - "por %[ftmp3], %[ftmp3], %[ftmp0] \n\t" - "pxor %[ftmp2], %[ftmp2], %[ftmp8] \n\t" - "pxor %[ftmp3], %[ftmp3], %[ftmp9] \n\t" - "psubh %[ftmp2], %[ftmp2], %[ftmp8] \n\t" - "psubh %[ftmp3], %[ftmp3], %[ftmp9] \n\t" - "pandn %[ftmp6], %[ftmp6], %[ftmp2] \n\t" - "pandn %[ftmp7], %[ftmp7], %[ftmp3] \n\t" - MMI_SDXC1(%[ftmp6], %[addr0], %[block], 0x00) - MMI_SDXC1(%[ftmp7], %[addr0], %[block], 0x08) - PTR_ADDIU "%[addr0], %[addr0], 0x10 \n\t" - "bltz %[addr0], 1b \n\t" - : [ftmp0]"=&f"(ftmp[0]), [ftmp1]"=&f"(ftmp[1]), - [ftmp2]"=&f"(ftmp[2]), [ftmp3]"=&f"(ftmp[3]), - [ftmp4]"=&f"(ftmp[4]), [ftmp5]"=&f"(ftmp[5]), - [ftmp6]"=&f"(ftmp[6]), [ftmp7]"=&f"(ftmp[7]), - [ftmp8]"=&f"(ftmp[8]), [ftmp9]"=&f"(ftmp[9]), - [tmp0]"=&r"(tmp[0]), - RESTRICT_ASM_ALL64 - RESTRICT_ASM_ADDRT - [addr0]"=&r"(addr[0]) - : [block]"r"((mips_reg)(block+nCoeffs)), - [quant]"r"((mips_reg)(quant_matrix+nCoeffs)), - [nCoeffs]"r"((mips_reg)(2*(-nCoeffs))), - [qscale]"r"(qscale) - : "memory" - ); - - block[0] = block0; -} - -void ff_dct_unquantize_mpeg1_inter_mmi(MpegEncContext *s, int16_t *block, - int n, int qscale) -{ - int64_t nCoeffs; - const uint16_t *quant_matrix; - double ftmp[10]; - uint64_t tmp[1]; - mips_reg addr[1]; - DECLARE_VAR_ALL64; - DECLARE_VAR_ADDRT; - - av_assert2(s->block_last_index[n] >= 0); - nCoeffs = s->intra_scantable.raster_end[s->block_last_index[n]] + 1; - quant_matrix = s->inter_matrix; - - __asm__ volatile ( - "dli %[tmp0], 0x0f \n\t" - "pcmpeqh %[ftmp0], %[ftmp0], %[ftmp0] \n\t" - "dmtc1 %[tmp0], %[ftmp4] \n\t" - "dmtc1 %[qscale], %[ftmp1] \n\t" - "psrlh %[ftmp0], %[ftmp0], %[ftmp4] \n\t" - "packsswh %[ftmp1], %[ftmp1], %[ftmp1] \n\t" - "packsswh %[ftmp1], %[ftmp1], %[ftmp1] \n\t" - "or %[addr0], %[nCoeffs], $0 \n\t" - ".p2align 4 \n\t" - - "1: \n\t" - MMI_LDXC1(%[ftmp2], %[addr0], %[block], 0x00) - MMI_LDXC1(%[ftmp3], %[addr0], %[block], 0x08) - "mov.d %[ftmp4], %[ftmp2] \n\t" - "mov.d %[ftmp5], %[ftmp3] \n\t" - MMI_LDXC1(%[ftmp6], %[addr0], %[quant], 0x00) - MMI_LDXC1(%[ftmp7], %[addr0], %[quant], 0x08) - "pmullh %[ftmp6], %[ftmp6], %[ftmp1] \n\t" - "pmullh %[ftmp7], %[ftmp7], %[ftmp1] \n\t" - "pxor %[ftmp8], %[ftmp8], %[ftmp8] \n\t" - "pxor %[ftmp9], %[ftmp9], %[ftmp9] \n\t" - "pcmpgth %[ftmp8], %[ftmp8], %[ftmp2] \n\t" - "pcmpgth %[ftmp9], %[ftmp9], %[ftmp3] \n\t" - "pxor %[ftmp2], %[ftmp2], %[ftmp8] \n\t" - "pxor %[ftmp3], %[ftmp3], %[ftmp9] \n\t" - "psubh %[ftmp2], %[ftmp2], %[ftmp8] \n\t" - "psubh %[ftmp3], %[ftmp3], %[ftmp9] \n\t" - "paddh %[ftmp2], %[ftmp2], %[ftmp2] \n\t" - "paddh %[ftmp3], %[ftmp3], %[ftmp3] \n\t" - "paddh %[ftmp2], %[ftmp2], %[ftmp0] \n\t" - "paddh %[ftmp3], %[ftmp3], %[ftmp0] \n\t" - "pmullh %[ftmp2], %[ftmp2], %[ftmp6] \n\t" - "pmullh %[ftmp3], %[ftmp3], %[ftmp7] \n\t" - "pxor %[ftmp6], %[ftmp6], %[ftmp6] \n\t" - "pxor %[ftmp7], %[ftmp7], %[ftmp7] \n\t" - "pcmpeqh %[ftmp6], %[ftmp6], %[ftmp4] \n\t" - "dli %[tmp0], 
0x04 \n\t" - "pcmpeqh %[ftmp7], %[ftmp7], %[ftmp5] \n\t" - "dmtc1 %[tmp0], %[ftmp4] \n\t" - "psrah %[ftmp2], %[ftmp2], %[ftmp4] \n\t" - "psrah %[ftmp3], %[ftmp3], %[ftmp4] \n\t" - "psubh %[ftmp2], %[ftmp2], %[ftmp0] \n\t" - "psubh %[ftmp3], %[ftmp3], %[ftmp0] \n\t" - "por %[ftmp2], %[ftmp2], %[ftmp0] \n\t" - "por %[ftmp3], %[ftmp3], %[ftmp0] \n\t" - "pxor %[ftmp2], %[ftmp2], %[ftmp8] \n\t" - "pxor %[ftmp3], %[ftmp3], %[ftmp9] \n\t" - "psubh %[ftmp2], %[ftmp2], %[ftmp8] \n\t" - "psubh %[ftmp3], %[ftmp3], %[ftmp9] \n\t" - "pandn %[ftmp6], %[ftmp6], %[ftmp2] \n\t" - "pandn %[ftmp7], %[ftmp7], %[ftmp3] \n\t" - MMI_SDXC1(%[ftmp6], %[addr0], %[block], 0x00) - MMI_SDXC1(%[ftmp7], %[addr0], %[block], 0x08) - PTR_ADDIU "%[addr0], %[addr0], 0x10 \n\t" - "bltz %[addr0], 1b \n\t" - : [ftmp0]"=&f"(ftmp[0]), [ftmp1]"=&f"(ftmp[1]), - [ftmp2]"=&f"(ftmp[2]), [ftmp3]"=&f"(ftmp[3]), - [ftmp4]"=&f"(ftmp[4]), [ftmp5]"=&f"(ftmp[5]), - [ftmp6]"=&f"(ftmp[6]), [ftmp7]"=&f"(ftmp[7]), - [ftmp8]"=&f"(ftmp[8]), [ftmp9]"=&f"(ftmp[9]), - [tmp0]"=&r"(tmp[0]), - RESTRICT_ASM_ALL64 - RESTRICT_ASM_ADDRT - [addr0]"=&r"(addr[0]) - : [block]"r"((mips_reg)(block+nCoeffs)), - [quant]"r"((mips_reg)(quant_matrix+nCoeffs)), - [nCoeffs]"r"((mips_reg)(2*(-nCoeffs))), - [qscale]"r"(qscale) - : "memory" - ); -} - -void ff_dct_unquantize_mpeg2_intra_mmi(MpegEncContext *s, int16_t *block, - int n, int qscale) -{ - uint64_t nCoeffs; - const uint16_t *quant_matrix; - int block0; - double ftmp[10]; - uint64_t tmp[1]; - mips_reg addr[1]; - DECLARE_VAR_ALL64; - DECLARE_VAR_ADDRT; - - assert(s->block_last_index[n]>=0); - - if (s->alternate_scan) - nCoeffs = 63; - else - nCoeffs = s->intra_scantable.raster_end[s->block_last_index[n]]; - - if (n < 4) - block0 = block[0] * s->y_dc_scale; - else - block0 = block[0] * s->c_dc_scale; - - quant_matrix = s->intra_matrix; - - __asm__ volatile ( - "dli %[tmp0], 0x0f \n\t" - "pcmpeqh %[ftmp0], %[ftmp0], %[ftmp0] \n\t" - "mtc1 %[tmp0], %[ftmp3] \n\t" - "mtc1 %[qscale], %[ftmp9] \n\t" - "psrlh %[ftmp0], %[ftmp0], %[ftmp3] \n\t" - "packsswh %[ftmp9], %[ftmp9], %[ftmp9] \n\t" - "packsswh %[ftmp9], %[ftmp9], %[ftmp9] \n\t" - "or %[addr0], %[nCoeffs], $0 \n\t" - ".p2align 4 \n\t" - - "1: \n\t" - MMI_LDXC1(%[ftmp1], %[addr0], %[block], 0x00) - MMI_LDXC1(%[ftmp2], %[addr0], %[block], 0x08) - "mov.d %[ftmp3], %[ftmp1] \n\t" - "mov.d %[ftmp4], %[ftmp2] \n\t" - MMI_LDXC1(%[ftmp5], %[addr0], %[quant], 0x00) - MMI_LDXC1(%[ftmp6], %[addr0], %[quant], 0x08) - "pmullh %[ftmp5], %[ftmp5], %[ftmp9] \n\t" - "pmullh %[ftmp6], %[ftmp6], %[ftmp9] \n\t" - "pxor %[ftmp7], %[ftmp7], %[ftmp7] \n\t" - "pxor %[ftmp8], %[ftmp8], %[ftmp8] \n\t" - "pcmpgth %[ftmp7], %[ftmp7], %[ftmp1] \n\t" - "pcmpgth %[ftmp8], %[ftmp8], %[ftmp2] \n\t" - "pxor %[ftmp1], %[ftmp1], %[ftmp7] \n\t" - "pxor %[ftmp2], %[ftmp2], %[ftmp8] \n\t" - "psubh %[ftmp1], %[ftmp1], %[ftmp7] \n\t" - "psubh %[ftmp2], %[ftmp2], %[ftmp8] \n\t" - "pmullh %[ftmp1], %[ftmp1], %[ftmp5] \n\t" - "pmullh %[ftmp2], %[ftmp2], %[ftmp6] \n\t" - "pxor %[ftmp5], %[ftmp5], %[ftmp5] \n\t" - "pxor %[ftmp6], %[ftmp6], %[ftmp6] \n\t" - "pcmpeqh %[ftmp5], %[ftmp5], %[ftmp3] \n\t" - "dli %[tmp0], 0x03 \n\t" - "pcmpeqh %[ftmp6] , %[ftmp6], %[ftmp4] \n\t" - "mtc1 %[tmp0], %[ftmp3] \n\t" - "psrah %[ftmp1], %[ftmp1], %[ftmp3] \n\t" - "psrah %[ftmp2], %[ftmp2], %[ftmp3] \n\t" - "pxor %[ftmp1], %[ftmp1], %[ftmp7] \n\t" - "pxor %[ftmp2], %[ftmp2], %[ftmp8] \n\t" - "psubh %[ftmp1], %[ftmp1], %[ftmp7] \n\t" - "psubh %[ftmp2], %[ftmp2], %[ftmp8] \n\t" - "pandn %[ftmp5], %[ftmp5], %[ftmp1] \n\t" - 
"pandn %[ftmp6], %[ftmp6], %[ftmp2] \n\t" - MMI_SDXC1(%[ftmp5], %[addr0], %[block], 0x00) - MMI_SDXC1(%[ftmp6], %[addr0], %[block], 0x08) - PTR_ADDIU "%[addr0], %[addr0], 0x10 \n\t" - "blez %[addr0], 1b \n\t" - : [ftmp0]"=&f"(ftmp[0]), [ftmp1]"=&f"(ftmp[1]), - [ftmp2]"=&f"(ftmp[2]), [ftmp3]"=&f"(ftmp[3]), - [ftmp4]"=&f"(ftmp[4]), [ftmp5]"=&f"(ftmp[5]), - [ftmp6]"=&f"(ftmp[6]), [ftmp7]"=&f"(ftmp[7]), - [ftmp8]"=&f"(ftmp[8]), [ftmp9]"=&f"(ftmp[9]), - [tmp0]"=&r"(tmp[0]), - RESTRICT_ASM_ALL64 - RESTRICT_ASM_ADDRT - [addr0]"=&r"(addr[0]) - : [block]"r"((mips_reg)(block+nCoeffs)), - [quant]"r"((mips_reg)(quant_matrix+nCoeffs)), - [nCoeffs]"r"((mips_reg)(2*(-nCoeffs))), - [qscale]"r"(qscale) - : "memory" - ); - - block[0]= block0; -} - -void ff_denoise_dct_mmi(MpegEncContext *s, int16_t *block) -{ - const int intra = s->mb_intra; - int *sum = s->dct_error_sum[intra]; - uint16_t *offset = s->dct_offset[intra]; - double ftmp[8]; - mips_reg addr[1]; - DECLARE_VAR_ALL64; - - s->dct_count[intra]++; - - __asm__ volatile( - "pxor %[ftmp0], %[ftmp0], %[ftmp0] \n\t" - "1: \n\t" - MMI_LDC1(%[ftmp1], %[block], 0x00) - "pxor %[ftmp2], %[ftmp2], %[ftmp2] \n\t" - MMI_LDC1(%[ftmp3], %[block], 0x08) - "pxor %[ftmp4], %[ftmp4], %[ftmp4] \n\t" - "pcmpgth %[ftmp2], %[ftmp2], %[ftmp1] \n\t" - "pcmpgth %[ftmp4], %[ftmp4], %[ftmp3] \n\t" - "pxor %[ftmp1], %[ftmp1], %[ftmp2] \n\t" - "pxor %[ftmp3], %[ftmp3], %[ftmp4] \n\t" - "psubh %[ftmp1], %[ftmp1], %[ftmp2] \n\t" - "psubh %[ftmp3], %[ftmp3], %[ftmp4] \n\t" - MMI_LDC1(%[ftmp6], %[offset], 0x00) - "mov.d %[ftmp5], %[ftmp1] \n\t" - "psubush %[ftmp1], %[ftmp1], %[ftmp6] \n\t" - MMI_LDC1(%[ftmp6], %[offset], 0x08) - "mov.d %[ftmp7], %[ftmp3] \n\t" - "psubush %[ftmp3], %[ftmp3], %[ftmp6] \n\t" - "pxor %[ftmp1], %[ftmp1], %[ftmp2] \n\t" - "pxor %[ftmp3], %[ftmp3], %[ftmp4] \n\t" - "psubh %[ftmp1], %[ftmp1], %[ftmp2] \n\t" - "psubh %[ftmp3], %[ftmp3], %[ftmp4] \n\t" - MMI_SDC1(%[ftmp1], %[block], 0x00) - MMI_SDC1(%[ftmp3], %[block], 0x08) - "mov.d %[ftmp1], %[ftmp5] \n\t" - "mov.d %[ftmp3], %[ftmp7] \n\t" - "punpcklhw %[ftmp5], %[ftmp5], %[ftmp0] \n\t" - "punpckhhw %[ftmp1], %[ftmp1], %[ftmp0] \n\t" - "punpcklhw %[ftmp7], %[ftmp7], %[ftmp0] \n\t" - "punpckhhw %[ftmp3], %[ftmp3], %[ftmp0] \n\t" - MMI_LDC1(%[ftmp2], %[sum], 0x00) - "paddw %[ftmp5], %[ftmp5], %[ftmp2] \n\t" - MMI_LDC1(%[ftmp2], %[sum], 0x08) - "paddw %[ftmp1], %[ftmp1], %[ftmp2] \n\t" - MMI_LDC1(%[ftmp2], %[sum], 0x10) - "paddw %[ftmp7], %[ftmp7], %[ftmp2] \n\t" - MMI_LDC1(%[ftmp2], %[sum], 0x18) - "paddw %[ftmp3], %[ftmp3], %[ftmp2] \n\t" - MMI_SDC1(%[ftmp5], %[sum], 0x00) - MMI_SDC1(%[ftmp1], %[sum], 0x08) - MMI_SDC1(%[ftmp7], %[sum], 0x10) - MMI_SDC1(%[ftmp3], %[sum], 0x18) - PTR_ADDIU "%[block], %[block], 0x10 \n\t" - PTR_ADDIU "%[sum], %[sum], 0x20 \n\t" - PTR_SUBU "%[addr0], %[block1], %[block] \n\t" - PTR_ADDIU "%[offset], %[offset], 0x10 \n\t" - "bgtz %[addr0], 1b \n\t" - : [ftmp0]"=&f"(ftmp[0]), [ftmp1]"=&f"(ftmp[1]), - [ftmp2]"=&f"(ftmp[2]), [ftmp3]"=&f"(ftmp[3]), - [ftmp4]"=&f"(ftmp[4]), [ftmp5]"=&f"(ftmp[5]), - [ftmp6]"=&f"(ftmp[6]), [ftmp7]"=&f"(ftmp[7]), - RESTRICT_ASM_ALL64 - [addr0]"=&r"(addr[0]), - [block]"+&r"(block), [sum]"+&r"(sum), - [offset]"+&r"(offset) - : [block1]"r"(block+64) - : "memory" - ); -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Download Mighty Party MOD and Unlock All Heroes.md b/spaces/congsaPfin/Manga-OCR/logs/How to Download Mighty Party MOD and Unlock All Heroes.md deleted file mode 100644 index 
dcf314595874116d096c3fb5bef18f0b72ccbe41..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Download Mighty Party MOD and Unlock All Heroes.md +++ /dev/null @@ -1,91 +0,0 @@ - -

Download Mighty Party Mod: A Fun and Strategic RPG Game

-

If you are looking for a game that combines strategy, action, and fantasy, then you should try Mighty Party. This is a game that will challenge your skills and creativity as you build your own team of heroes and fight against other players in real-time battles. But what if you want to enjoy the game without any limitations or restrictions? Well, there is a solution for that: download Mighty Party mod. In this article, we will tell you what Mighty Party is, why you should download its mod version, and how to do it easily and safely.

-

What is Mighty Party?

-

Mighty Party is a mobile game that was released in 2017 by Panoramik Games. It is a role-playing game (RPG) that features chess-like mechanics and card-based gameplay. In this game, you can create your own squad of heroes from different classes, such as warriors, mages, archers, healers, and more. You can also customize your heroes with various skills, items, and outfits. The game has several modes, such as campaign, arena, guild wars, raids, and events. You can play solo or with friends in online multiplayer matches.

-

download mighty party mod


Download File ✯✯✯ https://urlca.com/2uO51g



-

A fast-paced and tactical game

-

One of the main attractions of Mighty Party is its fast-paced and tactical gameplay. Each match lasts only three minutes, so you have to think quickly and act smartly. You have to place your heroes on the board strategically, using their abilities and synergies to defeat your opponents. You also have to manage your resources, such as mana and cards, to optimize your performance. The game is easy to learn but hard to master, so you will never get bored of it.

-

A diverse and colorful world

-

Another thing that makes Mighty Party stand out is its diverse and colorful world. The game has a fantasy theme that is inspired by various myths, legends, and cultures. You can explore different regions, such as the Kingdom of Loyal, the Dark Forest, the Frozen Lands, and the Underworld. You can also encounter different creatures, such as dragons, unicorns, zombies, demons, and gods. The game has a vibrant and cartoonish art style that is appealing to both young and old players.

-

A social and competitive experience

-

The last thing that we want to mention about Mighty Party is its social and competitive experience. The game has a large and active community of players from all over the world. You can chat with them, join guilds with them, or challenge them in duels or tournaments. You can also climb the leaderboards and earn rewards for your achievements. The game is constantly updated with new content and features to keep you entertained.

-

Why download Mighty Party mod?

-

Now that you know what Mighty Party is, you might be wondering why you should download its mod version. Well, there are many reasons for that. Here are some of them:

-

Unlimited resources and features

-

The first reason to download Mighty Party mod is that it gives you unlimited resources and features. This means that you can get unlimited gems, gold, chests, cards, heroes, skins, and more. You can also unlock all the modes, levels, events, and achievements in the game. This way, you can enjoy the game without any limitations or restrictions. You can also experiment with different strategies and combinations without worrying about wasting your resources.

-

download mighty party mod apk latest version
-download mighty party mod unlimited resources
-download mighty party mod for android
-download mighty party mod free gems and gold
-download mighty party mod menu
-download mighty party mod ios
-download mighty party mod online
-download mighty party mod no root
-download mighty party mod apk 2023
-download mighty party mod apk modyolo[^1^]
-download mighty party mod apk revdl
-download mighty party mod apk rexdl
-download mighty party mod apk happymod
-download mighty party mod apk an1
-download mighty party mod apk android 1
-download mighty party mod apk offline
-download mighty party mod apk obb
-download mighty party mod apk unlimited money
-download mighty party mod apk unlimited everything
-download mighty party mod apk unlimited gems and coins
-download mighty party mod apk hack
-download mighty party mod apk cheat
-download mighty party mod apk vip
-download mighty party mod apk pro
-download mighty party mod apk premium
-download mighty party mod apk cracked
-download mighty party mod apk unlocked
-download mighty party mod apk full version
-download mighty party mod apk mega mod
-download mighty party mod apk god mode
-download mighty party mod apk one hit kill
-download mighty party mod apk high damage
-download mighty party mod apk instant win
-download mighty party mod apk auto win
-download mighty party mod apk easy win
-download mighty party mod apk anti ban
-download mighty party mod apk no ads
-download mighty party mod apk free shopping
-download mighty party mod apk free upgrade
-download mighty party mod apk free summon
-download mighty par

-

Easy and safe installation

-

The second reason to download Mighty Party mod is that it has an easy and safe installation process. You don't need to root or jailbreak your device to use it. You also don't need to download any additional files or apps to use it. You just need to download the mod apk file from a reliable website and install it on your device. The mod apk file is scanned and tested for viruses and malware, so you don't have to worry about your device's security.

-

Compatible with most devices

-

The third reason to download Mighty Party mod is that it is compatible with most devices. You can use it on any Android or iOS device that supports the original game. You don't need to have a high-end device or a strong internet connection to play the game smoothly. The mod apk file is optimized and compressed to reduce its size and improve its performance. You can also update the mod apk file whenever there is a new version of the game available.

-

How to download Mighty Party mod?

-

Now that you know why you should download Mighty Party mod, you might be wondering how to do it. Well, it is very simple and easy. Just follow these steps:

-

Step 1: Visit the mod website

-

The first step is to visit the mod website where you can find the latest version of Mighty Party mod. There are many websites that offer this mod, but not all of them are trustworthy and safe. We recommend that you use [this website], which is one of the most popular and reliable sources of mods for various games. You can also read the reviews and ratings of other users who have downloaded and used the mod from this website.

-

Step 2: Choose your preferred version

-

The second step is to choose your preferred version of Mighty Party mod. There are different versions of the mod that offer different features and options. For example, some versions may have more resources and items than others, or some may have more cheats and hacks than others. You can compare the features and options of each version and select the one that suits your needs and preferences.

-

Step 3: Download and install the mod apk file

-

The third step is to download and install the mod apk file on your device. Once you have chosen your preferred version of Mighty Party mod, you can click on the download button and wait for the file to be downloaded. The file size may vary depending on the version, but it should not take too long to download. After the file is downloaded, you can open it and follow the instructions to install it on your device. You may need to enable unknown sources in your device settings to allow the installation of the mod apk file.
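
If the download site publishes a checksum for the file, it is worth verifying it before installing. Below is a minimal Python sketch of that check; the apk file name and the expected hash are placeholders made up for illustration, to be replaced with the real values from the site.

```python
import hashlib
from pathlib import Path

APK_PATH = Path("mighty_party_mod.apk")  # placeholder file name
EXPECTED_SHA256 = "replace-with-the-hash-published-by-the-site"  # placeholder

# Hash the downloaded file and compare it against the published value.
digest = hashlib.sha256(APK_PATH.read_bytes()).hexdigest()
if digest == EXPECTED_SHA256:
    print("Checksum matches: the file is the one the site published.")
else:
    print(f"Checksum mismatch ({digest}): do not install this file.")
```

A mismatch does not tell you what went wrong, only that the bytes you received are not the bytes the site described, which is exactly when you should not tap install.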

-

Conclusion

-

Mighty Party is a fun and strategic RPG game that will keep you entertained for hours. You can create your own team of heroes and fight against other players in real-time battles. You can also explore a diverse and colorful world, join a social and competitive community, and enjoy various modes and events. However, if you want to have more fun and freedom in the game, you should download Mighty Party mod. This will give you unlimited resources and features, easy and safe installation, and compatibility with most devices. You can download Mighty Party mod from [this website] by following these simple steps:

-
    -
  • Visit the mod website
  • Choose your preferred version
  • Download and install the mod apk file
-

We hope this article was helpful for you. If you have any questions or feedback, please let us know in the comments section below. Thank you for reading!

-

FAQs

-

Here are some frequently asked questions about Mighty Party mod:

-

Q: Is Mighty Party mod free?

-

A: Yes, Mighty Party mod is free to download and use. You don't need to pay anything to enjoy its unlimited resources and features.

-

Q: Is Mighty Party mod legal?

-

A: No, Mighty Party mod is not legal. It violates the terms and conditions of the original game developer and publisher. It also infringes their intellectual property rights. Therefore, using Mighty Party mod may result in legal actions or penalties.

-

Q: Is Mighty Party mod safe?

-

A: Yes, Mighty Party mod is safe to use as long as you download it from a reliable website like [this one]. The mod apk file is scanned and tested for viruses and malware before being uploaded on the website. However, you should always be careful when downloading any files or apps from unknown sources on the internet.

-

Q: Will Mighty Party mod work on my device?

-

A: Yes, Mighty Party mod will work on any Android or iOS device that supports the original game. You don't need to have a high-end device or a strong internet connection to play the game smoothly. The mod apk file is optimized and compressed to reduce its size and improve its performance. You can also update the mod apk file whenever there is a new version of the game available.

-

Q: How can I uninstall Mighty Party mod?

-

A: If you want to uninstall Mighty Party mod, you can do it easily and quickly. You just need to go to your device settings, find the app manager, and select Mighty Party mod. Then, you can tap on the uninstall button and confirm your action. The mod apk file will be removed from your device along with its data and cache.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/SDS Grand Cross Mod APK Everything You Need to Know.md b/spaces/congsaPfin/Manga-OCR/logs/SDS Grand Cross Mod APK Everything You Need to Know.md deleted file mode 100644 index c9b896d6cd2a6b8e8ea992b84dfcb768e6f04359..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/SDS Grand Cross Mod APK Everything You Need to Know.md +++ /dev/null @@ -1,94 +0,0 @@ - -

SDS Grand Cross Mod APK: A Guide for Anime Fans

-

If you are a fan of the anime series The Seven Deadly Sins, you might have heard of or played SDS Grand Cross, a mobile RPG based on the popular manga and anime. The game features stunning graphics, dynamic combat, faithful storylines and a plethora of characters from the original series. You can join Meliodas, Elizabeth, Ban, King and other heroes in their quest to save Britannia from the demons.

-

sds grand cross mod apk


DOWNLOADhttps://urlca.com/2uOeXl



-

But what if you want to enjoy the game even more? What if you want to unlock all the features, customize your heroes, breeze through the battles and experience the ultimate anime adventure? Well, that's where SDS Grand Cross mod apk comes in handy. A mod apk is a modified version of the original game that allows you to access various cheats and hacks that enhance your gameplay. In this article, we will tell you everything you need to know about SDS Grand Cross mod apk, including how to download it, how to play it, what its features are, its pros and cons, tips and tricks, and FAQs.

-

How to Download and Install SDS Grand Cross Mod APK

-

The first step to enjoy SDS Grand Cross mod apk is to download it from a reliable source. There are many websites that claim to offer the mod apk file, but not all of them are safe or trustworthy. Some of them might contain viruses, malware or spyware that can harm your device or steal your personal information. Therefore, you need to be careful when choosing where to download the mod apk file.

-

One of the best sources for SDS Grand Cross mod apk is [The Seven Deadly Sins Grand Cross Mod APK 1.3.2 (Unlimited money)](^3^). This website offers a verified and updated version of the mod apk file that works on most Android devices. You can also check out other reviews and ratings from other users before downloading it.

-

After downloading the mod apk file from the website above, you need to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than the official Google Play Store. To do this, go to your device settings, then security, then toggle on the unknown sources option. You might see a warning message that says installing apps from unknown sources can harm your device, but don't worry: as long as you download the mod apk file from a trusted source, you should be fine.

-

After enabling unknown sources, you can proceed to install the mod apk file. To do this, locate the file in your device storage, then tap on it to start the installation process. You might see some prompts asking you to grant permissions to the app, such as access to your storage, contacts, phone and location. You need to allow these permissions for the mod apk to work properly. Once the installation is complete, you can open the app and enjoy SDS Grand Cross mod apk.
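If your device has USB debugging enabled, you can also sideload the file from a computer instead of tapping through the on-device installer. A minimal sketch using the standard `adb install` command; the file name is a placeholder:

```python
import subprocess
from pathlib import Path

APK = Path("sds-grand-cross-mod-1.3.2.apk")  # placeholder file name

# `-r` replaces an already-installed copy while keeping its app data,
# which is handy when updating the mod to a newer version.
subprocess.run(["adb", "install", "-r", str(APK)], check=True)
```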

-

sds grand cross mod apk unlimited money
-sds grand cross mod apk damage
-sds grand cross mod apk god mode
-sds grand cross mod apk android republic
-sds grand cross mod apk latest version
-sds grand cross mod apk free download
-sds grand cross mod apk no root
-sds grand cross mod apk ios
-sds grand cross mod apk 1.3.2
-sds grand cross mod apk 7ds
-sds grand cross mod apk hack
-sds grand cross mod apk offline
-sds grand cross mod apk global
-sds grand cross mod apk jp
-sds grand cross mod apk reddit
-sds grand cross mod apk unlimited stamina
-sds grand cross mod apk aoe skills
-sds grand cross mod apk exclusive modding team
-sds grand cross mod apk youtube
-sds grand cross mod apk apps.satsds.com
-sds grand cross modded apk download
-sds grand cross hacked apk android
-sds grand cross cheat apk ios
-sds grand cross premium apk free
-sds grand cross cracked apk latest
-sds grand cross unlocked apk 2023
-sds grand cross patched apk no ban
-sds grand cross full apk online
-sds grand cross updated apk 1.3.3
-sds grand cross mega mod apk 7.0.0

-

How to Play SDS Grand Cross Mod APK

-

Now that you have installed SDS Grand Cross mod apk, you are ready to play the game and experience its amazing features. The game is very easy to play and has a user-friendly interface that guides you through the basics. Here are some steps on how to play SDS Grand Cross mod apk:

-
    -
  • First, you need to create your account and choose your server. You can use your email, Facebook or Google account to sign up for the game. You can also choose between different servers based on your region and language preference. The game will automatically assign you a random username, but you can change it later if you want.
  • -
  • Next, you need to navigate the game interface and access different modes. The game has a main menu that lets you access various features such as story mode, battle mode, tavern, shop, draw, hero box and settings. You can also tap on the icons on the screen to interact with different characters and objects. The game also has a chat system that lets you communicate with other players and join guilds.
  • -
  • Then, you need to build your team of heroes and equip them with gear and costumes. The game allows you to collect over 100 heroes from the anime series, each with their own skills, stats and personalities. You can form a team of up to four heroes and assign them different roles such as attack, defense, support or recovery. You can also equip them with various gear and costumes that enhance their abilities and appearance.
  • -
  • Finally, you need to use skills, ultimate moves and passive abilities in combat. The game has a turn-based combat system that requires strategy and timing. You can use skills by tapping on the cards at the bottom of the screen, which have different effects such as damage, heal, buff or debuff. You can also use ultimate moves by filling up the gauge at the top of the screen, which unleash powerful attacks that can change the tide of battle. You can also activate passive abilities by meeting certain conditions, such as having a certain amount of HP or allies.
  • -
-

By following these steps, you can play SDS Grand Cross mod apk and enjoy its exciting gameplay.

-

Tips and Tricks for SDS Grand Cross Mod APK

-

While playing SDS Grand Cross mod apk is fun and easy, there are some tips and tricks that can help you improve your performance and progress faster in the game. Here are some of them:

-
    -
• Use skill synthesis and card fusion to optimize your strategy. Skill synthesis is when you combine two cards of the same skill type and rank to create a higher-rank skill that has more power and effect. Card fusion is when you combine two cards of different skill types but the same rank to create a new skill that has both effects. These techniques can help you create more versatile and effective skills for your team (the combination rules are sketched in code below).
  • -
  • Earn diamonds, gold, stamina and other resources by completing quests, events and achievements. Diamonds are the premium currency of the game that can be used to draw new heroes, gear and costumes. Gold is the basic currency of the game that can be used to upgrade your heroes, gear and costumes. Stamina is the energy of the game that limits how much you can play per day. Other resources include hero coins, anvils, awakening stones and more. You can earn these resources by completing various quests, events and achievements that reward you with generous amounts of them.
  • -
  • Use the mod apk features such as damage hack, god mode, unlimited stamina and AOE skills to dominate the game. The mod apk features are the main reason why you would want to use SDS Grand Cross mod apk. They allow you to access various cheats and hacks that make the game easier and more fun. For example, you can use damage hack to increase your damage output, god mode to become invincible, unlimited stamina to play as much as you want, and AOE skills to hit all enemies at once. These features can help you win any battle and progress faster in the game.
  • -
-

By following these tips and tricks, you can improve your skills and knowledge of SDS Grand Cross mod apk and enjoy the game even more.
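To make the synthesis and fusion rules from the first tip concrete, here is a small Python sketch. The card names, the rank cap and the exact rules are illustrative assumptions, not data taken from the game:

```python
from dataclasses import dataclass

MAX_RANK = 3  # illustrative cap; the game defines its own maximum

@dataclass(frozen=True)
class Card:
    skill: str
    rank: int
    effects: tuple

def synthesize(a: Card, b: Card) -> Card:
    """Skill synthesis: same skill, same rank -> one card of the next rank."""
    if a.skill != b.skill or a.rank != b.rank or a.rank >= MAX_RANK:
        raise ValueError("synthesis needs two identical-skill, equal-rank cards")
    return Card(a.skill, a.rank + 1, a.effects)

def fuse(a: Card, b: Card) -> Card:
    """Card fusion: different skills, same rank -> one card with both effects."""
    if a.skill == b.skill or a.rank != b.rank:
        raise ValueError("fusion needs two different skills of equal rank")
    return Card(f"{a.skill}+{b.skill}", a.rank, a.effects + b.effects)

heal = Card("heal", 1, ("restore HP",))
buff = Card("buff", 1, ("raise attack",))
print(synthesize(heal, Card("heal", 1, ("restore HP",))))  # heal, rank 2
print(fuse(heal, buff))                                    # heal+buff, rank 1
```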

-

Pros and Cons of SDS Grand Cross Mod APK

-

As with any mod apk, SDS Grand Cross mod apk has its pros and cons that you should be aware of before using it. Here are some of them:

-
<table>
<tr><th>Pros</th><th>Cons</th></tr>
<tr><td>- Enhanced gameplay: The mod apk features make the game more exciting and enjoyable, as you can customize your heroes, use powerful skills, and win any battle with ease.</td><td>- Security risks: The mod apk file might contain viruses, malware or spyware that can harm your device or steal your personal information. You should always download the mod apk file from a trusted source and scan it with an antivirus before installing it.</td></tr>
<tr><td>- Faster progress: The mod apk features help you progress faster in the game, as you can earn more resources, rank up your heroes, unlock new features and complete quests and events quickly.</td><td>- Compatibility issues: The mod apk file might not work on some devices or with some updates of the original game. You should always check the compatibility of the mod apk file with your device and the game version before installing it.</td></tr>
<tr><td>- More fun: The mod apk features make the game more fun and entertaining, as you can experience the anime adventure in a new way, explore different modes and scenarios, and interact with other players and characters.</td><td>- Possible bans: The mod apk file might violate the terms and conditions of the original game and get detected by the anti-cheat system. You might face consequences such as account suspension or deletion if you use the mod apk file.</td></tr>
</table>
-

These are some of the pros and cons of SDS Grand Cross mod apk that you should consider before using it.

-

Conclusion

-

In conclusion, SDS Grand Cross mod apk is a modified version of the original game that allows you to access various cheats and hacks that enhance your gameplay. It is a great option for anime fans who want to enjoy the game even more, unlock all the features, customize their heroes, breeze through the battles and experience the ultimate anime adventure. However, it also has some drawbacks such as security risks, compatibility issues and possible bans that you should be careful of before using it.

-

If you are interested in trying SDS Grand Cross mod apk, you can download it from [The Seven Deadly Sins Grand Cross Mod APK 1.3.2 (Unlimited money)], a reliable source that offers a verified and updated version of the mod apk file. You can also follow our guide above on how to download it, how to play it, its features, pros and cons, tips and tricks, and FAQs. We hope that this article has helped you learn more about SDS Grand Cross mod apk and decide whether it is worth trying.

-

We would love to hear your feedback and questions about SDS Grand Cross mod apk. Feel free to leave a comment below or contact us via email. Thank you for reading!

-

FAQs

-

Here are some common questions that readers might have about SDS Grand Cross mod apk:

-
    -
  1. Is SDS Grand Cross mod apk safe to use?
    SDS Grand Cross mod apk is safe to use as long as you download it from a trusted source such as [The Seven Deadly Sins Grand Cross Mod APK 1.3.2 (Unlimited money)] and scan it with an antivirus before installing it. However, you should also be aware of the security risks that come with using any mod apk file such as viruses, malware or spyware that can harm your device or steal your personal information.
2. Will I get banned for using SDS Grand Cross mod apk?
    You might get banned, because you are using unauthorized cheats and hacks that give you an unfair advantage over other players and affect the game balance and economy. You might face consequences such as account suspension or deletion if you use SDS Grand Cross mod apk and get detected by the anti-cheat system. -
  3. Does SDS Grand Cross mod apk work on iOS devices?
    SDS Grand Cross mod apk does not work on iOS devices as it is only compatible with Android devices. If you want to use SDS Grand Cross mod apk on your iOS device, you will need to use an Android emulator such as BlueStacks or NoxPlayer that allows you to run Android apps on your PC or Mac. However, this might affect the performance and quality of the game and the mod apk features.
  4. -
  5. Can I play SDS Grand Cross mod apk online with other players?
    SDS Grand Cross mod apk allows you to play online with other players who are also using the mod apk file. You can join guilds, chat, cooperate and compete with them in various modes such as guild boss, guild wars, death match and PvP. However, you cannot play online with players who are using the original game as they are on different servers and versions of the game.
  6. -
  7. Can I update SDS Grand Cross mod apk to the latest version of the game?
    SDS Grand Cross mod apk is not updated automatically to the latest version of the game. You will need to download and install the new version of the mod apk file from [The Seven Deadly Sins Grand Cross Mod APK 1.3.2 (Unlimited money)] or another trusted source whenever there is an update of the original game. You should also backup your game data before updating to avoid losing your progress and settings.
  8. -
  9. Can I use SDS Grand Cross mod apk with my existing account?
    SDS Grand Cross mod apk allows you to use your existing account that you created with the original game. You can log in with your email, Facebook or Google account and access your game data and progress. However, you should also be careful of using your existing account with SDS Grand Cross mod apk as you might risk losing it or getting banned if you use the mod apk features too much or too obviously.
  10. -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Subway Surfers APK Download Whats New in the 2023 Version of the Fun and Addictive Game.md b/spaces/congsaPfin/Manga-OCR/logs/Subway Surfers APK Download Whats New in the 2023 Version of the Fun and Addictive Game.md deleted file mode 100644 index 754f82e6e11bd221a7f93bddfc5d350d604a34b9..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Subway Surfers APK Download Whats New in the 2023 Version of the Fun and Addictive Game.md +++ /dev/null @@ -1,124 +0,0 @@ -
-

Subway Surfers Download APK 2023: How to Get the Latest Version of the Popular Arcade Game

-

Do you love arcade games that are fast-paced, colorful, and addictive? If so, you might have heard of Subway Surfers, one of the most popular games in this genre. Subway Surfers is a game where you have to run away from the grumpy inspector and his dog while dodging trains and obstacles and collecting coins and power-ups.

Subway Surfers is a game that has been around since 2012, but it is still updated regularly with new features, locations, and characters. It has over 1 billion downloads on Google Play and is rated 4.5 out of 5 stars by more than 35 million users. It is also available for iOS, Windows Phone, and Amazon Kindle devices.

-

But what if you want to get the latest version of Subway Surfers without waiting for the official update on your device? Or what if you want to access some of the features that are not available in your region or device? Or what if you want to enjoy Subway Surfers without any ads or in-app purchases?

-

subway surfers download apk 2023


Download File –––––>>> https://urlca.com/2uOdig



-

If you answered yes to any of these questions, then you might be interested in downloading Subway Surfers APK 2023. APK stands for Android Package Kit, and it is a file format that allows you to install apps on your Android device from sources other than Google Play. By downloading Subway Surfers APK 2023, you can get the latest version of the game with all the new features, locations, and characters. You can also customize your game experience by modifying some of the settings and options.

-

However, downloading Subway Surfers APK 2023 is not as simple as downloading any other app from Google Play. There are some challenges and risks involved in this process, such as finding a reliable source for the APK file, enabling unknown sources on your device, and installing the APK file correctly. If you are not careful, you might end up with a fake or malicious APK file that could harm your device or compromise your privacy.

-

That's why we have created this guide to help you download Subway Surfers APK 2023 safely and easily. We will show you how to find a trustworthy source for Subway Surfers APK 2023, how to enable unknown sources on your device, how to download and install Subway Surfers APK 2023 on your device, and how to play Subway Surfers on your device. We will also answer some of the frequently asked questions about Subway Surfers APK 2023. So, let's get started!

-

How to Download Subway Surfers APK 2023

-

The first step to download Subway Surfers APK 2023 is to find a reliable source for the APK file. There are many websites that offer APK files for various apps and games, but not all of them are trustworthy. Some of them might provide fake or outdated APK files that could cause errors or crashes on your device. Some of them might even contain malware or viruses that could steal your personal information or damage your device.

-

So, how do you avoid these risks and find a reputable source for Subway Surfers APK 2023? Here are some tips:

-

Step 1: Find a Reliable Source for Subway Surfers APK

-
    -
  • Check the reviews and ratings of the APK websites. One of the easiest ways to determine the credibility of an APK website is to look at the reviews and ratings from other users. You can read the comments and feedback from people who have downloaded and used the APK files from the website. You can also check the ratings and scores given by different review platforms, such as Trustpilot, Sitejabber, or Scamadviser. If the website has mostly positive reviews and ratings, it is likely to be safe and reliable.
  • -
  • Compare different versions and updates of Subway Surfers APK. Another way to verify the authenticity of an APK website is to compare the different versions and updates of Subway Surfers APK offered by the website. You can check the release date, file size, version number, and changelog of each Subway Surfers APK file on the website. You can also compare them with the official version of Subway Surfers on Google Play. If the website has the latest version of Subway Surfers APK with all the new features, locations, and characters, it is likely to be legitimate.
  • -
• Avoid fake or malicious APK files. Finally, be careful about files that look like Subway Surfers APK but are actually harmful: some have similar names or icons yet contain entirely different apps or games, and some ask for unnecessary permissions or access to your device's data or functions. To avoid them, always check the details and information of each Subway Surfers APK file before downloading it, and scan the file with an antivirus or malware scanner before installing it (a quick structural sanity check you can run yourself is sketched after this list).
  • -
-

By following these tips, you can find a reliable source for Subway Surfers APK 2023 that will provide you with a safe and enjoyable gaming experience.
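On top of reviews and a scanner, you can run the structural sanity check mentioned above yourself: a genuine APK is a zip archive that contains at least a manifest and compiled code. The sketch below only proves the file is shaped like an APK, not that it is safe, and the file name is a placeholder:

```python
import zipfile

APK = "subway-surfers-2023.apk"  # placeholder file name
REQUIRED = {"AndroidManifest.xml", "classes.dex"}

try:
    with zipfile.ZipFile(APK) as zf:
        missing = REQUIRED - set(zf.namelist())
        if missing:
            print("suspicious: missing", ", ".join(sorted(missing)))
        elif zf.testzip() is not None:  # returns the first corrupt entry, if any
            print("suspicious: corrupt entry in archive")
        else:
            print("structurally valid APK -- still scan it before installing")
except zipfile.BadZipFile:
    print("not a zip archive at all, so definitely not a valid APK")
```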

-

Step 2: Enable Unknown Sources on Your Android Device

-

The next step to download Subway Surfers APK 2023 is to enable unknown sources on your Android device. Unknown sources are sources that are not verified by Google Play, such as APK websites, third-party app stores, or file-sharing platforms. By default, Android devices do not allow the installation of apps from unknown sources, as they might pose a security risk. However, if you want to download Subway Surfers APK 2023 from a reliable source, you need to enable unknown sources on your device.

-

subway surfers apk latest version 2023
-subway surfers game download for android 2023
-subway surfers mod apk unlimited coins and keys 2023
-subway surfers hack apk download 2023
-subway surfers free download for android phone 2023
-subway surfers world tour 2023 apk
-subway surfers update 2023 download
-subway surfers offline apk download 2023
-subway surfers cheats apk 2023
-subway surfers old version apk download 2023
-subway surfers new version 2023 download
-subway surfers online play on pc 2023
-subway surfers unlimited money and keys apk 2023
-subway surfers game install free download 2023
-subway surfers android tv apk 2023
-subway surfers sybo games apk download 2023
-subway surfers kiloo games apk download 2023
-subway surfers apk for android 4.4 kitkat 2023
-subway surfers apk for android 5.0 lollipop 2023
-subway surfers apk for android 6.0 marshmallow 2023
-subway surfers apk for android 7.0 nougat 2023
-subway surfers apk for android 8.0 oreo 2023
-subway surfers apk for android 9.0 pie 2023
-subway surfers apk for android 10.0 q 2023
-subway surfers apk for android 11.0 r 2023
-subway surfers apk for android 12.0 s 2023
-subway surfers apk for pc windows xp/7/8/10/11/12/13/14/15/16/17/18/19/20/21/22/23
-subway surfers apk for tablet samsung galaxy tab s7/s6/s5/s4/s3/s2/s1/a7/a6/a5/a4/a3/a2/a1/e7/e6/e5/e4/e3/e2/e1/note pro/note/nook/fire hd/kindle fire/huawei mediapad/microsoft surface pro/ipad pro/ipad air/ipad mini/ipad/asus zenpad/lenovo tab/moto tab/lg g pad/acer iconia/alcatel onetouch pixi/amazon fire hd/amazon fire hd kids edition/amazon fire hd plus/amazon fire hd8 plus/amazon fire hdx/amazon fire hdx kids edition/amazon fire hdx8.9/amazon fire hdx8.9 kids edition/amazon kindle fire hd/amazon kindle fire hd kids edition/amazon kindle fire hd8.9/amazon kindle fire hd8.9 kids edition/archos cobalt/archos copper/archos diamond/archos elements/archos gamepad/archos helium/archos neon/archos oxygen/archos platinum/archos titanium/asus eee pad transformer/asus eee pad transformer prime/asus fonepad/asus memo pad/asus memo pad fhd/asus memo pad hd/asus memo pad smart/asus transformer pad/asus transformer pad infinity/asus transformer pad tf300t/asus vivotab/asus vivotab note/asus vivotab rt/asus vivotab smart/asus zenpad c/asus zenpad s/asus zenpad z/blackberry playbook/dell latitude/dell streak/dell venue/hp elite x2/hp elitebook/hp envy/hp pavilion/hp pro x2/hp slate/hp slatebook/hp spectre/hp stream/huawei ideos s7/huawei mediapad m1/huawei mediapad m2/huawei mediapad m5/huawei mediapad m6/huawei mediapad t1/huawei mediapad t2/huawei mediapad t5/huawei mediapad x1/huawei mediapad x2/kobo arc/kobo arc hd/kobo arc hd7/kobo arc hd10/lg g pad f/lg g pad ii/lg g pad iii/lg g pad iv/lg g pad x/lg optimus pad/lenovo ideapad miix/lenovo ideapad yoga/lenovo miix/lenovo phab plus/lenovo tab a10/lenovo tab a7/lenovo tab a8/lenovo tab e7/lenovo tab e8/lenovo tab e10/lenovo tab m10 fhd plus/lenovo tab m10 fhd rel/lenovo tab m7/lenovo tab m8 fhd/lenovo tab m8 hd/lenovo tab

-

Here is how you can do that:

-
    -
  • Access the security settings on your device. Depending on your device model and Android version, you can access the security settings on your device in different ways. One of the common ways is to go to Settings > Security > Unknown Sources. Another way is to go to Settings > Apps > Special Access > Install Unknown Apps. You can also search for "unknown sources" or "install unknown apps" in the settings search bar.
  • -
• Enable unknown sources or allow installation from other sources. Once you find the option for unknown sources or install unknown apps, you need to enable it or allow it for the browser or app that you will use to download Subway Surfers APK 2023. For example, if you are using Chrome, you need to enable unknown sources or allow installation from Chrome. You might see a warning message that installing apps from unknown sources could harm your device or data. You need to accept the risk and proceed with the installation (a way to check the toggle's state from a computer is sketched after this list).
  • -
  • Disable unknown sources after installing Subway Surfers APK. After you have downloaded and installed Subway Surfers APK 2023 on your device, you should disable unknown sources or revoke the permission for installation from other sources. This is to prevent any unwanted or malicious apps from installing on your device without your knowledge or consent. You can follow the same steps as above but toggle off the option for unknown sources or install unknown apps.
  • -
-

By enabling unknown sources on your device, you can download Subway Surfers APK 2023 from a trusted website and enjoy the latest version of the game.
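If you want to double-check the toggle from a computer, Android 4.2 through 7.x exposed it as a readable settings key over adb; on Android 8 and newer the permission is granted per app, so the key may come back as `null` there. A small sketch, with that version caveat as the main assumption:

```python
import subprocess

# Reads the legacy "unknown sources" toggle (Android 4.2-7.x).
out = subprocess.run(
    ["adb", "shell", "settings", "get", "global", "install_non_market_apps"],
    capture_output=True, text=True,
).stdout.strip()
states = {"1": "unknown sources: ON", "0": "unknown sources: OFF"}
print(states.get(out, f"not available on this Android version ({out!r})"))
```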

-

Step 3: Download and Install Subway Surfers APK on Your Device

-

The final step to download Subway Surfers APK 2023 is to download and install the APK file on your device. This is similar to downloading and installing any other app from Google Play, but with some minor differences. Here is how you can do that:

-
    -
  • Download Subway Surfers APK from a trusted website. Once you have enabled unknown sources on your device, you can go to the website that offers Subway Surfers APK 2023 and click on the download button. You might see a pop-up window asking you to confirm the download or choose a location for saving the file. You can choose a convenient location for the file, such as your downloads folder or your SD card.
  • -
  • Locate and open the downloaded APK file on your device. After the download is complete, you need to locate and open the downloaded APK file on your device. You can use a file manager app or a browser app to find the file. You might see a notification that shows the progress and status of the download. You can tap on the notification to open the file directly. Alternatively, you can go to the location where you saved the file and tap on it to open it.
  • -
  • Install Subway Surfers APK and grant necessary permissions. When you open the downloaded APK file, you will see a screen that shows the details and information of Subway Surfers APK 2023, such as its name, icon, version number, file size, and permissions. You will also see an option to install or cancel the installation. You need to tap on the install button to start the installation process. You might see a prompt asking you to grant certain permissions or access to Subway Surfers APK 2023, such as storage, network, location, etc. You need to accept these permissions or access for Subway Surfers APK 2023 to function properly.
  • -
-

By downloading and installing Subway Surfers APK 2023 on your device, you can launch and play Subway Surfers with all the new features, locations, and characters.

-

How to Play Subway Surfers on Your Device

-

Now that you have downloaded and installed Subway Surfers APK 2023 on your device, you are ready to play Subway Surfers on your device. Playing Subway Surfers is easy and fun, but it also requires some skills and strategies. Here are some steps to help you play Subway Surfers on your device:

-

Step 1: Launch Subway Surfers from Your App Drawer or Home Screen

-
    -
  • Find and open Subway Surfers on your device. After installing Subway Surfers APK 2023, you will see a new icon for Subway Surfers on your app drawer or home screen. You can tap on the icon to launch Subway Surfers on your device. You will see a loading screen that shows the logo and name of Subway Surfers, followed by a splash screen that shows the current location and theme of Subway Surfers.
  • -
  • Adjust the settings and preferences of Subway Surfers. Before you start playing Subway Surfers, you might want to adjust some of the settings and preferences of Subway Surfers to suit your needs and preferences. You can access the settings menu by tapping on the gear icon on the top right corner of the screen. You can change various options, such as sound, music, language, graphics, notifications, etc. You can also enable or disable some features, such as cloud save, daily challenges, leaderboard, etc.
  • -
  • Sign in with your Google Play account or Facebook account. If you want to save your progress and achievements in Subway Surfers, you need to sign in with your Google Play account or Facebook account. You can do this by tapping on the Google Play icon or the Facebook icon on the bottom left corner of the screen. You will see a pop-up window that asks you to sign in with your account details. You can also skip this step if you don't want to sign in or create an account.
  • -
-

By launching Subway Surfers from your app drawer or home screen, you can access and customize Subway Surfers on your device.

-

Step 2: Learn the Basics of Subway Surfers Gameplay

-
    -
  • Control your character and dodge obstacles. The main objective of Subway Surfers is to run away from the inspector and his dog while dodging trains, barriers, signs, and other obstacles. You can control your character by swiping left or right on the screen to move sideways, swiping up to jump over obstacles, swiping down to roll under obstacles, and double-tapping to activate power-ups. You can also tilt your device to collect coins and items on the sides of the tracks.
  • -
  • Collect coins and power-ups. As you run along the tracks, you will see coins and power-ups that you can collect by running over them or jumping to reach them. Coins are used to buy new characters, outfits, hoverboards, jetpacks, and other items in the shop. Power-ups are used to enhance your gameplay and give you an edge over the inspector and his dog. Some of the power-ups are jetpacks, magnets, score multipliers, coin multipliers, sneakers, etc.
  • -
  • Complete missions and challenges. To make Subway Surfers more fun and rewarding, you can complete missions and challenges that are given to you throughout the game. Missions are tasks that require you to perform certain actions or achieve certain goals in a single run or multiple runs. For example, a mission might ask you to collect a specific number of coins, power-ups, letters, or tokens in a single run or multiple runs. Challenges are tasks that require you to compete with other players or yourself in a limited time period. For example, a challenge might ask you to beat a certain score or distance in a single run or multiple runs within a day or a week. Completing missions and challenges will give you rewards such as coins, keys, items, trophies, etc.
  • -
-

By learning the basics of Subway Surfers gameplay, you can enjoy running away from the inspector and his dog while collecting coins and power-ups.

-

Step 3: Explore Different Locations and Characters in Subway Surfers

-
    -
  • Unlock new locations and themes in Subway Surfers. One of the most exciting features of Subway Surfers is that it changes its location and theme every few weeks or months. This means that you can explore different cities and countries around the world with different sceneries, cultures, and landmarks. Some of the locations and themes that have been featured in Subway Surfers are New York City, Paris, Tokyo, Rio de Janeiro, London, Cairo, Beijing, etc. You can unlock new locations and themes by updating Subway Surfers APK 2023 regularly or by collecting special tokens or items that are related to the current location and theme. For example, you might need to collect Eiffel Towers in Paris, Samba Feathers in Rio de Janeiro, or Lanterns in Beijing. Unlocking new locations and themes will give you a fresh and diverse gaming experience.
  • -
  • Unlock new characters and outfits in Subway Surfers. Another exciting feature of Subway Surfers is that it has a variety of characters and outfits that you can choose from. Each character has a unique personality, style, and backstory. Some of the characters are Jake, Tricky, Fresh, Spike, Yutani, Frank, Frizzy, King, Lucy, Ninja, Tasha, Zoe, etc. You can unlock new characters by buying them with coins or keys in the shop or by collecting special items or tokens that are related to the character. For example, you might need to collect Guitars for Fresh, Spray Cans for Tricky, or Boomboxes for Spike. Unlocking new characters will give you a chance to play as different personas and express yourself.
  • -
  • Use hoverboards and jetpacks in Subway Surfers. One of the most fun features of Subway Surfers is that it has hoverboards and jetpacks that you can use to enhance your gameplay. Hoverboards are devices that allow you to glide over the tracks and avoid obstacles. Jetpacks are devices that allow you to fly over the tracks and collect coins and power-ups. You can use hoverboards and jetpacks by tapping on the screen when they appear or by activating them from your inventory. You can unlock new hoverboards and jetpacks by buying them with coins or keys in the shop or by collecting special items or tokens that are related to the hoverboard or jetpack. For example, you might need to collect Hot Rods for the Hot Rod hoverboard, Rockets for the Rocket jetpack, or Gears for the Daredevil hoverboard. Using hoverboards and jetpacks will give you a thrilling and exhilarating gaming experience.
  • -
-

By exploring different locations and characters in Subway Surfers, you can discover new aspects and dimensions of Subway Surfers.

-

Conclusion

-

Subway Surfers is a game that will keep you entertained and engaged for hours. It is a game that combines arcade action, endless running, and colorful graphics. It is a game that lets you travel around the world, meet new characters, and use cool gadgets. It is a game that challenges your skills, reflexes, and strategies.

-

But if you want to enjoy Subway Surfers to the fullest, you need to download Subway Surfers APK 2023. By downloading Subway Surfers APK 2023, you can get the latest version of Subway Surfers with all the new features, locations, and characters. You can also customize your game experience by modifying some of the settings and options.

-

However, downloading Subway Surfers APK 2023 is not as easy as downloading any other app from Google Play. You need to find a reliable source for Subway Surfers APK 2023, enable unknown sources on your device, and install Subway Surfers APK 2023 correctly. You also need to be careful about fake or malicious APK files that could harm your device or data.

-

That's why we have created this guide to help you download Subway Surfers APK 2023 safely and easily. We have shown you how to find a trustworthy source for Subway Surfers APK 2023, how to enable unknown sources on your device, how to download and install Subway Surfers APK 2023 on your device, and how to play Subway Surfers on your device. We have also answered some of the frequently asked questions about Subway Surfers APK 2023.

-

So what are you waiting for? Download Subway Surfers APK 2023 today and enjoy the ultimate arcade game on your device!

-

Frequently Asked Questions

-

Here are some of the frequently asked questions about Subway Surfers APK 2023:

-

Q: Is Subway Surfers APK 2023 safe?

-

A: Yes, Subway Surfers APK 2023 is safe if you download it from a reliable source and follow the steps in this guide. However, you should always be careful about fake or malicious APK files that could harm your device or data.

-

Q: Is Subway Surfers APK 2023 free?

-

A: Yes, Subway Surfers APK 2023 is free to download and play. However, some of the features and items in the game might require coins or keys that you can earn by playing the game or buy with real money.

Q: How do I update Subway Surfers APK 2023?

-

A: You can update Subway Surfers APK 2023 by downloading and installing the latest version of Subway Surfers APK from the same source that you used before. You can also check the website for any news or announcements about new updates or features.

-

Q: What are the differences between Subway Surfers APK 2023 and Subway Surfers on Google Play?

-

A: The main difference between Subway Surfers APK 2023 and Subway Surfers on Google Play is that Subway Surfers APK 2023 is not verified by Google Play and might have some features or options that are not available in Subway Surfers on Google Play. For example, Subway Surfers APK 2023 might have new locations and characters that are not yet released in Subway Surfers on Google Play. However, both versions of Subway Surfers have the same gameplay and graphics.

-

Q: Can I play Subway Surfers APK 2023 with my friends?

-

A: Yes, you can play Subway Surfers APK 2023 with your friends by connecting your game with your Facebook account. You can see your friends' scores and achievements on the leaderboard and challenge them to beat your score. You can also send and receive gifts and messages from your friends in the game.

-

I hope you enjoyed this article and learned how to download Subway Surfers APK 2023. If you have any questions or feedback, please leave a comment below. Thank you for reading!

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Amnesia A Machine for Pigs [PATCH 2.0.1.4][GOG] - How to Get Infinite Gems in the Horror Game.md b/spaces/contluForse/HuggingGPT/assets/Amnesia A Machine for Pigs [PATCH 2.0.1.4][GOG] - How to Get Infinite Gems in the Horror Game.md deleted file mode 100644 index 519272d8e0c24b8e83393ceeffe098a70438ac6b..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Amnesia A Machine for Pigs [PATCH 2.0.1.4][GOG] - How to Get Infinite Gems in the Horror Game.md +++ /dev/null @@ -1,6 +0,0 @@ -

Amnesia: A Machine for Pigs [PATCH 2.0.1.4][GOG] unlimited gems


Download Zip ✏ ✏ ✏ https://ssurll.com/2uzxhT



- - aaccfb2cb3
-
-
-

diff --git a/spaces/course-demos/generate-tone/app.py b/spaces/course-demos/generate-tone/app.py
deleted file mode 100644
index 10076174a4bf87c040763fcbdd066fa704cf548b..0000000000000000000000000000000000000000
--- a/spaces/course-demos/generate-tone/app.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import numpy as np
-import gradio as gr
-
-notes = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
-
-def generate_tone(note, octave, duration):
-    sr = 48000
-    a4_freq, tones_from_a4 = 440, 12 * (octave - 4) + (note - 9)
-    frequency = a4_freq * 2 ** (tones_from_a4 / 12)
-    duration = int(duration)
-    audio = np.linspace(0, duration, duration * sr)
-    audio = (20000 * np.sin(audio * (2 * np.pi * frequency))).astype(np.int16)
-    return (sr, audio)
-
-gr.Interface(
-    generate_tone,
-    [
-        gr.Dropdown(notes, type="index"),
-        gr.Slider(minimum=4, maximum=6, step=1),
-        gr.Textbox(type="number", value=1, label="Duration in seconds"),
-    ],
-    "audio",
-    css=".footer{display:none !important}",
-).launch()
\ No newline at end of file
diff --git a/spaces/cruxx/ssyoutube/Dockerfile b/spaces/cruxx/ssyoutube/Dockerfile
deleted file mode 100644
index 88c41ce22312ba1e3d46d06a535d8f0544adce94..0000000000000000000000000000000000000000
--- a/spaces/cruxx/ssyoutube/Dockerfile
+++ /dev/null
@@ -1,20 +0,0 @@
-FROM node:latest
-
-RUN apt-get update && apt-get install -y \
-    chromium \
-    libnss3-dev \
-    && rm -rf /var/lib/apt/lists/*
-
-ENV CHROME_BIN=/usr/bin/chromium
-
-WORKDIR /app
-
-COPY package*.json ./
-
-RUN npm install
-
-COPY . .
-
-EXPOSE 7860
-
-CMD ["node", "index.js"]
\ No newline at end of file
diff --git a/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/latex/attention/introduction.tex b/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/latex/attention/introduction.tex
deleted file mode 100644
index 1baa8915f4cf7aec2520894a87470fc9436d954b..0000000000000000000000000000000000000000
--- a/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/latex/attention/introduction.tex
+++ /dev/null
@@ -1,18 +0,0 @@
-Recurrent neural networks, long short-term memory \citep{hochreiter1997} and gated recurrent \citep{gruEval14} neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation \citep{sutskever14, bahdanau2014neural, cho2014learning}. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures \citep{wu2016google,luong2015effective,jozefowicz2016exploring}.
-
-Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples.
-%\marginpar{not sure if the memory constraints are understandable here}
-Recent work has achieved significant improvements in computational efficiency through factorization tricks \citep{Kuchaiev2017Factorization} and conditional computation \citep{shazeer2017outrageously}, while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains.
-
-%\marginpar{@all: there is work on analyzing what attention really does in seq2seq models, couldn't find it right away}
-
-Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences \citep{bahdanau2014neural, structuredAttentionNetworks}. In all but a few cases \citep{decomposableAttnModel}, however, such attention mechanisms are used in conjunction with a recurrent network.
-
-%\marginpar{not sure if "cross-positional communication" is understandable without explanation}
-%\marginpar{insert exact training times and stats for the model that reaches sota earliest, maybe even a single GPU model?}
-
-In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.
-%\marginpar{you removed the constant number of repetitions part. I wrote it because I wanted to make it clear that the model does not only perform attention once, while it's also not recurrent. I thought that might be important to get across early.}
-
-% Just a standard paragraph with citations, rewrite.
-%After the seminal papers of \citep{sutskever14}, \citep{bahdanau2014neural}, and \citep{cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation and language modeling with recurrent sequence models. Recent effort \citep{shazeer2017outrageously} has combined the power of conditional computation with sequence models to train very large models for machine translation, pushing SOTA at lower computational cost. Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state encumbers recurrnet models to process multiple inputs at once, and their time complexity is a linear function of the length of the input and output, both during training and inference. [What I want to say here is that although this is fine during decoding, at training time, we are given both input and output and this linear nature does not allow the RNN to process all inputs and outputs simultaneously and haven't been used on datasets that are the of the scale of the web. What's the largest dataset we have ? . Talk about Nividia and possibly other's effors to speed up things, and possibly other efforts that alleviate this, but are still limited by it's comptuational nature]. Rest of the intro: What if you could construct the state based on the actual inputs and outputs, then you could construct them all at once. This has been the foundation of many promising recent efforts, bytenet,facenet (Also talk about quasi rnn here). Now we talk about attention!! Along with cell architectures such as long short-term meory (LSTM) \citep{hochreiter1997}, and gated recurrent units (GRUs) \citep{cho2014learning}, attention has emerged as an essential ingredient in successful sequence models, in particular for machine translation. In recent years, many, if not all, state-of-the-art (SOTA) results in machine translation have been achieved with attention-based sequence models \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. Talk about the neon work on how it played with attention to do self attention! Then talk about what we do.
\ No newline at end of file
diff --git a/spaces/davda54/chat-nort5/keyword_generation_nort5_small/modeling_nort5.py b/spaces/davda54/chat-nort5/keyword_generation_nort5_small/modeling_nort5.py
deleted file mode 100644
index e359e5aa7dbcee0c041794c06257697a2c3d3200..0000000000000000000000000000000000000000
--- a/spaces/davda54/chat-nort5/keyword_generation_nort5_small/modeling_nort5.py
+++ /dev/null
@@ -1,709 +0,0 @@
-import math
-from typing import List, Optional, Tuple, Union
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from transformers.pytorch_utils import softmax_backward_data
-from torch.utils import checkpoint
-
-from configuration_nort5 import NorT5Config
-from transformers.modeling_utils import PreTrainedModel
-from transformers.activations import gelu_new
-from transformers.modeling_outputs import (
-    Seq2SeqModelOutput, Seq2SeqLMOutput, BaseModelOutput, BaseModelOutputWithPastAndCrossAttentions
-)
-
-
-class Encoder(nn.Module):
-    def __init__(self, config, activation_checkpointing=False):
-        super().__init__()
-        self.main_input_name = "input_ids"
-
-        self.relative_embedding = RelativeEmbedding(config)
-        self.layers = nn.ModuleList([EncoderLayer(config) for _ in range(config.num_hidden_layers)])
-
-        for i, layer in enumerate(self.layers):
-            layer.mlp.mlp[1].weight.data *= math.sqrt(1.0 / (2.0 * (1 + i)))
-            layer.mlp.mlp[-2].weight.data *= math.sqrt(1.0 / (2.0 * (1 + i)))
-
-        self.activation_checkpointing = activation_checkpointing
-
-    def forward(self, hidden_states, attention_mask):
-        relative_embedding = self.relative_embedding()
-        hidden_states, attention_probs = [hidden_states], []
-
-        for layer in self.layers:
-            if self.activation_checkpointing:
-                hidden_state, attention_p = checkpoint.checkpoint(layer, hidden_states[-1], attention_mask, relative_embedding)
-            else:
-                hidden_state, attention_p = layer(hidden_states[-1], attention_mask, relative_embedding)
-
-            hidden_states.append(hidden_state)
-            attention_probs.append(attention_p)
-
-        return hidden_states, attention_probs
-
-
-class Decoder(nn.Module):
-    def __init__(self, config, activation_checkpointing=False):
-        super().__init__()
-        self.self_relative_embedding = RelativeEmbedding(config)
-        self.cross_relative_embedding = RelativeEmbedding(config)
-        self.layers = nn.ModuleList([DecoderLayer(config) for _ in range(config.num_hidden_layers)])
-
-        for i, layer in enumerate(self.layers):
-            layer.mlp.mlp[1].weight.data *= math.sqrt(1.0 / (2.0 * (1 + i)))
-            layer.mlp.mlp[-2].weight.data *= math.sqrt(1.0 / (2.0 * (1 + i)))
-
-        self.activation_checkpointing = activation_checkpointing
-
-    def forward(self, x, encoder_output, encoder_padding_mask, past_key_values=None):
-        self_relative_embedding = self.self_relative_embedding()
-        cross_relative_embedding = self.cross_relative_embedding()
-
-        if past_key_values is None:
-            autoreg_mask = torch.triu(
-                torch.full((x.size(0), x.size(0)), True, device=x.device),
-                diagonal=1
-            )
-        else:
-            autoreg_mask = None
-
-        # initialize past_key_values with `None` if past does not exist
-        if past_key_values is None:
-            past_key_values = [None] * len(self.layers)
-
-        hidden_states, self_attention_probs, cross_attention_probs, key_value_states = [x], [], [], []
-
-        for layer, past_key_value in zip(self.layers, past_key_values):
-            if self.activation_checkpointing:
-                hidden_state, self_attention_p, cross_attention_p, key_value_state = checkpoint.checkpoint(layer, hidden_states[-1], autoreg_mask, encoder_output, encoder_padding_mask, self_relative_embedding, cross_relative_embedding, past_key_value=None)
-            else:
-                hidden_state, self_attention_p, cross_attention_p, key_value_state = layer(hidden_states[-1], autoreg_mask, encoder_output, encoder_padding_mask, self_relative_embedding, cross_relative_embedding, past_key_value=past_key_value)
-
-            hidden_states.append(hidden_state)
-            self_attention_probs.append(self_attention_p)
-            cross_attention_probs.append(cross_attention_p)
-            key_value_states.append(key_value_state)
-
-        return hidden_states, self_attention_probs, cross_attention_probs, key_value_states
-
-
-class MaskClassifier(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        self.nonlinearity = nn.Sequential(
-            nn.LayerNorm(config.hidden_size, config.layer_norm_eps, elementwise_affine=False),
-            nn.Dropout(config.hidden_dropout_prob),
-            nn.Linear(config.hidden_size, config.vocab_size)
-        )
-        self.initialize(config.hidden_size)
-
-    def initialize(self, hidden_size):
-        std = math.sqrt(2.0 / (5.0 * hidden_size))
-        nn.init.trunc_normal_(self.nonlinearity[-1].weight, mean=0.0, std=std, a=-2*std, b=2*std)
-        self.nonlinearity[-1].bias.data.zero_()
-
-    def forward(self, x):
-        x = self.nonlinearity(x)
-        return x
-
-
-class EncoderLayer(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        self.attention = Attention(config, is_cross_attention=False)
-        self.mlp = FeedForward(config)
-
-    def forward(self, x, padding_mask, relative_embedding):
-        attention_output, attention_probs, _ = self.attention(x, x, padding_mask, relative_embedding)
-        x = x + attention_output
-        x = x + self.mlp(x)
-        return x, attention_probs
-
-
-class DecoderLayer(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        self.self_attention = Attention(config, is_cross_attention=False)
-        self.cross_attention = Attention(config, is_cross_attention=True)
-        self.mlp = FeedForward(config)
-
-    def forward(self, x, autoreg_mask, encoder_output, encoder_padding_mask, self_relative_embedding, cross_relative_embedding, past_key_value=None):
-        query_offset = 0
-        if past_key_value is not None:
-            self_attn_past_key_value = past_key_value[:2]
-            cross_attn_past_key_value = past_key_value[2:]
-            query_offset = self_attn_past_key_value[0].size(2)
-        else:
-            self_attn_past_key_value, cross_attn_past_key_value = None, None
-
-        x_, self_attention_probs, self_key_value_state = self.self_attention(x, x, autoreg_mask, self_relative_embedding, past_key_value=self_attn_past_key_value, query_offset=query_offset)
-        x = x + x_
-        x_, cross_attention_probs, cross_key_value_state = self.cross_attention(x, encoder_output, encoder_padding_mask, cross_relative_embedding, past_key_value=cross_attn_past_key_value, query_offset=query_offset)
-        x = x + x_
-        x = x + self.mlp(x)
-
-        return x, self_attention_probs, cross_attention_probs, self_key_value_state + cross_key_value_state
-
-
-class GeGLU(nn.Module):
-    def forward(self, x):
-        x, gate = x.chunk(2, dim=-1)
-        x = x * gelu_new(gate)
-        return x
-
-
-class FeedForward(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        self.mlp = nn.Sequential(
-            nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps, elementwise_affine=False),
-            nn.Linear(config.hidden_size, 2*config.intermediate_size, bias=False),
-            GeGLU(),
-            nn.LayerNorm(config.intermediate_size, eps=config.layer_norm_eps, elementwise_affine=False),
-            nn.Linear(config.intermediate_size, config.hidden_size, bias=False),
-            nn.Dropout(config.hidden_dropout_prob)
-        )
-        self.initialize(config.hidden_size)
-
-    def initialize(self, hidden_size):
-        std = math.sqrt(2.0 / (5.0 * hidden_size))
-        nn.init.trunc_normal_(self.mlp[1].weight, mean=0.0, std=std, a=-2*std, b=2*std)
-        nn.init.trunc_normal_(self.mlp[-2].weight, mean=0.0, std=std, a=-2*std, b=2*std)
-
-    def forward(self, x):
-        return self.mlp(x)
-
-
-class MaskedSoftmax(torch.autograd.Function):
-    @staticmethod
-    def forward(self, x, mask, dim):
-        self.dim = dim
-        if mask is not None:
-            x.masked_fill_(mask, float('-inf'))
-        x = torch.softmax(x, self.dim)
-        if mask is not None:
-            x.masked_fill_(mask, 0.0)
-        self.save_for_backward(x)
-        return x
-
-    @staticmethod
-    def backward(self, grad_output):
-        output, = self.saved_tensors
-        input_grad = softmax_backward_data(self, grad_output, output, self.dim, output)
-        return input_grad, None, None
-
-
-class Attention(nn.Module):
-    def __init__(self, config, is_cross_attention=False):
-        super().__init__()
-
-        self.config = config
-        self.is_cross_attention = is_cross_attention
-
-        if config.hidden_size % config.num_attention_heads != 0:
-            raise ValueError(f"The hidden size {config.hidden_size} is not a multiple of the number of attention heads {config.num_attention_heads}")
-
-        self.hidden_size = config.hidden_size
-        self.num_heads = config.num_attention_heads
-        self.head_size = config.hidden_size // config.num_attention_heads
-
-        self.in_proj_q = nn.Linear(config.hidden_size, config.hidden_size, bias=True)
-        self.in_proj_k = nn.Linear(config.hidden_size, config.hidden_size, bias=True)
-        self.in_proj_v = nn.Linear(config.hidden_size, config.hidden_size, bias=True)
-        self.out_proj = nn.Linear(config.hidden_size, config.hidden_size, bias=True)
-
-        self.pre_layer_norm = nn.LayerNorm(config.hidden_size, config.layer_norm_eps, elementwise_affine=False)
-        self.post_layer_norm = nn.LayerNorm(config.hidden_size, config.layer_norm_eps, elementwise_affine=True)
-
-        position_indices = torch.arange(512, dtype=torch.long).unsqueeze(1) \
-            - torch.arange(512, dtype=torch.long).unsqueeze(0)
-        position_indices = self.make_log_bucket_position(position_indices, config.position_bucket_size, 512)
-        position_indices = config.position_bucket_size - 1 + position_indices
-        self.register_buffer("position_indices", position_indices, persistent=True)
-
-        self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
-        self.scale = 1.0 / math.sqrt(3 * self.head_size)
-        self.initialize()
-
-    def make_log_bucket_position(self, relative_pos, bucket_size, max_position):
-        sign = torch.sign(relative_pos)
-        mid = bucket_size // 2
-        abs_pos = torch.where((relative_pos < mid) & (relative_pos > -mid), mid - 1, torch.abs(relative_pos).clamp(max=max_position - 1))
-        log_pos = torch.ceil(torch.log(abs_pos / mid) / math.log((max_position-1) / mid) * (mid - 1)).int() + mid
-        bucket_pos = torch.where(abs_pos <= mid, relative_pos, log_pos * sign).long()
-        return bucket_pos
-
-    def initialize(self):
-        std = math.sqrt(2.0 / (5.0 * self.hidden_size))
-        nn.init.trunc_normal_(self.in_proj_q.weight, mean=0.0, std=std, a=-2*std, b=2*std)
-        nn.init.trunc_normal_(self.in_proj_k.weight, mean=0.0, std=std, a=-2*std, b=2*std)
-        nn.init.trunc_normal_(self.in_proj_v.weight, mean=0.0, std=std, a=-2*std, b=2*std)
-        nn.init.trunc_normal_(self.out_proj.weight, mean=0.0, std=std, a=-2*std, b=2*std)
-        self.in_proj_q.bias.data.zero_()
-        self.in_proj_k.bias.data.zero_()
-        self.in_proj_v.bias.data.zero_()
-        self.out_proj.bias.data.zero_()
-
-    def forward(self, q, kv, attention_mask, relative_embedding, past_key_value=None, query_offset=0):
-        key_len, batch_size, _ = kv.size()
-        query_len, _, _ = q.size()
-
-        if not self.is_cross_attention or past_key_value is None or past_key_value[0].size(1) != kv.size(0):
-            kv = self.pre_layer_norm(kv)
-            key = self.in_proj_k(kv)  # shape: [T, B, D]
-            value = self.in_proj_v(kv)  # shape: [T, B, D]
-            key = key.reshape(key_len, batch_size * self.num_heads, self.head_size).transpose(0, 1)  # shape: [BxH, T, D]
-            value = value.view(key_len, batch_size * self.num_heads, self.head_size).transpose(0, 1)  # shape: [BxH, T, D]
-
-        if past_key_value is not None:
-            if not self.is_cross_attention:
-                key = torch.cat([past_key_value[0].flatten(0, 1), key], dim=1)
-                value = torch.cat([past_key_value[1].flatten(0, 1), value], dim=1)
-                key_len = key.size(1)
-            elif past_key_value[0].size(1) == kv.size(0):
-                key = past_key_value[0].flatten(0, 1)
-                value = past_key_value[1].flatten(0, 1)
-
-        if self.position_indices.size(0) < max(query_len, key_len):
-            position_indices = torch.arange(max(query_len, key_len), dtype=torch.long).unsqueeze(1) \
-                - torch.arange(max(query_len, key_len), dtype=torch.long).unsqueeze(0)
-            position_indices = self.make_log_bucket_position(position_indices, self.config.position_bucket_size, 512)
-            position_indices = self.config.position_bucket_size - 1 + position_indices
-            self.register_buffer("position_indices", position_indices.to(q.device), persistent=True)
-
-        q = self.pre_layer_norm(q)
-        query = self.in_proj_q(q)  # shape: [T, B, D]
-        query = query.reshape(query_len, batch_size * self.num_heads, self.head_size).transpose(0, 1)
-
-        attention_scores = torch.bmm(query, key.transpose(1, 2) * self.scale)
-
-        query_pos = self.in_proj_q(self.dropout(relative_embedding))  # shape: [2T-1, D]
-        query_pos = query_pos.view(-1, self.num_heads, self.head_size)  # shape: [2T-1, H, D]
-        key_pos = self.in_proj_k(self.dropout(relative_embedding))  # shape: [2T-1, D]
-        key_pos = key_pos.view(-1, self.num_heads, self.head_size)  # shape: [2T-1, H, D]
-
-        query_ = query.view(batch_size, self.num_heads, query_len, self.head_size)
-        key_ = key.view(batch_size, self.num_heads, key_len, self.head_size)
-
-        attention_c_p = torch.einsum("bhqd,khd->bhqk", query_, key_pos.squeeze(1) * self.scale)
-        attention_p_c = torch.einsum("bhkd,qhd->bhqk", key_ * self.scale, query_pos.squeeze(1))
-        position_indices = self.position_indices[query_offset:query_offset+query_len, :key_len].expand(batch_size, self.num_heads, -1, -1)
-        attention_c_p = attention_c_p.gather(3, position_indices)
-        attention_p_c = attention_p_c.gather(2, position_indices)
-
-        attention_scores = attention_scores.view(batch_size, self.num_heads, query_len, key_len)
-        attention_scores.add_(attention_c_p)
-        attention_scores.add_(attention_p_c)
-
-        attention_probs = MaskedSoftmax.apply(attention_scores, attention_mask, -1)
-
-        attention_probs = self.dropout(attention_probs)
-        context = torch.bmm(attention_probs.flatten(0, 1), value)  # shape: [B*H, Q, D]
-        context = context.transpose(0, 1).reshape(context.size(1), -1, self.hidden_size)  # shape: [Q, B, H*D]
-        context = self.out_proj(context)
-        context = self.post_layer_norm(context)
-        context = self.dropout(context)
-
-        key = key.detach().unflatten(0, (-1, self.num_heads))
-        value = value.detach().unflatten(0, (-1, self.num_heads))
-
-        return context, attention_probs.detach(), (key, value)
-
-
-class WordEmbedding(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        self.hidden_size = config.hidden_size
-
-        self.word_embedding = nn.Embedding(config.vocab_size, config.hidden_size)
-        self.word_layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps, elementwise_affine=False)
-        self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
-        self.initialize()
-
-    def initialize(self):
-        std = math.sqrt(2.0 / (5.0 * self.hidden_size))
-        nn.init.trunc_normal_(self.word_embedding.weight, mean=0.0, std=std, a=-2*std, b=2*std)
-
-    def forward(self, input_ids):
-        return self.dropout(self.word_layer_norm(self.word_embedding(input_ids)))
-
-
-class RelativeEmbedding(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        self.relative_embedding = nn.Parameter(torch.empty(2 * config.position_bucket_size - 1, config.hidden_size))
-        self.relative_layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
-
-        self.initialize(config.hidden_size)
-
-    def initialize(self, hidden_size):
-        std = math.sqrt(2.0 / (5.0 * hidden_size))
-        nn.init.trunc_normal_(self.relative_embedding, mean=0.0, std=std, a=-2*std, b=2*std)
-
-    def forward(self):
-        return self.relative_layer_norm(self.relative_embedding)
-
-
-#
-# HuggingFace wrappers
-#
-
-class NorT5PreTrainedModel(PreTrainedModel):
-    config_class = NorT5Config
-    base_model_prefix = "norT5"
-    supports_gradient_checkpointing = True
-
-    def _set_gradient_checkpointing(self, module, value=False):
-        if isinstance(module, Encoder):
-            module.activation_checkpointing = value
-
-    def _init_weights(self, module):
-        pass  # everything is already initialized
-
-
-class NorT5Model(NorT5PreTrainedModel):
-    def __init__(self, config, add_lm_layer=False, add_decoder=True):
-        super().__init__(config)
-        self.config = config
-
-        self.cls_token_id = config.cls_token_id
-        self.sep_token_id = config.sep_token_id
-        self.bos_token_id = config.bos_token_id
-        self.eos_token_id = config.eos_token_id
-        self.pad_token_id = config.pad_token_id
-
-        self.embedding = WordEmbedding(config)
-        self.encoder = Encoder(config, activation_checkpointing=False)
-        self.decoder = Decoder(config, activation_checkpointing=False) if add_decoder else None
-        self.classifier = MaskClassifier(config) if add_lm_layer else None
-
-    def get_input_embeddings(self):
-        return self.embedding.word_embedding
-
-    def set_input_embeddings(self, value):
-        self.embedding.word_embedding = value
-
-    def get_encoder(self):
-        class EncoderWrapper:
-            def __call__(cls, *args, **kwargs):
-                return cls.forward(*args, **kwargs)
-            def forward(
-                cls,
-                input_ids: Optional[torch.Tensor] = None,
-                attention_mask: Optional[torch.Tensor] = None,
-                output_hidden_states: Optional[bool] = None,
-                output_attentions: Optional[bool] = None,
-                return_dict: Optional[bool] = None,
-            ):
-                return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-                return self.get_encoder_output(
-                    input_ids, attention_mask, output_hidden_states, output_attentions, return_dict=return_dict
-                )
-        return EncoderWrapper()
-
-    def get_decoder(self):
-        return self.get_decoder_output
-
-    def set_decoder_special_tokens(self, target_id):
-        target_id.masked_fill_(target_id == self.cls_token_id, self.bos_token_id)
-        target_id.masked_fill_(target_id == self.sep_token_id, self.eos_token_id)
-        return target_id
-
-    def _shift_right(self, input_ids):
-        shifted_input_ids = input_ids.new_zeros(input_ids.shape)
-        shifted_input_ids[..., 1:] = input_ids[..., :-1].clone()
-        shifted_input_ids[..., 0] = self.bos_token_id
-        shifted_input_ids.masked_fill_(shifted_input_ids == -100, self.pad_token_id)
-
-        return shifted_input_ids
-
-    def get_encoder_output(
-        self,
-        input_ids: torch.Tensor = None,
-        attention_mask: Optional[torch.Tensor] = None,
-        output_hidden_states: Optional[bool] = None,
-        output_attentions: Optional[bool] = None,
-        return_dict = False
-    ):
-        if input_ids is not None:
-            input_shape = input_ids.size()
-        else:
-            raise ValueError("You have to specify input_ids")
-
-        batch_size, seq_length = input_shape
-        device = input_ids.device
-
-        if attention_mask is None:
-            attention_mask = torch.zeros(batch_size, seq_length, dtype=torch.bool, device=device)
-        else:
-            attention_mask = ~attention_mask.bool()
-        attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
-
-        static_embeddings = self.embedding(input_ids.t())
-        contextualized_embeddings, attention_probs = self.encoder(static_embeddings, attention_mask)
-        contextualized_embeddings = [e.transpose(0, 1) for e in contextualized_embeddings]
-        last_layer = contextualized_embeddings[-1]
-        contextualized_embeddings = [contextualized_embeddings[0]] + [
-            contextualized_embeddings[i] - contextualized_embeddings[i - 1]
-            for i in range(1, len(contextualized_embeddings))
-        ]
-
-        if not return_dict:
-            return (
-                last_layer,
-                *([contextualized_embeddings] if output_hidden_states else []),
-                *([attention_probs] if output_attentions else [])
-            )
-
-        return BaseModelOutput(
-            last_hidden_state=last_layer,
-            hidden_states=contextualized_embeddings if output_hidden_states else None,
-            attentions=attention_probs if output_attentions else None
-        )
-
-    def get_decoder_output(
-        self,
-        target_ids: torch.Tensor = None,
-        encoder_output: torch.Tensor = None,
-        attention_mask: Optional[torch.Tensor] = None,
-        past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
-        use_cache: Optional[bool] = None,
-        output_hidden_states: Optional[bool] = None,
-        output_attentions: Optional[bool] = None,
-        return_dict = False
-    ):
-        batch_size, seq_length, _ = encoder_output.shape
-        device = target_ids.device
-
-        if attention_mask is None:
-            attention_mask = torch.zeros(batch_size, seq_length, dtype=torch.bool, device=device)
-        else:
-            attention_mask = ~attention_mask.bool()
-        attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
-
-        hidden_states, self_attention_p, cross_attention_p, key_value_states = self.decoder(
-            self.embedding(target_ids.t()),
-            encoder_output.transpose(0, 1),
-            attention_mask,
-            past_key_values
-        )
-
-        hidden_states = [e.transpose(0, 1) for e in hidden_states]
-        last_layer = hidden_states[-1]
-        hidden_states = [hidden_states[0]] + [
-            hidden_states[i] - hidden_states[i - 1]
-            for i in range(1, len(hidden_states))
-        ]
-
-        if not return_dict:
-            return (
-                last_layer,
-                *([key_value_states] if use_cache else []),
-                *([hidden_states] if output_hidden_states else []),
-                *([self_attention_p] if output_attentions else []),
-                *([cross_attention_p] if output_attentions else []),
-            )
-
-        return BaseModelOutputWithPastAndCrossAttentions(
-            last_hidden_state=last_layer,
-            past_key_values=key_value_states if use_cache else None,
-            hidden_states=hidden_states if output_hidden_states else None,
-            attentions=self_attention_p if output_attentions else None,
-            cross_attentions=cross_attention_p if output_attentions else None
-        )
-
-    def forward(
-        self,
-        input_ids: Optional[torch.LongTensor] = None,
-        attention_mask: Optional[torch.FloatTensor] = None,
-        decoder_input_ids: Optional[torch.LongTensor] = None,
-        decoder_attention_mask: Optional[torch.BoolTensor] = None,
-        encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
-        past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
-        use_cache: Optional[bool] = None,
-        output_attentions: Optional[bool] = None,
-        output_hidden_states: Optional[bool] = None,
-        return_dict: Optional[bool] = None
-    ):
-
-        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
-        decoder_input_ids = self.set_decoder_special_tokens(decoder_input_ids)
-
-        if encoder_outputs is None:
-            encoder_outputs = self.get_encoder_output(
-                input_ids, attention_mask, output_hidden_states, output_attentions, return_dict
-            )
-        elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
-            encoder_outputs = BaseModelOutput(
-                last_hidden_state=encoder_outputs[0],
-                hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
-                attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
-            )
-
-        decoder_outputs = self.get_decoder_output(
-            decoder_input_ids, encoder_outputs[0], attention_mask, past_key_values, use_cache, output_hidden_states, output_attentions, return_dict
-        )
-
-        if not return_dict:
-            return decoder_outputs + encoder_outputs
-
-        return Seq2SeqModelOutput(
-            last_hidden_state=decoder_outputs.last_hidden_state,
-            past_key_values=decoder_outputs.past_key_values,
-            decoder_hidden_states=decoder_outputs.hidden_states,
-            decoder_attentions=decoder_outputs.attentions,
-            cross_attentions=decoder_outputs.cross_attentions,
-            encoder_last_hidden_state=encoder_outputs.last_hidden_state,
-            encoder_hidden_states=encoder_outputs.hidden_states,
-            encoder_attentions=encoder_outputs.attentions,
-        )
-
-
-class NorT5ForConditionalGeneration(NorT5Model):
-
-    def __init__(self, config):
-        super().__init__(config, add_lm_layer=True)
-
-    def forward(
-        self,
-        input_ids: Optional[torch.LongTensor] = None,
-        attention_mask: Optional[torch.FloatTensor] = None,
-        decoder_input_ids: Optional[torch.LongTensor] = None,
-        decoder_attention_mask: Optional[torch.BoolTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
-        decoder_head_mask: Optional[torch.FloatTensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
-        encoder_outputs: Optional[Tuple[Tuple[torch.Tensor]]] = None,
-        past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
-        inputs_embeds: Optional[torch.FloatTensor] = None,
-        decoder_inputs_embeds: Optional[torch.FloatTensor] = None,
-        labels: Optional[torch.LongTensor] = None,
-        use_cache: Optional[bool] = None,
-        output_attentions: Optional[bool] = None,
-        output_hidden_states: Optional[bool] = None,
-        return_dict: Optional[bool] = None,
-    ):
-        use_cache = use_cache if use_cache is not None else getattr(self.config, "use_cache", False)
-        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
-        if encoder_outputs is None:
-            encoder_outputs = self.get_encoder_output(
-                input_ids, attention_mask, output_hidden_states, output_attentions, return_dict
-            )
-        elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
-            encoder_outputs = BaseModelOutput(
-                last_hidden_state=encoder_outputs[0],
-                hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
-                attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
-            )
-
-        if labels is not None:
-            labels = self.set_decoder_special_tokens(labels)
-
-        if labels is not None and decoder_input_ids is None and decoder_inputs_embeds is None:
-            decoder_input_ids =
self._shift_right(labels) - elif decoder_input_ids is not None: - decoder_input_ids = self.set_decoder_special_tokens(decoder_input_ids) - - decoder_outputs = self.get_decoder_output( - decoder_input_ids, encoder_outputs[0], attention_mask, past_key_values, use_cache, output_hidden_states, output_attentions, return_dict - ) - lm_logits = self.classifier(decoder_outputs[0]) - - loss = None - if labels is not None: - labels.masked_fill_(labels == self.pad_token_id, -100) - loss_fct = nn.CrossEntropyLoss(ignore_index=-100) - loss = loss_fct(lm_logits.flatten(0, 1), labels.flatten()) - - if not return_dict: - output = (lm_logits,) + decoder_outputs[1:] + encoder_outputs - return ((loss,) + output) if loss is not None else output - - return Seq2SeqLMOutput( - loss=loss, - logits=lm_logits, - past_key_values=decoder_outputs.past_key_values, - decoder_hidden_states=decoder_outputs.hidden_states, - decoder_attentions=decoder_outputs.attentions, - cross_attentions=decoder_outputs.cross_attentions, - encoder_last_hidden_state=encoder_outputs.last_hidden_state, - encoder_hidden_states=encoder_outputs.hidden_states, - encoder_attentions=encoder_outputs.attentions, - ) - - def prepare_inputs_for_generation( - self, - input_ids, - past_key_values=None, - attention_mask=None, - head_mask=None, - decoder_head_mask=None, - cross_attn_head_mask=None, - use_cache=None, - encoder_outputs=None, - **kwargs, - ): - if past_key_values is not None: - input_ids = input_ids[:, -1:] - - return { - "decoder_input_ids": input_ids, - "past_key_values": past_key_values, - "encoder_outputs": encoder_outputs, - "attention_mask": attention_mask, - "head_mask": head_mask, - "decoder_head_mask": decoder_head_mask, - "cross_attn_head_mask": cross_attn_head_mask, - "use_cache": use_cache, - } - - def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor): - return self._shift_right(labels) - - def _reorder_cache(self, past_key_values, beam_idx): - # if decoder past is not included in output - # speedy decoding is disabled and no need to reorder - if past_key_values is None: - print("You might want to consider setting `use_cache=True` to speed up decoding") - return past_key_values - - reordered_decoder_past = () - for layer_past_states in past_key_values: - # get the correct batch idx from layer past batch dim - # batch dim of `past` is at 2nd position - reordered_layer_past_states = () - for layer_past_state in layer_past_states: - # need to set correct `past` for each of the four key / value states - layer_past_state = layer_past_state.index_select(0, beam_idx.to(layer_past_state.device)) - reordered_layer_past_states = reordered_layer_past_states + (layer_past_state,) - - assert reordered_layer_past_states[0].shape == layer_past_states[0].shape - assert len(reordered_layer_past_states) == len(layer_past_states) - - reordered_decoder_past = reordered_decoder_past + (reordered_layer_past_states,) - return reordered_decoder_past - - -class NorT5Encoder(NorT5Model): - def __init__(self, config): - super().__init__(config, add_lm_layer=False, add_decoder=True) - - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - output_hidden_states: Optional[bool] = None, - output_attentions: Optional[bool] = None, - return_dict: Optional[bool] = None, - ): - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - return self.get_encoder_output( - input_ids, attention_mask, output_hidden_states, output_attentions, 
return_dict=return_dict
-        )
diff --git a/spaces/davidwisdom/la-metro/README.md b/spaces/davidwisdom/la-metro/README.md
deleted file mode 100644
index a7ce040346668656a2acf7eda7e54ccca041e844..0000000000000000000000000000000000000000
--- a/spaces/davidwisdom/la-metro/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: La Metro
-emoji: 🗺️
-colorFrom: yellow
-colorTo: purple
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-This is a small repo I'm using to show a visualization to someone.
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/arrayTools.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/arrayTools.py
deleted file mode 100644
index 5fb01a838ae8769809b4f8ab28cb69ea5e84a3dc..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/arrayTools.py
+++ /dev/null
@@ -1,422 +0,0 @@
-"""Routines for calculating bounding boxes, point in rectangle calculations and
-so on.
-"""
-
-from fontTools.misc.roundTools import otRound
-from fontTools.misc.vector import Vector as _Vector
-import math
-import warnings
-
-
-def calcBounds(array):
-    """Calculate the bounding rectangle of a 2D points array.
-
-    Args:
-        array: A sequence of 2D tuples.
-
-    Returns:
-        A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``.
-    """
-    if not array:
-        return 0, 0, 0, 0
-    xs = [x for x, y in array]
-    ys = [y for x, y in array]
-    return min(xs), min(ys), max(xs), max(ys)
-
-
-def calcIntBounds(array, round=otRound):
-    """Calculate the integer bounding rectangle of a 2D points array.
-
-    Values are rounded to the closest integer towards ``+Infinity`` using the
-    :func:`fontTools.misc.fixedTools.otRound` function by default, unless
-    an optional ``round`` function is passed.
-
-    Args:
-        array: A sequence of 2D tuples.
-        round: A rounding function of type ``f(x: float) -> int``.
-
-    Returns:
-        A four-item tuple of integers representing the bounding rectangle:
-        ``(xMin, yMin, xMax, yMax)``.
-    """
-    return tuple(round(v) for v in calcBounds(array))
-
-
-def updateBounds(bounds, p, min=min, max=max):
-    """Add a point to a bounding rectangle.
-
-    Args:
-        bounds: A bounding rectangle expressed as a tuple
-            ``(xMin, yMin, xMax, yMax)``.
-        p: A 2D tuple representing a point.
-        min, max: Functions to compute the minimum and maximum.
-
-    Returns:
-        The updated bounding rectangle ``(xMin, yMin, xMax, yMax)``.
-    """
-    (x, y) = p
-    xMin, yMin, xMax, yMax = bounds
-    return min(xMin, x), min(yMin, y), max(xMax, x), max(yMax, y)
-
-
-def pointInRect(p, rect):
-    """Test if a point is inside a bounding rectangle.
-
-    Args:
-        p: A 2D tuple representing a point.
-        rect: A bounding rectangle expressed as a tuple
-            ``(xMin, yMin, xMax, yMax)``.
-
-    Returns:
-        ``True`` if the point is inside the rectangle, ``False`` otherwise.
-    """
-    (x, y) = p
-    xMin, yMin, xMax, yMax = rect
-    return (xMin <= x <= xMax) and (yMin <= y <= yMax)
-
-
-def pointsInRect(array, rect):
-    """Determine which points are inside a bounding rectangle.
-
-    Args:
-        array: A sequence of 2D tuples.
-        rect: A bounding rectangle expressed as a tuple
-            ``(xMin, yMin, xMax, yMax)``.
-
-    Returns:
-        A list of booleans, one for each point, indicating whether the
-        point lies inside the rectangle.
-    """
-    if len(array) < 1:
-        return []
-    xMin, yMin, xMax, yMax = rect
-    return [(xMin <= x <= xMax) and (yMin <= y <= yMax) for x, y in array]
-
-
-def vectorLength(vector):
-    """Calculate the length of the given vector.
-
-    Args:
-        vector: A 2D tuple.
-
-    Returns:
-        The Euclidean length of the vector.
-    """
-    x, y = vector
-    return math.sqrt(x**2 + y**2)
-
-
-def asInt16(array):
-    """Round a list of floats to 16-bit signed integers.
-
-    Args:
-        array: List of float values.
-
-    Returns:
-        A list of rounded integers.
-    """
-    return [int(math.floor(i + 0.5)) for i in array]
-
-
-def normRect(rect):
-    """Normalize a bounding box rectangle.
-
-    This function "turns the rectangle the right way up", so that the following
-    holds::
-
-        xMin <= xMax and yMin <= yMax
-
-    Args:
-        rect: A bounding rectangle expressed as a tuple
-            ``(xMin, yMin, xMax, yMax)``.
-
-    Returns:
-        A normalized bounding rectangle.
-    """
-    (xMin, yMin, xMax, yMax) = rect
-    return min(xMin, xMax), min(yMin, yMax), max(xMin, xMax), max(yMin, yMax)
-
-
-def scaleRect(rect, x, y):
-    """Scale a bounding box rectangle.
-
-    Args:
-        rect: A bounding rectangle expressed as a tuple
-            ``(xMin, yMin, xMax, yMax)``.
-        x: Factor to scale the rectangle along the X axis.
-        y: Factor to scale the rectangle along the Y axis.
-
-    Returns:
-        A scaled bounding rectangle.
-    """
-    (xMin, yMin, xMax, yMax) = rect
-    return xMin * x, yMin * y, xMax * x, yMax * y
-
-
-def offsetRect(rect, dx, dy):
-    """Offset a bounding box rectangle.
-
-    Args:
-        rect: A bounding rectangle expressed as a tuple
-            ``(xMin, yMin, xMax, yMax)``.
-        dx: Amount to offset the rectangle along the X axis.
-        dy: Amount to offset the rectangle along the Y axis.
-
-    Returns:
-        An offset bounding rectangle.
-    """
-    (xMin, yMin, xMax, yMax) = rect
-    return xMin + dx, yMin + dy, xMax + dx, yMax + dy
-
-
-def insetRect(rect, dx, dy):
-    """Inset a bounding box rectangle on all sides.
-
-    Args:
-        rect: A bounding rectangle expressed as a tuple
-            ``(xMin, yMin, xMax, yMax)``.
-        dx: Amount to inset the rectangle along the X axis.
-        dy: Amount to inset the rectangle along the Y axis.
-
-    Returns:
-        An inset bounding rectangle.
-    """
-    (xMin, yMin, xMax, yMax) = rect
-    return xMin + dx, yMin + dy, xMax - dx, yMax - dy
-
-
-def sectRect(rect1, rect2):
-    """Test for rectangle-rectangle intersection.
-
-    Args:
-        rect1: First bounding rectangle, expressed as a tuple
-            ``(xMin, yMin, xMax, yMax)``.
-        rect2: Second bounding rectangle.
-
-    Returns:
-        A boolean and a rectangle.
-        If the input rectangles intersect, returns ``True`` and the intersecting
-        rectangle. Returns ``False`` and ``(0, 0, 0, 0)`` if the input
-        rectangles don't intersect.
-    """
-    (xMin1, yMin1, xMax1, yMax1) = rect1
-    (xMin2, yMin2, xMax2, yMax2) = rect2
-    xMin, yMin, xMax, yMax = (
-        max(xMin1, xMin2),
-        max(yMin1, yMin2),
-        min(xMax1, xMax2),
-        min(yMax1, yMax2),
-    )
-    if xMin >= xMax or yMin >= yMax:
-        return False, (0, 0, 0, 0)
-    return True, (xMin, yMin, xMax, yMax)
-
-
-def unionRect(rect1, rect2):
-    """Determine the union of two bounding rectangles.
-
-    Args:
-        rect1: First bounding rectangle, expressed as a tuple
-            ``(xMin, yMin, xMax, yMax)``.
-        rect2: Second bounding rectangle.
-
-    Returns:
-        The smallest rectangle in which both input rectangles are fully
-        enclosed.
-    """
-    (xMin1, yMin1, xMax1, yMax1) = rect1
-    (xMin2, yMin2, xMax2, yMax2) = rect2
-    xMin, yMin, xMax, yMax = (
-        min(xMin1, xMin2),
-        min(yMin1, yMin2),
-        max(xMax1, xMax2),
-        max(yMax1, yMax2),
-    )
-    return (xMin, yMin, xMax, yMax)
-
-
-def rectCenter(rect):
-    """Determine the center of a rectangle.
-
-    Args:
-        rect: A bounding rectangle, expressed as a tuple
-            ``(xMin, yMin, xMax, yMax)``.
-
-    Returns:
-        A 2D tuple representing the point at the center of the rectangle.
-    """
-    (xMin, yMin, xMax, yMax) = rect
-    return (xMin + xMax) / 2, (yMin + yMax) / 2
-
-
-def rectArea(rect):
-    """Determine the area of a rectangle.
-
-    Args:
-        rect: A bounding rectangle, expressed as a tuple
-            ``(xMin, yMin, xMax, yMax)``.
-
-    Returns:
-        The area of the rectangle.
-    """
-    (xMin, yMin, xMax, yMax) = rect
-    return (yMax - yMin) * (xMax - xMin)
-
-
-def intRect(rect):
-    """Round a rectangle to integer values.
-
-    Guarantees that the resulting rectangle is NOT smaller than the original.
-
-    Args:
-        rect: A bounding rectangle, expressed as a tuple
-            ``(xMin, yMin, xMax, yMax)``.
-
-    Returns:
-        A rounded bounding rectangle.
-    """
-    (xMin, yMin, xMax, yMax) = rect
-    xMin = int(math.floor(xMin))
-    yMin = int(math.floor(yMin))
-    xMax = int(math.ceil(xMax))
-    yMax = int(math.ceil(yMax))
-    return (xMin, yMin, xMax, yMax)
-
-
-def quantizeRect(rect, factor=1):
-    """
-    >>> bounds = (72.3, -218.4, 1201.3, 919.1)
-    >>> quantizeRect(bounds)
-    (72, -219, 1202, 920)
-    >>> quantizeRect(bounds, factor=10)
-    (70, -220, 1210, 920)
-    >>> quantizeRect(bounds, factor=100)
-    (0, -300, 1300, 1000)
-    """
-    if factor < 1:
-        raise ValueError(f"Expected quantization factor >= 1, found: {factor!r}")
-    xMin, yMin, xMax, yMax = normRect(rect)
-    return (
-        int(math.floor(xMin / factor) * factor),
-        int(math.floor(yMin / factor) * factor),
-        int(math.ceil(xMax / factor) * factor),
-        int(math.ceil(yMax / factor) * factor),
-    )
-
-
-class Vector(_Vector):
-    def __init__(self, *args, **kwargs):
-        warnings.warn(
-            "fontTools.misc.arrayTools.Vector has been deprecated, please use "
-            "fontTools.misc.vector.Vector instead.",
-            DeprecationWarning,
-        )
-
-
-def pairwise(iterable, reverse=False):
-    """Iterate over current and next items in an iterable.
-
-    Args:
-        iterable: An iterable
-        reverse: If true, iterate in reverse order.
-
-    Returns:
-        An iterable yielding two elements per iteration.
-
-    Example:
-
-    >>> tuple(pairwise([]))
-    ()
-    >>> tuple(pairwise([], reverse=True))
-    ()
-    >>> tuple(pairwise([0]))
-    ((0, 0),)
-    >>> tuple(pairwise([0], reverse=True))
-    ((0, 0),)
-    >>> tuple(pairwise([0, 1]))
-    ((0, 1), (1, 0))
-    >>> tuple(pairwise([0, 1], reverse=True))
-    ((1, 0), (0, 1))
-    >>> tuple(pairwise([0, 1, 2]))
-    ((0, 1), (1, 2), (2, 0))
-    >>> tuple(pairwise([0, 1, 2], reverse=True))
-    ((2, 1), (1, 0), (0, 2))
-    >>> tuple(pairwise(['a', 'b', 'c', 'd']))
-    (('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a'))
-    >>> tuple(pairwise(['a', 'b', 'c', 'd'], reverse=True))
-    (('d', 'c'), ('c', 'b'), ('b', 'a'), ('a', 'd'))
-    """
-    if not iterable:
-        return
-    if reverse:
-        it = reversed(iterable)
-    else:
-        it = iter(iterable)
-    first = next(it, None)
-    a = first
-    for b in it:
-        yield (a, b)
-        a = b
-    yield (a, first)
-
-
-def _test():
-    """
-    >>> import math
-    >>> calcBounds([])
-    (0, 0, 0, 0)
-    >>> calcBounds([(0, 40), (0, 100), (50, 50), (80, 10)])
-    (0, 10, 80, 100)
-    >>> updateBounds((0, 0, 0, 0), (100, 100))
-    (0, 0, 100, 100)
-    >>> pointInRect((50, 50), (0, 0, 100, 100))
-    True
-    >>> pointInRect((0, 0), (0, 0, 100, 100))
-    True
-    >>> pointInRect((100, 100), (0, 0, 100, 100))
-    True
-    >>> not pointInRect((101, 100), (0, 0, 100, 100))
-    True
-    >>> list(pointsInRect([(50, 50), (0, 0), (100, 100), (101, 100)], (0, 0, 100, 100)))
-    [True, True, True, False]
-    >>> vectorLength((3, 4))
-    5.0
-    >>> vectorLength((1, 1)) == math.sqrt(2)
-    True
-    >>> list(asInt16([0, 0.1, 0.5, 0.9]))
-    [0, 0, 1, 1]
-    >>> normRect((0, 10, 100, 200))
-    (0, 10, 100, 200)
-    >>> normRect((100, 200, 0, 10))
-    (0, 10, 100, 200)
-    >>> scaleRect((10, 20, 50, 150), 1.5, 2)
-    (15.0, 40, 75.0, 300)
-    >>> offsetRect((10, 20, 30, 40), 5, 6)
-    (15, 26, 35, 46)
-    >>> insetRect((10, 20, 50, 60), 5, 10)
-    (15, 30, 45, 50)
-    >>> insetRect((10, 20, 50, 60), -5, -10)
-    (5, 10, 55, 70)
-    >>> intersects, rect = sectRect((0, 10, 20, 30), (0, 40, 20, 50))
-    >>> not intersects
-    True
-    >>> intersects, rect = sectRect((0, 10, 20, 30), (5, 20, 35, 50))
-    >>> intersects
-    True
-    >>> rect
-    (5, 20, 20, 30)
-    >>> unionRect((0, 10, 20, 30), (0, 40, 20, 50))
-    (0, 10, 20, 50)
-    >>> rectCenter((0, 0, 100, 200))
-    (50.0, 100.0)
-    >>> rectCenter((0, 0, 100, 199.0))
-    (50.0, 99.5)
-    >>> intRect((0.9, 2.9, 3.1, 4.1))
-    (0, 2, 4, 5)
-    """
-
-
-if __name__ == "__main__":
-    import sys
-    import doctest
-
-    sys.exit(doctest.testmod().failed)
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/voltLib/voltToFea.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/voltLib/voltToFea.py
deleted file mode 100644
index 2265d5029533706e59d61d4626217d32b5066acc..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/voltLib/voltToFea.py
+++ /dev/null
@@ -1,726 +0,0 @@
-"""\
-MS VOLT ``.vtp`` to AFDKO ``.fea`` OpenType Layout converter.
-
-Usage
------
-
-To convert a VTP project file:
-
-    $ fonttools voltLib.voltToFea input.vtp output.fea
-
-It is also possible to convert font files with a `TSIV` table (as saved from
-VOLT); in this case the glyph names used in the VOLT project will be mapped to
-the actual glyph names in the font files when written to the feature file:
-
-    $ fonttools voltLib.voltToFea input.ttf output.fea
-
-The ``--quiet`` option can be used to suppress warnings.
-
-The ``--traceback`` option can be used to get a Python traceback in case of
-exceptions, instead of suppressing it.
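-
-The converter can also be driven from Python; a minimal sketch (the input and
-output file names here are only placeholders):
-
-    from fontTools.voltLib.voltToFea import VoltToFea
-
-    fea = VoltToFea("input.vtp").convert()
-    with open("output.fea", "w") as f:
-        f.write(fea)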
-
-
-Limitations
------------
-
-* Not all VOLT features are supported; the script will error if it
-  encounters something it does not understand. Please report an issue if this
-  happens.
-* AFDKO feature file syntax for mark positioning is awkward and does not allow
-  setting the mark coverage. It also defines mark anchors globally; as a result
-  some mark positioning lookups might cover more marks than were in the VOLT
-  file. This should not be an issue in practice, but if it is, the only way
-  around it is to modify the VOLT file or the generated feature file manually
-  to use unique mark anchors for each lookup.
-* VOLT allows subtable breaks in any lookup type, but AFDKO feature file
-  implementations vary in their support; currently AFDKO’s makeOTF supports
-  subtable breaks in pair positioning lookups only, while FontTools’ feaLib
-  supports them for most substitution lookups and only some positioning lookups.
-"""
-
-import logging
-import re
-from io import StringIO
-
-from fontTools.feaLib import ast
-from fontTools.ttLib import TTFont, TTLibError
-from fontTools.voltLib import ast as VAst
-from fontTools.voltLib.parser import Parser as VoltParser
-
-log = logging.getLogger("fontTools.voltLib.voltToFea")
-
-TABLES = ["GDEF", "GSUB", "GPOS"]
-
-
-class MarkClassDefinition(ast.MarkClassDefinition):
-    def asFea(self, indent=""):
-        res = ""
-        if not getattr(self, "used", False):
-            res += "#"
-        res += ast.MarkClassDefinition.asFea(self, indent)
-        return res
-
-
-# For sorting voltLib.ast.GroupDefinition, see its use below.
-class Group:
-    def __init__(self, group):
-        self.name = group.name.lower()
-        self.groups = [
-            x.group.lower() for x in group.enum.enum if isinstance(x, VAst.GroupName)
-        ]
-
-    def __lt__(self, other):
-        if self.name in other.groups:
-            return True
-        if other.name in self.groups:
-            return False
-        if self.groups and not other.groups:
-            return False
-        if not self.groups and other.groups:
-            return True
-
-
-class VoltToFea:
-    _NOT_LOOKUP_NAME_RE = re.compile(r"[^A-Za-z_0-9.]")
-    _NOT_CLASS_NAME_RE = re.compile(r"[^A-Za-z_0-9.\-]")
-
-    def __init__(self, file_or_path, font=None):
-        self._file_or_path = file_or_path
-        self._font = font
-
-        self._glyph_map = {}
-        self._glyph_order = None
-
-        self._gdef = {}
-        self._glyphclasses = {}
-        self._features = {}
-        self._lookups = {}
-
-        self._marks = set()
-        self._ligatures = {}
-
-        self._markclasses = {}
-        self._anchors = {}
-
-        self._settings = {}
-
-        self._lookup_names = {}
-        self._class_names = {}
-
-    def _lookupName(self, name):
-        if name not in self._lookup_names:
-            res = self._NOT_LOOKUP_NAME_RE.sub("_", name)
-            while res in self._lookup_names.values():
-                res += "_"
-            self._lookup_names[name] = res
-        return self._lookup_names[name]
-
-    def _className(self, name):
-        if name not in self._class_names:
-            res = self._NOT_CLASS_NAME_RE.sub("_", name)
-            while res in self._class_names.values():
-                res += "_"
-            self._class_names[name] = res
-        return self._class_names[name]
-
-    def _collectStatements(self, doc, tables):
-        # Collect and sort group definitions first, to make sure a group
-        # definition that references other groups comes after them, since VOLT
-        # does not enforce such ordering while feature files require it.
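-        # For example, VOLT accepts group definitions in this order (shown
-        # schematically):
-        #     DEF_GROUP "all" ENUM GROUP "letters" GROUP "marks" END_ENUM END_GROUP
-        #     DEF_GROUP "letters" ...
-        # while the generated feature file would reference @letters before it
-        # is defined unless the groups are sorted first.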
- groups = [s for s in doc.statements if isinstance(s, VAst.GroupDefinition)] - for statement in sorted(groups, key=lambda x: Group(x)): - self._groupDefinition(statement) - - for statement in doc.statements: - if isinstance(statement, VAst.GlyphDefinition): - self._glyphDefinition(statement) - elif isinstance(statement, VAst.AnchorDefinition): - if "GPOS" in tables: - self._anchorDefinition(statement) - elif isinstance(statement, VAst.SettingDefinition): - self._settingDefinition(statement) - elif isinstance(statement, VAst.GroupDefinition): - pass # Handled above - elif isinstance(statement, VAst.ScriptDefinition): - self._scriptDefinition(statement) - elif not isinstance(statement, VAst.LookupDefinition): - raise NotImplementedError(statement) - - # Lookup definitions need to be handled last as they reference glyph - # and mark classes that might be defined after them. - for statement in doc.statements: - if isinstance(statement, VAst.LookupDefinition): - if statement.pos and "GPOS" not in tables: - continue - if statement.sub and "GSUB" not in tables: - continue - self._lookupDefinition(statement) - - def _buildFeatureFile(self, tables): - doc = ast.FeatureFile() - statements = doc.statements - - if self._glyphclasses: - statements.append(ast.Comment("# Glyph classes")) - statements.extend(self._glyphclasses.values()) - - if self._markclasses: - statements.append(ast.Comment("\n# Mark classes")) - statements.extend(c[1] for c in sorted(self._markclasses.items())) - - if self._lookups: - statements.append(ast.Comment("\n# Lookups")) - for lookup in self._lookups.values(): - statements.extend(getattr(lookup, "targets", [])) - statements.append(lookup) - - # Prune features - features = self._features.copy() - for ftag in features: - scripts = features[ftag] - for stag in scripts: - langs = scripts[stag] - for ltag in langs: - langs[ltag] = [l for l in langs[ltag] if l.lower() in self._lookups] - scripts[stag] = {t: l for t, l in langs.items() if l} - features[ftag] = {t: s for t, s in scripts.items() if s} - features = {t: f for t, f in features.items() if f} - - if features: - statements.append(ast.Comment("# Features")) - for ftag, scripts in features.items(): - feature = ast.FeatureBlock(ftag) - stags = sorted(scripts, key=lambda k: 0 if k == "DFLT" else 1) - for stag in stags: - feature.statements.append(ast.ScriptStatement(stag)) - ltags = sorted(scripts[stag], key=lambda k: 0 if k == "dflt" else 1) - for ltag in ltags: - include_default = True if ltag == "dflt" else False - feature.statements.append( - ast.LanguageStatement(ltag, include_default=include_default) - ) - for name in scripts[stag][ltag]: - lookup = self._lookups[name.lower()] - lookupref = ast.LookupReferenceStatement(lookup) - feature.statements.append(lookupref) - statements.append(feature) - - if self._gdef and "GDEF" in tables: - classes = [] - for name in ("BASE", "MARK", "LIGATURE", "COMPONENT"): - if name in self._gdef: - classname = "GDEF_" + name.lower() - glyphclass = ast.GlyphClassDefinition(classname, self._gdef[name]) - statements.append(glyphclass) - classes.append(ast.GlyphClassName(glyphclass)) - else: - classes.append(None) - - gdef = ast.TableBlock("GDEF") - gdef.statements.append(ast.GlyphClassDefStatement(*classes)) - statements.append(gdef) - - return doc - - def convert(self, tables=None): - doc = VoltParser(self._file_or_path).parse() - - if tables is None: - tables = TABLES - if self._font is not None: - self._glyph_order = self._font.getGlyphOrder() - - self._collectStatements(doc, tables) - fea 
= self._buildFeatureFile(tables) - return fea.asFea() - - def _glyphName(self, glyph): - try: - name = glyph.glyph - except AttributeError: - name = glyph - return ast.GlyphName(self._glyph_map.get(name, name)) - - def _groupName(self, group): - try: - name = group.group - except AttributeError: - name = group - return ast.GlyphClassName(self._glyphclasses[name.lower()]) - - def _coverage(self, coverage): - items = [] - for item in coverage: - if isinstance(item, VAst.GlyphName): - items.append(self._glyphName(item)) - elif isinstance(item, VAst.GroupName): - items.append(self._groupName(item)) - elif isinstance(item, VAst.Enum): - items.append(self._enum(item)) - elif isinstance(item, VAst.Range): - items.append((item.start, item.end)) - else: - raise NotImplementedError(item) - return items - - def _enum(self, enum): - return ast.GlyphClass(self._coverage(enum.enum)) - - def _context(self, context): - out = [] - for item in context: - coverage = self._coverage(item) - if not isinstance(coverage, (tuple, list)): - coverage = [coverage] - out.extend(coverage) - return out - - def _groupDefinition(self, group): - name = self._className(group.name) - glyphs = self._enum(group.enum) - glyphclass = ast.GlyphClassDefinition(name, glyphs) - - self._glyphclasses[group.name.lower()] = glyphclass - - def _glyphDefinition(self, glyph): - try: - self._glyph_map[glyph.name] = self._glyph_order[glyph.id] - except TypeError: - pass - - if glyph.type in ("BASE", "MARK", "LIGATURE", "COMPONENT"): - if glyph.type not in self._gdef: - self._gdef[glyph.type] = ast.GlyphClass() - self._gdef[glyph.type].glyphs.append(self._glyphName(glyph.name)) - - if glyph.type == "MARK": - self._marks.add(glyph.name) - elif glyph.type == "LIGATURE": - self._ligatures[glyph.name] = glyph.components - - def _scriptDefinition(self, script): - stag = script.tag - for lang in script.langs: - ltag = lang.tag - for feature in lang.features: - lookups = {l.split("\\")[0]: True for l in feature.lookups} - ftag = feature.tag - if ftag not in self._features: - self._features[ftag] = {} - if stag not in self._features[ftag]: - self._features[ftag][stag] = {} - assert ltag not in self._features[ftag][stag] - self._features[ftag][stag][ltag] = lookups.keys() - - def _settingDefinition(self, setting): - if setting.name.startswith("COMPILER_"): - self._settings[setting.name] = setting.value - else: - log.warning(f"Unsupported setting ignored: {setting.name}") - - def _adjustment(self, adjustment): - adv, dx, dy, adv_adjust_by, dx_adjust_by, dy_adjust_by = adjustment - - adv_device = adv_adjust_by and adv_adjust_by.items() or None - dx_device = dx_adjust_by and dx_adjust_by.items() or None - dy_device = dy_adjust_by and dy_adjust_by.items() or None - - return ast.ValueRecord( - xPlacement=dx, - yPlacement=dy, - xAdvance=adv, - xPlaDevice=dx_device, - yPlaDevice=dy_device, - xAdvDevice=adv_device, - ) - - def _anchor(self, adjustment): - adv, dx, dy, adv_adjust_by, dx_adjust_by, dy_adjust_by = adjustment - - assert not adv_adjust_by - dx_device = dx_adjust_by and dx_adjust_by.items() or None - dy_device = dy_adjust_by and dy_adjust_by.items() or None - - return ast.Anchor( - dx or 0, - dy or 0, - xDeviceTable=dx_device or None, - yDeviceTable=dy_device or None, - ) - - def _anchorDefinition(self, anchordef): - anchorname = anchordef.name - glyphname = anchordef.glyph_name - anchor = self._anchor(anchordef.pos) - - if anchorname.startswith("MARK_"): - name = "_".join(anchorname.split("_")[1:]) - markclass = 
ast.MarkClass(self._className(name)) - glyph = self._glyphName(glyphname) - markdef = MarkClassDefinition(markclass, anchor, glyph) - self._markclasses[(glyphname, anchorname)] = markdef - else: - if glyphname not in self._anchors: - self._anchors[glyphname] = {} - if anchorname not in self._anchors[glyphname]: - self._anchors[glyphname][anchorname] = {} - self._anchors[glyphname][anchorname][anchordef.component] = anchor - - def _gposLookup(self, lookup, fealookup): - statements = fealookup.statements - - pos = lookup.pos - if isinstance(pos, VAst.PositionAdjustPairDefinition): - for (idx1, idx2), (pos1, pos2) in pos.adjust_pair.items(): - coverage_1 = pos.coverages_1[idx1 - 1] - coverage_2 = pos.coverages_2[idx2 - 1] - - # If not both are groups, use “enum pos” otherwise makeotf will - # fail. - enumerated = False - for item in coverage_1 + coverage_2: - if not isinstance(item, VAst.GroupName): - enumerated = True - - glyphs1 = self._coverage(coverage_1) - glyphs2 = self._coverage(coverage_2) - record1 = self._adjustment(pos1) - record2 = self._adjustment(pos2) - assert len(glyphs1) == 1 - assert len(glyphs2) == 1 - statements.append( - ast.PairPosStatement( - glyphs1[0], record1, glyphs2[0], record2, enumerated=enumerated - ) - ) - elif isinstance(pos, VAst.PositionAdjustSingleDefinition): - for a, b in pos.adjust_single: - glyphs = self._coverage(a) - record = self._adjustment(b) - assert len(glyphs) == 1 - statements.append( - ast.SinglePosStatement([(glyphs[0], record)], [], [], False) - ) - elif isinstance(pos, VAst.PositionAttachDefinition): - anchors = {} - for marks, classname in pos.coverage_to: - for mark in marks: - # Set actually used mark classes. Basically a hack to get - # around the feature file syntax limitation of making mark - # classes global and not allowing mark positioning to - # specify mark coverage. - for name in mark.glyphSet(): - key = (name, "MARK_" + classname) - self._markclasses[key].used = True - markclass = ast.MarkClass(self._className(classname)) - for base in pos.coverage: - for name in base.glyphSet(): - if name not in anchors: - anchors[name] = [] - if classname not in anchors[name]: - anchors[name].append(classname) - - for name in anchors: - components = 1 - if name in self._ligatures: - components = self._ligatures[name] - - marks = [] - for mark in anchors[name]: - markclass = ast.MarkClass(self._className(mark)) - for component in range(1, components + 1): - if len(marks) < component: - marks.append([]) - anchor = None - if component in self._anchors[name][mark]: - anchor = self._anchors[name][mark][component] - marks[component - 1].append((anchor, markclass)) - - base = self._glyphName(name) - if name in self._marks: - mark = ast.MarkMarkPosStatement(base, marks[0]) - elif name in self._ligatures: - mark = ast.MarkLigPosStatement(base, marks) - else: - mark = ast.MarkBasePosStatement(base, marks[0]) - statements.append(mark) - elif isinstance(pos, VAst.PositionAttachCursiveDefinition): - # Collect enter and exit glyphs - enter_coverage = [] - for coverage in pos.coverages_enter: - for base in coverage: - for name in base.glyphSet(): - enter_coverage.append(name) - exit_coverage = [] - for coverage in pos.coverages_exit: - for base in coverage: - for name in base.glyphSet(): - exit_coverage.append(name) - - # Write enter anchors, also check if the glyph has exit anchor and - # write it, too. 
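-            # Each glyph then becomes a single rule of the form
-            #     pos cursive <glyph> <entry anchor> <exit anchor>;
-            # with <anchor NULL> standing in for a missing entry or exit side.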
- for name in enter_coverage: - glyph = self._glyphName(name) - entry = self._anchors[name]["entry"][1] - exit = None - if name in exit_coverage: - exit = self._anchors[name]["exit"][1] - exit_coverage.pop(exit_coverage.index(name)) - statements.append(ast.CursivePosStatement(glyph, entry, exit)) - - # Write any remaining exit anchors. - for name in exit_coverage: - glyph = self._glyphName(name) - exit = self._anchors[name]["exit"][1] - statements.append(ast.CursivePosStatement(glyph, None, exit)) - else: - raise NotImplementedError(pos) - - def _gposContextLookup( - self, lookup, prefix, suffix, ignore, fealookup, targetlookup - ): - statements = fealookup.statements - - assert not lookup.reversal - - pos = lookup.pos - if isinstance(pos, VAst.PositionAdjustPairDefinition): - for (idx1, idx2), (pos1, pos2) in pos.adjust_pair.items(): - glyphs1 = self._coverage(pos.coverages_1[idx1 - 1]) - glyphs2 = self._coverage(pos.coverages_2[idx2 - 1]) - assert len(glyphs1) == 1 - assert len(glyphs2) == 1 - glyphs = (glyphs1[0], glyphs2[0]) - - if ignore: - statement = ast.IgnorePosStatement([(prefix, glyphs, suffix)]) - else: - lookups = (targetlookup, targetlookup) - statement = ast.ChainContextPosStatement( - prefix, glyphs, suffix, lookups - ) - statements.append(statement) - elif isinstance(pos, VAst.PositionAdjustSingleDefinition): - glyphs = [ast.GlyphClass()] - for a, b in pos.adjust_single: - glyph = self._coverage(a) - glyphs[0].extend(glyph) - - if ignore: - statement = ast.IgnorePosStatement([(prefix, glyphs, suffix)]) - else: - statement = ast.ChainContextPosStatement( - prefix, glyphs, suffix, [targetlookup] - ) - statements.append(statement) - elif isinstance(pos, VAst.PositionAttachDefinition): - glyphs = [ast.GlyphClass()] - for coverage, _ in pos.coverage_to: - glyphs[0].extend(self._coverage(coverage)) - - if ignore: - statement = ast.IgnorePosStatement([(prefix, glyphs, suffix)]) - else: - statement = ast.ChainContextPosStatement( - prefix, glyphs, suffix, [targetlookup] - ) - statements.append(statement) - else: - raise NotImplementedError(pos) - - def _gsubLookup(self, lookup, prefix, suffix, ignore, chain, fealookup): - statements = fealookup.statements - - sub = lookup.sub - for key, val in sub.mapping.items(): - if not key or not val: - path, line, column = sub.location - log.warning(f"{path}:{line}:{column}: Ignoring empty substitution") - continue - statement = None - glyphs = self._coverage(key) - replacements = self._coverage(val) - if ignore: - chain_context = (prefix, glyphs, suffix) - statement = ast.IgnoreSubstStatement([chain_context]) - elif isinstance(sub, VAst.SubstitutionSingleDefinition): - assert len(glyphs) == 1 - assert len(replacements) == 1 - statement = ast.SingleSubstStatement( - glyphs, replacements, prefix, suffix, chain - ) - elif isinstance(sub, VAst.SubstitutionReverseChainingSingleDefinition): - assert len(glyphs) == 1 - assert len(replacements) == 1 - statement = ast.ReverseChainSingleSubstStatement( - prefix, suffix, glyphs, replacements - ) - elif isinstance(sub, VAst.SubstitutionMultipleDefinition): - assert len(glyphs) == 1 - statement = ast.MultipleSubstStatement( - prefix, glyphs[0], suffix, replacements, chain - ) - elif isinstance(sub, VAst.SubstitutionLigatureDefinition): - assert len(replacements) == 1 - statement = ast.LigatureSubstStatement( - prefix, glyphs, suffix, replacements[0], chain - ) - else: - raise NotImplementedError(sub) - statements.append(statement) - - def _lookupDefinition(self, lookup): - mark_attachement = None - 
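-        # The VOLT lookup options translate to OpenType lookup flags below:
-        # RTL sets RightToLeft (1), skipping bases sets IgnoreBaseGlyphs (2),
-        # and skipping marks sets IgnoreMarks (8); a named mark group becomes
-        # either a mark attachment class or a mark filtering set, never both.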
mark_filtering = None
-
-        flags = 0
-        if lookup.direction == "RTL":
-            flags |= 1
-        if not lookup.process_base:
-            flags |= 2
-        # FIXME: Does VOLT support this?
-        # if not lookup.process_ligatures:
-        #     flags |= 4
-        if not lookup.process_marks:
-            flags |= 8
-        elif isinstance(lookup.process_marks, str):
-            mark_attachement = self._groupName(lookup.process_marks)
-        elif lookup.mark_glyph_set is not None:
-            mark_filtering = self._groupName(lookup.mark_glyph_set)
-
-        lookupflags = None
-        if flags or mark_attachement is not None or mark_filtering is not None:
-            lookupflags = ast.LookupFlagStatement(
-                flags, mark_attachement, mark_filtering
-            )
-        if "\\" in lookup.name:
-            # Merge sub lookups as subtables (lookups named “base\sub”),
-            # makeotf/feaLib will issue a warning and ignore the subtable
-            # statement if it is not a pairpos lookup, though.
-            name = lookup.name.split("\\")[0]
-            if name.lower() not in self._lookups:
-                fealookup = ast.LookupBlock(self._lookupName(name))
-                if lookupflags is not None:
-                    fealookup.statements.append(lookupflags)
-                fealookup.statements.append(ast.Comment("# " + lookup.name))
-            else:
-                fealookup = self._lookups[name.lower()]
-                fealookup.statements.append(ast.SubtableStatement())
-                fealookup.statements.append(ast.Comment("# " + lookup.name))
-            self._lookups[name.lower()] = fealookup
-        else:
-            fealookup = ast.LookupBlock(self._lookupName(lookup.name))
-            if lookupflags is not None:
-                fealookup.statements.append(lookupflags)
-            self._lookups[lookup.name.lower()] = fealookup
-
-        if lookup.comments is not None:
-            fealookup.statements.append(ast.Comment("# " + lookup.comments))
-
-        contexts = []
-        if lookup.context:
-            for context in lookup.context:
-                prefix = self._context(context.left)
-                suffix = self._context(context.right)
-                ignore = context.ex_or_in == "EXCEPT_CONTEXT"
-                contexts.append([prefix, suffix, ignore, False])
-                # It seems that VOLT will create a contextual substitution using
-                # only the input if there are no other contexts in this lookup.
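-                # Each entry is [prefix, suffix, ignore, chain]; the extra
-                # context-free entry appended below carries the actual
-                # substitution when the only context is an exception.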
- if ignore and len(lookup.context) == 1: - contexts.append([[], [], False, True]) - else: - contexts.append([[], [], False, False]) - - targetlookup = None - for prefix, suffix, ignore, chain in contexts: - if lookup.sub is not None: - self._gsubLookup(lookup, prefix, suffix, ignore, chain, fealookup) - - if lookup.pos is not None: - if self._settings.get("COMPILER_USEEXTENSIONLOOKUPS"): - fealookup.use_extension = True - if prefix or suffix or chain or ignore: - if not ignore and targetlookup is None: - targetname = self._lookupName(lookup.name + " target") - targetlookup = ast.LookupBlock(targetname) - fealookup.targets = getattr(fealookup, "targets", []) - fealookup.targets.append(targetlookup) - self._gposLookup(lookup, targetlookup) - self._gposContextLookup( - lookup, prefix, suffix, ignore, fealookup, targetlookup - ) - else: - self._gposLookup(lookup, fealookup) - - -def main(args=None): - """Convert MS VOLT to AFDKO feature files.""" - - import argparse - from pathlib import Path - - from fontTools import configLogger - - parser = argparse.ArgumentParser( - "fonttools voltLib.voltToFea", description=main.__doc__ - ) - parser.add_argument( - "input", metavar="INPUT", type=Path, help="input font/VTP file to process" - ) - parser.add_argument( - "featurefile", metavar="OUTPUT", type=Path, help="output feature file" - ) - parser.add_argument( - "-t", - "--table", - action="append", - choices=TABLES, - dest="tables", - help="List of tables to write, by default all tables are written", - ) - parser.add_argument( - "-q", "--quiet", action="store_true", help="Suppress non-error messages" - ) - parser.add_argument( - "--traceback", action="store_true", help="Don’t catch exceptions" - ) - - options = parser.parse_args(args) - - configLogger(level=("ERROR" if options.quiet else "INFO")) - - file_or_path = options.input - font = None - try: - font = TTFont(file_or_path) - if "TSIV" in font: - file_or_path = StringIO(font["TSIV"].data.decode("utf-8")) - else: - log.error('"TSIV" table is missing, font was not saved from VOLT?') - return 1 - except TTLibError: - pass - - converter = VoltToFea(file_or_path, font) - try: - fea = converter.convert(options.tables) - except NotImplementedError as e: - if options.traceback: - raise - location = getattr(e.args[0], "location", None) - message = f'"{e}" is not supported' - if location: - path, line, column = location - log.error(f"{path}:{line}:{column}: {message}") - else: - log.error(message) - return 1 - with open(options.featurefile, "w") as feafile: - feafile.write(fea) - - -if __name__ == "__main__": - import sys - - sys.exit(main()) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/prism-eca040d0.css b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/prism-eca040d0.css deleted file mode 100644 index 89c237a8ec460aa3accc2274bb1a065931890348..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/prism-eca040d0.css +++ /dev/null @@ -1 +0,0 @@ -.gradio-container-3-40-1 code[class*=language-],.gradio-container-3-40-1 pre[class*=language-]{color:#000;background:none;text-shadow:0 1px white;font-family:Consolas,Monaco,Andale Mono,Ubuntu 
Mono,monospace;font-size:1em;text-align:left;white-space:pre;word-spacing:normal;word-break:normal;word-wrap:normal;line-height:1.5;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-hyphens:none;-moz-hyphens:none;-ms-hyphens:none;hyphens:none}.gradio-container-3-40-1 pre[class*=language-]::-moz-selection,.gradio-container-3-40-1 pre[class*=language-] ::-moz-selection,.gradio-container-3-40-1 code[class*=language-]::-moz-selection,.gradio-container-3-40-1 code[class*=language-] ::-moz-selection{text-shadow:none;background:#b3d4fc}.gradio-container-3-40-1 pre[class*=language-]::selection,.gradio-container-3-40-1 pre[class*=language-] ::selection,.gradio-container-3-40-1 code[class*=language-]::selection,.gradio-container-3-40-1 code[class*=language-] ::selection{text-shadow:none;background:#b3d4fc}@media print{.gradio-container-3-40-1 code[class*=language-],.gradio-container-3-40-1 pre[class*=language-]{text-shadow:none}}.gradio-container-3-40-1 pre[class*=language-]{padding:1em;margin:.5em 0;overflow:auto}.gradio-container-3-40-1 :not(pre)>code[class*=language-],.gradio-container-3-40-1 pre[class*=language-]{background:#f5f2f0}.gradio-container-3-40-1 :not(pre)>code[class*=language-]{padding:.1em;border-radius:.3em;white-space:normal}.gradio-container-3-40-1 .token.comment,.gradio-container-3-40-1 .token.prolog,.gradio-container-3-40-1 .token.doctype,.gradio-container-3-40-1 .token.cdata{color:#708090}.gradio-container-3-40-1 .token.punctuation{color:#999}.gradio-container-3-40-1 .token.namespace{opacity:.7}.gradio-container-3-40-1 .token.property,.gradio-container-3-40-1 .token.tag,.gradio-container-3-40-1 .token.boolean,.gradio-container-3-40-1 .token.number,.gradio-container-3-40-1 .token.constant,.gradio-container-3-40-1 .token.symbol,.gradio-container-3-40-1 .token.deleted{color:#905}.gradio-container-3-40-1 .token.selector,.gradio-container-3-40-1 .token.attr-name,.gradio-container-3-40-1 .token.string,.gradio-container-3-40-1 .token.char,.gradio-container-3-40-1 .token.builtin,.gradio-container-3-40-1 .token.inserted{color:#690}.gradio-container-3-40-1 .token.operator,.gradio-container-3-40-1 .token.entity,.gradio-container-3-40-1 .token.url,.gradio-container-3-40-1 .language-css .token.string,.gradio-container-3-40-1 .style .token.string{color:#9a6e3a;background:hsla(0,0%,100%,.5)}.gradio-container-3-40-1 .token.atrule,.gradio-container-3-40-1 .token.attr-value,.gradio-container-3-40-1 .token.keyword{color:#07a}.gradio-container-3-40-1 .token.function,.gradio-container-3-40-1 .token.class-name{color:#dd4a68}.gradio-container-3-40-1 .token.regex,.gradio-container-3-40-1 .token.important,.gradio-container-3-40-1 .token.variable{color:#e90}.gradio-container-3-40-1 .token.important,.gradio-container-3-40-1 .token.bold{font-weight:700}.gradio-container-3-40-1 .token.italic{font-style:italic}.gradio-container-3-40-1 .token.entity{cursor:help} diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py deleted file mode 100644 index 46b93a0589ce1775e26921a6cc5dcdcf464c4b29..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py +++ /dev/null @@ -1,470 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import gc -import unittest - -import numpy as np -import torch -from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer - -from diffusers import ( - AutoencoderKL, - DDIMInverseScheduler, - DDIMScheduler, - DDPMScheduler, - EulerAncestralDiscreteScheduler, - LMSDiscreteScheduler, - StableDiffusionPix2PixZeroPipeline, - UNet2DConditionModel, -) -from diffusers.utils import load_numpy, slow, torch_device -from diffusers.utils.testing_utils import load_image, load_pt, require_torch_gpu, skip_mps - -from ...pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS -from ...test_pipelines_common import PipelineTesterMixin - - -torch.backends.cuda.matmul.allow_tf32 = False - - -@skip_mps -class StableDiffusionPix2PixZeroPipelineFastTests(PipelineTesterMixin, unittest.TestCase): - pipeline_class = StableDiffusionPix2PixZeroPipeline - params = TEXT_GUIDED_IMAGE_VARIATION_PARAMS - batch_params = TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS - - @classmethod - def setUpClass(cls): - cls.source_embeds = load_pt( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/src_emb_0.pt" - ) - - cls.target_embeds = load_pt( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/tgt_emb_0.pt" - ) - - def get_dummy_components(self): - torch.manual_seed(0) - unet = UNet2DConditionModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=4, - out_channels=4, - down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), - up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), - cross_attention_dim=32, - ) - scheduler = DDIMScheduler() - torch.manual_seed(0) - vae = AutoencoderKL( - block_out_channels=[32, 64], - in_channels=3, - out_channels=3, - down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], - up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], - latent_channels=4, - ) - torch.manual_seed(0) - text_encoder_config = CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - ) - text_encoder = CLIPTextModel(text_encoder_config) - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - components = { - "unet": unet, - "scheduler": scheduler, - "vae": vae, - "text_encoder": text_encoder, - "tokenizer": tokenizer, - "safety_checker": None, - "feature_extractor": None, - "inverse_scheduler": None, - "caption_generator": None, - "caption_processor": None, - } - return components - - def get_dummy_inputs(self, device, seed=0): - generator = torch.manual_seed(seed) - - inputs = { - "prompt": "A painting of a squirrel eating a burger", - "generator": generator, - "num_inference_steps": 2, - "guidance_scale": 6.0, - "cross_attention_guidance_amount": 0.15, - "source_embeds": self.source_embeds, - "target_embeds": 
self.target_embeds, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_pix2pix_zero_default_case(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - sd_pipe = StableDiffusionPix2PixZeroPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.5184, 0.503, 0.4917, 0.4022, 0.3455, 0.464, 0.5324, 0.5323, 0.4894]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_zero_negative_prompt(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - sd_pipe = StableDiffusionPix2PixZeroPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - negative_prompt = "french fries" - output = sd_pipe(**inputs, negative_prompt=negative_prompt) - image = output.images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.5464, 0.5072, 0.5012, 0.4124, 0.3624, 0.466, 0.5413, 0.5468, 0.4927]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_zero_euler(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - components["scheduler"] = EulerAncestralDiscreteScheduler( - beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear" - ) - sd_pipe = StableDiffusionPix2PixZeroPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.5114, 0.5051, 0.5222, 0.5279, 0.5037, 0.5156, 0.4604, 0.4966, 0.504]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_zero_ddpm(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - components["scheduler"] = DDPMScheduler() - sd_pipe = StableDiffusionPix2PixZeroPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.5185, 0.5027, 0.492, 0.401, 0.3445, 0.464, 0.5321, 0.5327, 0.4892]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - # Non-determinism caused by the scheduler optimizing the latent inputs during inference - @unittest.skip("non-deterministic pipeline") - def test_inference_batch_single_identical(self): - return super().test_inference_batch_single_identical() - - -@slow -@require_torch_gpu -class StableDiffusionPix2PixZeroPipelineSlowTests(unittest.TestCase): - def tearDown(self): - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - @classmethod - def setUpClass(cls): - cls.source_embeds = load_pt( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat.pt" - ) - - 
cls.target_embeds = load_pt( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/dog.pt" - ) - - def get_inputs(self, seed=0): - generator = torch.manual_seed(seed) - - inputs = { - "prompt": "turn him into a cyborg", - "generator": generator, - "num_inference_steps": 3, - "guidance_scale": 7.5, - "cross_attention_guidance_amount": 0.15, - "source_embeds": self.source_embeds, - "target_embeds": self.target_embeds, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_pix2pix_zero_default(self): - pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16 - ) - pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.5742, 0.5757, 0.5747, 0.5781, 0.5688, 0.5713, 0.5742, 0.5664, 0.5747]) - - assert np.abs(expected_slice - image_slice).max() < 5e-2 - - def test_stable_diffusion_pix2pix_zero_k_lms(self): - pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16 - ) - pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.6367, 0.5459, 0.5146, 0.5479, 0.4905, 0.4753, 0.4961, 0.4629, 0.4624]) - - assert np.abs(expected_slice - image_slice).max() < 5e-2 - - def test_stable_diffusion_pix2pix_zero_intermediate_state(self): - number_of_steps = 0 - - def callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None: - callback_fn.has_been_called = True - nonlocal number_of_steps - number_of_steps += 1 - if step == 1: - latents = latents.detach().cpu().numpy() - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array([0.1345, 0.268, 0.1539, 0.0726, 0.0959, 0.2261, -0.2673, 0.0277, -0.2062]) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2 - elif step == 2: - latents = latents.detach().cpu().numpy() - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array([0.1393, 0.2637, 0.1617, 0.0724, 0.0987, 0.2271, -0.2666, 0.0299, -0.2104]) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2 - - callback_fn.has_been_called = False - - pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16 - ) - pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - pipe(**inputs, callback=callback_fn, callback_steps=1) - assert callback_fn.has_been_called - assert number_of_steps == 3 - - def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self): - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - - pipe = 
-
-    def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self):
-        torch.cuda.empty_cache()
-        torch.cuda.reset_max_memory_allocated()
-        torch.cuda.reset_peak_memory_stats()
-
-        pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
-            "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16
-        )
-        pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
-        pipe = pipe.to(torch_device)
-        pipe.set_progress_bar_config(disable=None)
-        pipe.enable_attention_slicing(1)
-        pipe.enable_sequential_cpu_offload()
-
-        inputs = self.get_inputs()
-        _ = pipe(**inputs)
-
-        mem_bytes = torch.cuda.max_memory_allocated()
-        # make sure that less than 8.2 GB is allocated
-        assert mem_bytes < 8.2 * 10**9
-
-
-@slow
-@require_torch_gpu
-class InversionPipelineSlowTests(unittest.TestCase):
-    def tearDown(self):
-        super().tearDown()
-        gc.collect()
-        torch.cuda.empty_cache()
-
-    @classmethod
-    def setUpClass(cls):
-        raw_image = load_image(
-            "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png"
-        )
-
-        raw_image = raw_image.convert("RGB").resize((512, 512))
-
-        cls.raw_image = raw_image
-
-    def test_stable_diffusion_pix2pix_inversion(self):
-        pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
-            "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16
-        )
-        pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
-
-        caption = "a photography of a cat with flowers"
-        pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
-        pipe.enable_model_cpu_offload()
-        pipe.set_progress_bar_config(disable=None)
-
-        generator = torch.manual_seed(0)
-        output = pipe.invert(caption, image=self.raw_image, generator=generator, num_inference_steps=10)
-        inv_latents = output[0]
-
-        image_slice = inv_latents[0, -3:, -3:, -1].flatten()
-
-        assert inv_latents.shape == (1, 4, 64, 64)
-        expected_slice = np.array([0.8447, -0.0730, 0.7588, -1.2070, -0.4678, 0.1511, -0.8555, 1.1816, -0.7666])
-
-        assert np.abs(expected_slice - image_slice.cpu().numpy()).max() < 5e-2
-
-    def test_stable_diffusion_2_pix2pix_inversion(self):
-        pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
-            "stabilityai/stable-diffusion-2-1", safety_checker=None, torch_dtype=torch.float16
-        )
-        pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
-
-        caption = "a photography of a cat with flowers"
-        pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
-        pipe.enable_model_cpu_offload()
-        pipe.set_progress_bar_config(disable=None)
-
-        generator = torch.manual_seed(0)
-        output = pipe.invert(caption, image=self.raw_image, generator=generator, num_inference_steps=10)
-        inv_latents = output[0]
-
-        image_slice = inv_latents[0, -3:, -3:, -1].flatten()
-
-        assert inv_latents.shape == (1, 4, 64, 64)
-        expected_slice = np.array([0.8970, -0.1611, 0.4766, -1.1162, -0.5923, 0.1050, -0.9678, 1.0537, -0.6050])
-
-        assert np.abs(expected_slice - image_slice.cpu().numpy()).max() < 5e-2
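Both inversion tests follow the same recipe: attach a `DDIMInverseScheduler`, then run `pipe.invert` to recover noise latents that regenerate the source image under its caption. A sketch of that recipe outside the test harness, reusing the same model ID, image URL, and step count as above (illustrative, not a tuned recommendation):

```python
import torch
from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionPix2PixZeroPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

raw_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png"
).convert("RGB").resize((512, 512))

# DDIM inversion: latents that reproduce `raw_image` when sampled with the caption
generator = torch.manual_seed(0)
output = pipe.invert(
    "a photography of a cat with flowers", image=raw_image, generator=generator, num_inference_steps=10
)
inv_latents = output[0]  # shape (1, 4, 64, 64) for a 512x512 input
```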
-
-    def test_stable_diffusion_pix2pix_full(self):
-        # numpy array of https://huggingface.co/datasets/hf-internal-testing/diffusers-images/blob/main/pix2pix/dog.png
-        expected_image = load_numpy(
-            "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/dog.npy"
-        )
-
-        pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
-            "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16
-        )
-        pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
-
-        caption = "a photography of a cat with flowers"
-        pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
-        pipe.enable_model_cpu_offload()
-        pipe.set_progress_bar_config(disable=None)
-
-        generator = torch.manual_seed(0)
-        output = pipe.invert(caption, image=self.raw_image, generator=generator)
-        inv_latents = output[0]
-
-        source_prompts = 4 * ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"]
-        target_prompts = 4 * ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"]
-
-        source_embeds = pipe.get_embeds(source_prompts)
-        target_embeds = pipe.get_embeds(target_prompts)
-
-        image = pipe(
-            caption,
-            source_embeds=source_embeds,
-            target_embeds=target_embeds,
-            num_inference_steps=50,
-            cross_attention_guidance_amount=0.15,
-            generator=generator,
-            latents=inv_latents,
-            negative_prompt=caption,
-            output_type="np",
-        ).images
-
-        mean_diff = np.abs(expected_image - image).mean()
-        assert mean_diff < 0.05
-
-    def test_stable_diffusion_2_pix2pix_full(self):
-        # numpy array of https://huggingface.co/datasets/hf-internal-testing/diffusers-images/blob/main/pix2pix/dog_2.png
-        expected_image = load_numpy(
-            "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/dog_2.npy"
-        )
-
-        pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
-            "stabilityai/stable-diffusion-2-1", safety_checker=None, torch_dtype=torch.float16
-        )
-        pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
-
-        caption = "a photography of a cat with flowers"
-        pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
-        pipe.enable_model_cpu_offload()
-        pipe.set_progress_bar_config(disable=None)
-
-        generator = torch.manual_seed(0)
-        output = pipe.invert(caption, image=self.raw_image, generator=generator)
-        inv_latents = output[0]
-
-        source_prompts = 4 * ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"]
-        target_prompts = 4 * ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"]
-
-        source_embeds = pipe.get_embeds(source_prompts)
-        target_embeds = pipe.get_embeds(target_prompts)
-
-        image = pipe(
-            caption,
-            source_embeds=source_embeds,
-            target_embeds=target_embeds,
-            num_inference_steps=125,
-            cross_attention_guidance_amount=0.015,
-            generator=generator,
-            latents=inv_latents,
-            negative_prompt=caption,
-            output_type="np",
-        ).images
-
-        mean_diff = np.abs(expected_image - image).mean()
-        assert mean_diff < 0.25
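Continuing the inversion sketch above, the edit itself works the way the two full tests do: embeddings for several source ("cat") and target ("dog") captions are encoded with `get_embeds`, the pipeline averages them internally to build the edit direction, and the inverted latents initialize sampling. The parameters below are copied from `test_stable_diffusion_pix2pix_full`, not tuned recommendations:

```python
source_prompts = ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"]
target_prompts = ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"]

# Encode both caption sets; the cat->dog direction is derived from their means.
source_embeds = pipe.get_embeds(source_prompts)
target_embeds = pipe.get_embeds(target_prompts)

image = pipe(
    "a photography of a cat with flowers",
    source_embeds=source_embeds,
    target_embeds=target_embeds,
    num_inference_steps=50,
    cross_attention_guidance_amount=0.15,
    generator=generator,      # from the inversion sketch above
    latents=inv_latents,      # inverted latents seed the edited generation
    negative_prompt="a photography of a cat with flowers",
    output_type="np",
).images[0]
```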

diff --git a/spaces/dekk-i386/pdflangchain/htmlTemplates.py b/spaces/dekk-i386/pdflangchain/htmlTemplates.py
deleted file mode 100644
index 9f0e6496058299100f75cb3b121be84c077e723e..0000000000000000000000000000000000000000
--- a/spaces/dekk-i386/pdflangchain/htmlTemplates.py
+++ /dev/null
@@ -1,44 +0,0 @@
-css = '''
-
-<h2>%(title)s</h2>
-
-'''
-
-DOC_HEADER_EXTERNALCSS = '''\
-<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
-   "http://www.w3.org/TR/html4/strict.dtd">
-<html>
-<head>
-  <title>%(title)s</title>
-  <meta http-equiv="content-type" content="text/html; charset=%(encoding)s">
-  <link rel="stylesheet" href="%(cssfile)s" type="text/css">
-</head>
-<body>
-<h2>%(title)s</h2>
-
-'''
-
-DOC_FOOTER = '''\
-</body>
-</html>
-'''
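For context, a sketch of how header/footer templates like these are typically consumed once a formatter's `full` option is set. The `%`-interpolation keys match the placeholders visible above; the values and the simple concatenation are illustrative stand-ins for the formatter's real wrapping logic:

```python
# Illustrative only: fill the placeholders seen in the templates above.
page = DOC_HEADER_EXTERNALCSS % {
    "title": "Listing of example.py",  # hypothetical document title
    "encoding": "utf-8",
    "cssfile": "pygments.css",         # hypothetical stylesheet path
}
page += "<pre>...highlighted code...</pre>"
page += DOC_FOOTER
```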
-
-
-class HtmlFormatter(Formatter):
-    r"""
-    Format tokens as HTML 4 ``<span>`` tags. By default, the content is enclosed
-    in a ``<pre>`` tag, itself wrapped in a ``<div>`` tag (but see the `nowrap` option).
-    The ``<div>``'s CSS class can be set by the `cssclass` option.
-
-    If the `linenos` option is set to ``"table"``, the ``<pre>`` is
-    additionally wrapped inside a ``<table>`` which has one row and two
-    cells: one containing the line numbers and one containing the code.
-    Example:
-
-    .. sourcecode:: html
-
-        <div class="highlight">
-        <table><tr>
-          <td class="linenos" title="click to toggle"
-            onclick="with (this.firstChild.style)
-                     { display = (display == '') ? 'none' : '' }">
-            <pre>1
-            2</pre>
-          </td>
-          <td class="source">
-            <pre><span class="Ke">def </span><span class="NaFu">foo</span>(bar):
-              <span class="Ke">pass</span>
-            </pre>
-          </td>
-        </tr></table></div>
-
-    (whitespace added to improve clarity).
-
-    A list of lines can be specified using the `hl_lines` option to make these
-    lines highlighted (as of Pygments 0.11).
-
-    With the `full` option, a complete HTML 4 document is output, including
-    the style definitions inside a ``<style>