diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Be2works Full Version How to Repair Laptop Batteries with Ease.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Be2works Full Version How to Repair Laptop Batteries with Ease.md deleted file mode 100644 index a842cddffe7105cd369e31fbc1f7f6c99bd541a3..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Be2works Full Version How to Repair Laptop Batteries with Ease.md +++ /dev/null @@ -1,122 +0,0 @@ - -

Download GX Developer 8.7 Full Crack

-

If you are looking for a reliable and powerful software to program and control Mitsubishi PLCs, you might want to try GX Developer. This software is widely used by engineers and technicians who work with Mitsubishi controllers in various industries. In this article, we will show you how to download GX Developer 8.7 full crack, which is the latest version of this software, and how to use it effectively.

-

download gx developer 8.7 full crack


Download ✪✪✪ https://byltly.com/2uKzsX



-

What is GX Developer?

-

GX Developer is the basic controller programming environment supporting Q Process, Q, L, FX Series and legacy controllers A and AnS Series. It is part of the MELSOFT series of software products developed by Mitsubishi Electric. It allows you to create, edit, debug and transfer programs for Mitsubishi PLCs using various programming languages, such as ladder logic, structured text, function block diagram, sequential function chart and MELSAP-L.

-

Features and benefits of GX Developer

-

Some of the features and benefits of GX Developer are:

- -

System requirements for GX Developer

-

The minimum system requirements for GX Developer are:

| Item | Minimum requirement |
| --- | --- |
| Operating system | Windows XP/Vista/7/8/10 (32-bit or 64-bit) |
| CPU | Pentium III 800 MHz or higher |
| Memory | 256 MB or more |
| HDD | 500 MB or more of free space |
| Display | XGA (1024 x 768) or higher resolution |
| Others | CD-ROM drive, mouse, keyboard, printer (optional) |
-

How to download GX Developer 8.7 full crack?

-

To download GX Developer 8.7 full crack, you need to follow these steps:

-


-

Step 1: Visit the official website of Mitsubishi Electric

-

The first step is to visit the official website of Mitsubishi Electric at https://www.mitsubishielectric.com/app/fa/download/search.do?mode=manual&kisyu=%2Fmelsoft. This is where you can find all the software products related to Mitsubishi controllers.

-

Step 2: Select the software category and GX Developer product

-

The next step is to select the software category and GX Developer product from the list. You can use the search box or the filters to narrow down your options. For example, you can type "GX Developer" in the search box or select "MELSOFT" as the large category and "GX series" as the small category. Then, you will see a list of manuals for different versions of GX Developer. You need to select "GX Developer Version 8 Operating Manual" as your desired product.

-

Step 3: Download the installation file and the crack file

-

The third step is to download the installation file and the crack file for GX Developer 8.7 full crack. You can click on the "Download" button next to each file name to start downloading them. The installation file is named "GXDEV807E.zip" and has a size of about 1 GB. The crack file is named "GXDEV807E_Crack.zip" and has a size of about 10 MB.

-

Step 4: Install GX Developer and apply the crack

-

The final step is to install GX Developer and apply the crack to activate it. You need to follow these sub-steps:

-
1. Extract both zip files using a tool like WinRAR or 7-Zip.
2. Run the setup.exe file from the extracted folder of "GXDEV807E.zip". Follow the instructions on the screen to complete the installation process.
3. Copy all the files from the extracted folder of "GXDEV807E_Crack.zip" and paste them into the installation folder of GX Developer. The default installation folder is "C:\Program Files (x86)\MELSOFT\GX Works2". Overwrite any existing files if prompted.
4. Run GX Developer from your desktop shortcut or start menu. You should see a message saying "License registration completed" on the bottom right corner of the screen.
5. Congratulations! You have successfully installed GX Developer 8.7 full crack on your computer.

How to use GX Developer 8.7?

-

To use GX Developer 8.7 effectively, you need to follow these steps:

-

Create a new project or open an existing one

-

The first step is to create a new project or open an existing one in GX Developer. You can do this by clicking on the "File" menu and selecting "New Project" or "Open Project". A project is a collection of files that contain your program, settings, comments and other information related to your controller. You need to specify a project name, a controller type, a communication method and other options when creating a new project.

-

Configure the controller settings and communication parameters

-

The next step is to configure the controller settings and communication parameters in GX Developer. You can do this by clicking on the "Project" menu and selecting "Parameter". A parameter is a value that determines how your controller operates or communicates with other devices. You need to set up parameters such as input/output assignments, device comments, memory allocation, network configuration, password protection and others according to your needs.

-

Write, edit and debug your program using various tools and languages

-

The third step is to write, edit and debug your program using various tools and languages in GX Developer. You can do this by clicking on the "Program" menu and selecting "Edit". An edit window will open where you can write your program using different programming languages such as ladder logic (LD), structured text (ST), function block diagram (FBD), sequential function chart (SFC) or MELSAP-L (ML). You can also use various tools such as syntax check, cross reference, comment input/output, device monitor/editor and others to help you write your program more efficiently.
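To give a feel for one of these languages, here is a minimal Structured Text (ST) sketch of the kind of logic you could enter in the edit window. The device names (X0, X1, Y0, D0) follow Mitsubishi addressing conventions, but the program itself is a hypothetical illustration, not an example taken from the manual:

```
(* Hypothetical IEC 61131-3 Structured Text sketch:
   X0 = start pushbutton input, X1 = stop pushbutton input,
   Y0 = motor output, D0 = data register used as a start counter. *)
IF X0 AND NOT X1 THEN
    IF NOT Y0 THEN
        D0 := D0 + 1;   (* count each new start *)
    END_IF;
    Y0 := TRUE;         (* energise and latch the motor output *)
ELSIF X1 THEN
    Y0 := FALSE;        (* stop overrides start *)
END_IF;
```

The same start/stop latch could equally be drawn as a two-rung ladder diagram in the same edit window.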

-

Transfer your program to your controller and monitor its operation

-

The final step is to transfer your program to your controller and monitor its operation in GX Developer. You can do this by clicking on the "Online" menu and selecting "Transfer To PLC" or "Transfer From PLC". A transfer window will open where you can select which files you want to transfer between your computer and your controller. You can also open a monitor window or a test window, where you can monitor or test the operation of your controller and your program. You can use various functions such as start/stop, force on/off, data change, data trace and others to control or observe your controller and your program.

-

Conclusion

-

In conclusion, GX Developer is a powerful and reliable software that allows you to program and control Mitsubishi PLCs. It has many features and benefits that make it easy and convenient to use. It supports various programming languages and communication protocols. It also has a built-in simulator and a data logging function. To download GX Developer 8.7 full crack, you need to visit the official website of Mitsubishi Electric, select the software category and GX Developer product, download the installation file and the crack file, install GX Developer and apply the crack. To use GX Developer 8.7 effectively, you need to create or open a project, configure the controller settings and communication parameters, write, edit and debug your program using various tools and languages, transfer your program to your controller and monitor its operation.

-

FAQs

-

Here are some frequently asked questions about GX Developer 8.7 full crack:

-
1. Q: Is it legal to download GX Developer 8.7 full crack?
   A: No, it is not legal. It is a violation of the software license agreement and intellectual property rights of Mitsubishi Electric. You should only download GX Developer from the official website of Mitsubishi Electric and pay for the license fee.
2. Q: Is it safe to download GX Developer 8.7 full crack?
   A: No, it is not safe. It may contain viruses, malware or spyware that can harm your computer or steal your personal information. You should only download GX Developer from the official website of Mitsubishi Electric and scan it with a reliable antivirus software.
3. Q: Is it compatible with Windows 10?
   A: Yes, it is compatible with Windows 10. However, you may need to run it in compatibility mode or as an administrator if you encounter any problems.
4. Q: How can I update GX Developer 8.7?
   A: You can update GX Developer 8.7 by visiting the official website of Mitsubishi Electric and downloading the latest version of GX Developer or the update patch file. You should uninstall the previous version of GX Developer before installing the new one.
5. Q: How can I get technical support for GX Developer 8.7?
   A: You can get technical support for GX Developer 8.7 by contacting Mitsubishi Electric or its authorized distributors or service providers in your region. You can also refer to the online help system or the user manual of GX Developer for more information.

\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Boris FX Optics A Comprehensive Review of Features Benefits and Drawbacks.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Boris FX Optics A Comprehensive Review of Features Benefits and Drawbacks.md deleted file mode 100644 index ef404175a3d7023c18048f43582f2eb35f99a050..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Boris FX Optics A Comprehensive Review of Features Benefits and Drawbacks.md +++ /dev/null @@ -1,43 +0,0 @@ - -

Boris FX Optics Review: A Powerful Plugin for Photo Editing

-

Boris FX Optics is a plugin that allows you to apply cinematic effects and filters to your photos in Photoshop, Lightroom, or as a standalone application. It offers over 160 filters and thousands of presets that can transform your images into stunning artworks.

-

In this Boris FX Optics review, we will take a look at some of the features and benefits of this plugin, as well as some of the drawbacks and limitations. We will also show you some examples of how you can use Boris FX Optics to enhance your photos.

-

boris fx optics review


Download Filehttps://byltly.com/2uKwBd



-

Features and Benefits of Boris FX Optics

-

Boris FX Optics is a plugin that is designed to give you creative control over your photos. It lets you apply various effects and filters that can change the mood, tone, color, and style of your images. Some of the features and benefits of Boris FX Optics are:

- -

Drawbacks and Limitations of Boris FX Optics

-

Boris FX Optics is a plugin that is not without its flaws and limitations. Some of the drawbacks and limitations of Boris FX Optics are:

- -

Examples of Using Boris FX Optics

-

Boris FX Optics is a plugin that can help you create stunning photos with cinematic effects and filters. Here are some examples of how you can use Boris FX Optics to enhance your photos:

-

Example 1: Adding Film Grain and Color Grading

-

In this example, we will use Boris FX Optics to add some film grain and color grading to a portrait photo. Here are the steps:

-
1. Open the photo in Photoshop and launch Boris FX Optics from the Filter menu.
2. Select the Film Stocks filter from the Film Lab category.
3. Choose a preset that suits your style. For this example, we will use Kodak Portra 400 VC.
4. Adjust the amount of film grain using the Grain slider.
5. Adjust the color grading using the Color Correction sliders.
6. Click OK to apply the effect and return to Photoshop.

Here is the before and after comparison:

-

[Before and after comparison images]

Example 2: Adding Lens Flare and Vignette

-

In this example, we will use Boris FX Optics to add some lens flare and vignette to a landscape photo. Here are the steps:
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Convert Videos to Any Device with Freemake Video Converter Gold 4.1.9.16 Portable Torrent.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Convert Videos to Any Device with Freemake Video Converter Gold 4.1.9.16 Portable Torrent.md deleted file mode 100644 index 3fc29c670085529db5ef3ad68e67c0d21735b8cf..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Convert Videos to Any Device with Freemake Video Converter Gold 4.1.9.16 Portable Torrent.md +++ /dev/null @@ -1,200 +0,0 @@ -

Freemake Video Converter Gold 4.1.9.16 Portable Torrent: A Powerful and Versatile Video Converter

-

If you are looking for a reliable and easy-to-use video converter that can handle any video format, device, or platform, you might want to check out Freemake Video Converter Gold 4.1.9.16 Portable Torrent.

-

This is a portable version of one of the most popular video converters on the market, which means you can run it from any USB drive or external hard drive without installing it on your computer.

-

Freemake Video Converter Gold 4.1.9.16 Portable Torrent


Download File ✑ ✑ ✑ https://byltly.com/2uKw0f



-

In this article, we will show you what Freemake Video Converter Gold 4.1.9.16 Portable Torrent can do for you, how to download and install it, and how to use it for various video conversion tasks.

-

What is Freemake Video Converter Gold 4.1.9.16 Portable Torrent?

-

Freemake Video Converter Gold 4.1.9.16 Portable Torrent is a software that allows you to convert video files from one format to another, as well as rip and burn DVDs and Blu-rays, edit videos, add subtitles, and upload videos online.

-

It supports over 200 input formats, including AVI, MP4, MKV, WMV, MPG, 3GP, SWF, FLV, MOV, DV, RM, QT, TS, MTS, etc.

-

-

It also supports output formats for various devices and platforms, such as iPod, iPhone, iPad, PSP, Android, BlackBerry, Nokia, Xbox, Apple TV, etc.

-

You can also convert videos to HTML5 formats (Ogg, WebM, H.264) for modern web browsers.

-

Freemake Video Converter Gold 4.1.9.16 Portable Torrent is fast and efficient thanks to its integrated CUDA and DXVA technologies that optimize the conversion process and reduce CPU usage.

-

It also has a gold pack feature that unlocks some exclusive options such as automatic black bar removal, custom DVD menus, backup function, advanced preset editor, etc.

-

How to download and install Freemake Video Converter Gold 4.1.9.16 Portable Torrent

-

To download Freemake Video Converter Gold 4.1.9.16 Portable Torrent, you need a torrent client such as uTorrent or BitTorrent.

-

You can find the torrent file on various websites such as nsaneforums.com or youngworldforum.forumfree.it.

-

Once you have downloaded the torrent file, open it with your torrent client and choose where to save the portable version of Freemake Video Converter Gold 4.1.9.16.

-

The download size is about 30 MB.

-

After the download is complete, you can run Freemake Video Converter Gold 4.1.9.16 Portable by double-clicking on the executable file (FreemakeVideoConverterPortable.exe).

-

You don't need to install anything on your computer or register any account.

-

How to use Freemake Video Converter Gold 4.1.9.16 Portable Torrent

-

How to add video files and choose output formats and settings

-

To add video files to Freemake Video Converter Gold 4.1.9.16 Portable Torrent, you can either drag and drop them from your computer or click on the +Video button at the top left corner of the interface.

-

You can also add audio files (MP3, AAC, WMA, WAV) or image files (JPG, BMP, PNG,GIF) if you want to create a slideshow or a music video.

-

Once you have added your files, you can choose the output format from the bottom row of icons.

-

You can either select a specific device or platform (such as iPhone or YouTube) or a general format (such as AVI or MP3).

-

You can also click on the cogwheel icon next to each format icon to customize some settings such as resolution, bitrate, frame rate, codec, etc.

-

How to convert videos to various devices and platforms

-

If you want to convert videos for a specific device or platform (such as iPod or Facebook), you just need to select it from the bottom row of icons.

-

Freemake Video Converter Gold 4.1.9.16 Portable Torrent will automatically adjust the output parameters according to the optimal compatibility and quality standards.

-

You can also preview how your video will look like on your device or platform by clicking on the play button next to each format icon.

-

To start the conversion process, click on the Convert button at the bottom right corner of the interface.

-

You can choose where to save your converted files or open them directly after conversion.

-

How to rip and burn DVDs and Blu-rays

-

If you want to rip an unprotected DVD or Blu-ray disc (or an ISO image or a DVD folder) into a video file format (such as AVI or MP4), you just need to click on the +DVD button at the top left corner of the interface.

-

Then select your source disc (or image or folder) from your computer or optical drive.

-

Then choose your output format from the bottom row of icons (you can also select Blu-ray if you want to create a Blu-ray disc out of your source disc).

-

You can also edit some settings such as title selection, language selection, subtitle selection, etc by clicking on the cogwheel icon next to each format icon.

-

To start the ripping process, click on the Convert button at the bottom right corner of the interface. You can choose where to save your ripped files or open them directly after ripping.

-

If you want to burn a video file format (such as AVI or MP4) to a DVD or Blu-ray disc (or an ISO image or a DVD folder), just click on the +Video button at the top left corner of the interface, add your video files from your computer, and then choose DVD or Blu-ray from the bottom row of icons. You can also create custom DVD menus by clicking on the Menu button next to each format icon: choose from various templates, add your own background image, title, etc. To start the burning process, click on the Burn button at the bottom right corner of the interface. You can choose where to save your burned files or open them directly after burning.

-

How to edit videos and add subtitles

-

If you want to edit your videos before converting them, you can use the built-in video editor by clicking on the scissors icon next to each video file in the list. You can perform various editing tasks such as trimming, cropping, rotating, flipping, joining, etc. You can also add transitions, effects, watermarks, etc. by clicking on the respective buttons at the top right corner of the editor window. To apply your changes, click the OK button at the bottom right corner of the editor window; to cancel them, click the Cancel button at the bottom left corner.

-

If you want to add subtitles to your videos before converting them, you can use the built-in subtitle tool by clicking on the SRT icon next to each video file in the list. You can either import external subtitle files (SSA/SRT/ASS) from your computer or search for subtitles online by clicking on the respective buttons at the top right corner of the subtitle window. You can also adjust settings such as font size, color, position, etc. by clicking on the cogwheel icon at the top left corner of the subtitle window. To apply your changes, click the OK button at the bottom right corner of the subtitle window; to cancel them, click the Cancel button at the bottom left corner.

-

How to upload videos online

-

If you want to upload your videos online after converting them, you can use the built-in uploader by clicking on the Upload button at the top right corner of the interface.

-

You can choose from various websites such as YouTube, Facebook, Vimeo, Dailymotion, etc and enter your account details and video information.

-

You can also adjust some settings such as video quality, privacy options, tags, etc by clicking on cogwheel icon next to each website icon.

-

To start the uploading process, click on the Upload button at the bottom right corner of the interface. You can monitor the progress and status of your uploads and open the videos directly after uploading.

-

Conclusion

-

Freemake Video Converter Gold 4.1.9.16 Portable Torrent is a powerful and versatile video converter that can handle any video format, device, or platform.

-

It is also fast and efficient thanks to its integrated CUDA and DXVA technologies that optimize the conversion process and reduce CPU usage.

-

It also has a gold pack feature that unlocks some exclusive options such as automatic black bar removal, custom DVD menus, backup function, advanced preset editor, etc.

-

It is also easy to use and intuitive thanks to its simple and user-friendly interface and its built-in tools for editing, adding subtitles, and uploading videos online.

-

If you want to try out Freemake Video Converter Gold 4.1.9.16 Portable Torrent for yourself, you can download it from various websites such as nsaneforums.com or youngworldforum.forumfree.it using a torrent client such as uTorrent or BitTorrent.

-

You don't need to install anything on your computer or register any account. You just need to run it from any USB drive or external hard drive and start converting your videos to any format, device, or platform you want.

-

So what are you waiting for? Download Freemake Video Converter Gold 4.1.9.16 Portable Torrent today and enjoy the best video conversion experience ever!

-

FAQs

-
1. Q: What are the system requirements for Freemake Video Converter Gold 4.1.9.16 Portable Torrent?

   A: The system requirements are:
   - Windows Vista/7/8/10
   - .NET Framework 4.0 or higher
   - 256 MB or more RAM
   - 50 MB of free hard disc space
   - A stable Internet connection for video download and YouTube upload
   - A DVD-ROM drive for burning DVDs
   - A BD-ROM drive for burning Blu-rays

2. Q: What is the difference between the free version and the gold pack?

   A: The free version of Freemake Video Converter allows you to convert video files to various formats, devices, and platforms, as well as rip and burn DVDs and Blu-rays, edit videos, add subtitles, and upload videos online. The gold pack is a set of unique features that enhance the free version with some exclusive options such as automatic black bar removal, custom DVD menus, backup function, advanced preset editor, etc.

3. Q: How do you activate the gold pack features?

   A: To activate the gold pack features, you need to purchase a license key from the official website http://www.freemake.com/packs/gold_freemake_video_converter/. Then enter the license key in the program by clicking on Help -> Get Gold Pack -> I already have a key -> Enter key -> OK.

4. Q: How do you update Freemake Video Converter Gold 4.1.9.16 Portable Torrent?

   A: Download the latest version from the official website http://www.freemake.com/free_video_converter/, then overwrite the old executable file (FreemakeVideoConverterPortable.exe) with the new one in your portable folder.

5. Q: How do you contact the support team or report a problem?

   A: To contact the support team or report a problem with Freemake Video Converter Gold 4.1.9.16 Portable Torrent, you can use the following methods:

\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bajrangi Bhaijaan 2015 720p DvdRip.md b/spaces/1gistliPinn/ChatGPT4/Examples/Bajrangi Bhaijaan 2015 720p DvdRip.md deleted file mode 100644 index fa73e9d6259801b9a4ca372813262d7f045ad0cb..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Bajrangi Bhaijaan 2015 720p DvdRip.md +++ /dev/null @@ -1,26 +0,0 @@ -

Bajrangi Bhaijaan 2015 720p DvdRip


Download Filehttps://imgfil.com/2uy1i6



At the end of the DVD there is information about the search code. In the beginning of the movie starts a credit: Bajrangi Bhaijaan 720p DvDRip x264 English Subtitle Movie Samples. No charge, no catches, no hassle, we will not share your information with anyone. The mbox file is about 9 gigabytes and contains an index of the contents, like a DVD table of contents.

This release supports Matroska movies only, and does not support subtitles, codecs and other features.

It also does not have a working DVD menu. This means that it cannot be played directly on any DVD player. It must be copied to a DVD media first. The subtitle files were not encoded with the same compression ratio as the main movie, so they are not as small as usual.

Bajrangi Bhaijaan: Story, Cast, Overview, 2015

Director: Kabir Khan

Writer: Kabir Khan

Starring: Akshay Kumar, Nawazuddin Siddiqui, Paresh Rawal, Sonakshi Sinha, Varun Sharma, Ritesh Deshmukh, and Manoj Pahwa. Special appearance by the then Bihar Chief Minister Nitish Kumar, Rajnath Singh, Arvind Kejriwal and Prakash Karat.

Plot Summary:

During the 1990s, in the town of Nagpur in the Indian state of Maharashtra, two men were competing for the love of a girl: a poor Muslim named Babban Ashfaq, who worked as a guard in a printing factory and could not afford a dowry, and a rich Hindu named Lallu Singhal, who owned a printing business and could afford one.

When the girl, Priyanka, turned 21, her father approached a prominent crime lord, Harshvardhan Ganga, who in turn introduced him to a mafioso, Baba Kamlakar Dhanpat. Baba Kamlakar Dhanpat gave the poor man a dowry of his own, but the Hindu was not satisfied.

Then a rival gang, the Bajrangi, came to the rescue. They were a band of six thieves who specialized in stealing jewellery and money. They smuggled Baba Kamlakar Dhanpat's money to Switzerland, where it was kept in a safe deposit box. Baba Kamlakar Dhanpat then drove to Mumbai
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Biologija Pries Egzamina Knyga Pdf 44.md b/spaces/1gistliPinn/ChatGPT4/Examples/Biologija Pries Egzamina Knyga Pdf 44.md deleted file mode 100644 index cf8dd37324a2879b8c7aa5e4f552405f5eec385a..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Biologija Pries Egzamina Knyga Pdf 44.md +++ /dev/null @@ -1,32 +0,0 @@ - -

Biologija Pries Egzamina Knyga Pdf 44: A Review of the Best Biology Book for Exam Preparation

- -

If you are looking for a biology book that can help you ace your exams, you might want to check out Biologija Pries Egzamina Knyga Pdf 44. This book is a collection of 100 professional tips for biology exams, written by experts in the field. It covers topics such as genetics, ecology, evolution, cell biology, anatomy, physiology, and more. It also provides practice questions, answers, and explanations for each topic.

-

Biologija Pries Egzamina Knyga Pdf 44


Download File 🆓 https://imgfil.com/2uxYBo



- -

Biologija Pries Egzamina Knyga Pdf 44 is a digital book that you can download and read on your computer or mobile device. It is available in Lithuanian, and it has been praised by many students and teachers for its clarity, accuracy, and relevance. It is designed to help you prepare for the national biology exam in Lithuania, as well as for other biology tests and competitions.

- -

In this article, we will review Biologija Pries Egzamina Knyga Pdf 44 and tell you why it is one of the best biology books for exam preparation. We will also give you some tips on how to use it effectively and where to get it online.

- -

Why Biologija Pries Egzamina Knyga Pdf 44 is a Great Biology Book for Exam Preparation

- -

There are many reasons why Biologija Pries Egzamina Knyga Pdf 44 is a great biology book for exam preparation. Here are some of them:

-

- - - -

How to Use Biologija Pries Egzamina Knyga Pdf 44 Effectively

- -

To get the most out of Biologija Pries Egzamina Knyga Pdf 44, here are some tips on how to use it effectively:

- - -

As you explore the biomes, you will encounter different mobs, items, and dungeons. Some of the mobs are friendly and can be tamed or ridden, such as flying pigs, sheepuffs, moas, and aerwhales. Some of the mobs are hostile and will attack you on sight, such as zephyrs, cockatrices, tempests, and stormbringers. Some of the mobs are neutral and will only attack you if provoked, such as aerbunnies, phyg, and kirrid.

-

Some of the items are useful and can help you in your adventure, such as golden parachutes, cloud staffs, dart shooters, and gravitite gloves. Some of the items are rare and can only be obtained by defeating bosses or completing dungeons, such as valkyrie lances, sun altars, slider keys, and necromancer staffs.

-

Some of the dungeons are easy and can be completed by anyone, such as bronze dungeons. Some of the dungeons are hard and require skill and strategy, such as silver dungeons. Some of the dungeons are epic and require teamwork and preparation, such as gold dungeons and slider's labyrinth.

-

Step 5: Fight Bosses and Earn Rewards

-

The Aether is not only a place of exploration and discovery, but also a place of challenge and reward. There are four bosses in the Aether that you can fight and earn rewards from. They are:

- -

These bosses are not easy to defeat. You need to prepare well before you challenge them. You need to have good weapons, armor, food, potions, and other items that can help you in combat. You also need to have a good strategy and know the weaknesses and strengths of each boss. You can find more tips and guides on how to fight the bosses online or by asking other players.

-

If you manage to defeat the bosses, you will be rewarded with some of the best items in the mod. You can use these items to enhance your gameplay, or to trade with other players. You can also brag about your achievements and show off your trophies.

-

Conclusion

-

GTA SA Aether 2 is a mod that adds a new dimension to Grand Theft Auto: San Andreas, where you can explore a sky realm full of floating islands, mythical creatures, and dungeons. It is a mod that combines the worlds of GTA and Minecraft, and offers a lot of features, challenges, and rewards.

-

To play GTA SA Aether 2, you need to download and install some files and software, such as Minecraft Forge, Aether II, and Gilded Games Util. Then you need to build an Aether portal from a glowstone frame, activate it with a bucket of water, and enter it to travel to the Aether dimension. There you can gather skyroot logs and make basic tools, explore the Aether biomes, mobs, items, and dungeons, and fight the bosses and earn rewards.

-

GTA SA Aether 2 is a mod that is worth playing if you are a fan of GTA or Minecraft, or both. It is a mod that will give you a new perspective on San Andreas, and a new adventure in the Aether. It is a mod that will make you fall in love with San Andreas all over again.

-

FAQs

-

Here are some frequently asked questions about GTA SA Aether 2:

-

-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Brotato The Ultimate Potato Shooter Roguelite for iPhone.md b/spaces/fatiXbelha/sd/Brotato The Ultimate Potato Shooter Roguelite for iPhone.md deleted file mode 100644 index 3fd7dda198ab860bb88cce0661d4911b62be3e8a..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Brotato The Ultimate Potato Shooter Roguelite for iPhone.md +++ /dev/null @@ -1,98 +0,0 @@ -
-

Brotato: A Fun and Challenging Shooter Roguelite Game for iPhone

-

If you are looking for a new and exciting game to play on your iPhone, you might want to check out Brotato. Brotato is a top-down arena shooter roguelite where you play as a potato wielding up to 6 weapons at a time to fight off hordes of aliens. You can choose from a variety of traits and items to create unique builds and survive until help arrives. In this article, we will tell you more about what Brotato is, how to download it on your iPhone, what are the benefits of playing it, and what are some alternatives to it.

-

brotato download iphone


Download Zip · https://urllie.com/2uNHyD



-

What is Brotato?

-

Brotato is a game developed by Erabit Studios, a Singapore-based indie game studio. It was released in June 2023 for iOS, Android, and Steam platforms. It has received overwhelmingly positive reviews from players and critics alike, who praised its fast-paced action, colorful graphics, humorous tone, and addictive gameplay.

-

The story behind Brotato

-

The game is set in a distant future where potatoes have evolved into intelligent beings and have colonized other planets. However, one day, an alien invasion threatens their peaceful existence. The sole survivor of the attack is Brotato, the only potato capable of handling 6 weapons at the same time. Waiting to be rescued by his mates, Brotato must survive in this hostile environment and fend off the alien menace.

-

The gameplay of Brotato

-

Brotato is a shooter roguelite, which means that each run is different and randomized. You start with a basic weapon and a random trait that affects your stats and abilities. You can find more weapons and items as you progress through the waves of enemies, but you also face more challenges and dangers. You can also unlock more characters with different traits that can change your playstyle. The game has an auto-firing option by default, but you can also manually aim if you prefer. The game has fast runs that last under 30 minutes, so you can enjoy a quick session anytime.

-

The features of Brotato

-

Some of the features that make Brotato stand out are:

- -

How to download Brotato on iPhone?

-

If you want to play Brotato on your iPhone, you have two options: download it from the App Store or from the official website.

-

Download from the App Store

-

The easiest way to get Brotato on your iPhone is to download it from the App Store. Here are the steps to do so:

-

- Open the App Store app on your iPhone and tap on the search icon at the bottom right corner - Type "Brotato" in the search bar and tap on the first result that appears - Tap on the "Get" button to start downloading the game. You might need to enter your Apple ID and password or use Touch ID or Face ID to confirm the download - Wait for the download to finish and then tap on the "Open" button to launch the game - Enjoy playing Brotato on your iPhone!

Download from the official website

-

Another way to get Brotato on your iPhone is to download it from the official website of Erabit Studios. Here are the steps to do so:

-- Go to https://erabit.com/brotato/ on your iPhone's browser and scroll down to the bottom of the page.
- Tap on the "Download for iOS" button and you will be redirected to a page with a QR code.
- Scan the QR code with your iPhone's camera or a QR code scanner app and you will be taken to a page where you can download the game.
- Tap on the "Install" button and then tap on "Allow" when prompted to install a profile on your device.
- Go to Settings > General > Profile & Device Management and tap on the profile named "Erabit Studios".
- Tap on "Trust Erabit Studios" and then tap on "Trust" again to confirm.
- Go back to your home screen and you will see the Brotato icon. Tap on it to launch the game.
- Enjoy playing Brotato on your iPhone!

What are the benefits of playing Brotato?

-

Brotato is not only a fun and entertaining game, but also a beneficial one. Here are some of the benefits of playing Brotato:

-

Improve your reflexes and strategy skills

-

Brotato is a fast-paced game that requires quick thinking and reaction. You have to dodge bullets, avoid traps, and shoot enemies while managing your weapons, items, and health. You also have to plan ahead and choose the best traits and items for your build. Playing Brotato can help you improve your reflexes and strategy skills, which can be useful in other aspects of life.

-

Enjoy a variety of characters, items, and weapons

-

Brotato is a game that offers a lot of variety and replay value. You can play as different characters with different traits that affect your gameplay. You can also find hundreds of items and weapons that can change your abilities and performance. You can mix and match different combinations to create unique builds and experiences. Playing Brotato can help you enjoy a variety of characters, items, and weapons, which can keep you entertained for hours.

-

Compete with your friends and other players

-

Brotato is a game that supports Game Center, which means you can challenge your friends and other players online. You can compare your scores, achievements, and rankings with others and see who is the best Brotato player. You can also chat with other players and share tips and tricks. Playing Brotato can help you compete with your friends and other players, which can make you more motivated and social.

-

What are some alternatives to Brotato?

-

If you like Brotato, you might also like some of these alternatives:

-

Super Kill-BOI 9000

-

Super Kill-BOI 9000 is another top-down arena shooter roguelite where you play as a cyborg killing machine who has to survive waves of enemies in a futuristic dystopia. You can upgrade your weapons, abilities, and stats as you progress through the levels. The game has retro-style graphics, synthwave music, and dark humor.

-

Nordic Ashes

-

Nordic Ashes is a top-down action-adventure roguelite where you play as a Viking warrior who has to explore a procedurally generated world full of Norse mythology. You can collect runes, artifacts, and weapons that grant you different powers and effects. The game has pixel-art graphics, atmospheric soundtracks, and epic boss battles.

-

Stickman's Arena

-

Stickman's Arena is a top-down multiplayer shooter where you play as a stickman who has to fight against other stickmen in various arenas. You can customize your stickman with different skins, hats, outfits, and weapons. The game has simple graphics, catchy music, and chaotic gameplay.

-

Conclusion

-

Brotato is a fun and challenging shooter roguelite game for iPhone that you should definitely try out. It has a captivating story, addictive gameplay, colorful graphics, humorous tone, and tons of variety. You can download it from the App Store or from the official website of Erabit Studios. You can also enjoy the benefits of playing Brotato, such as improving your reflexes and strategy skills, enjoying a variety of characters, items, and weapons, and competing with your friends and other players. If you are looking for some alternatives to Brotato, you can try out Super Kill-BOI 9000, Nordic Ashes, or Stickman's Arena. We hope you have fun playing Brotato and other similar games on your iPhone.

-

FAQs

-

Here are some frequently asked questions about Brotato and its answers:

-

Q: How much does Brotato cost?

-

A: Brotato is a free-to-play game, which means you can download and play it without paying anything. However, the game offers in-app purchases that allow you to get more spuds (the in-game currency) or unlock premium features such as removing ads, getting more characters, or getting more materials.

-

Q: Is Brotato compatible with my iPhone?

-

A: Brotato requires iOS 10.0 or later and is compatible with iPhone 5S or newer models. You can check the compatibility of your device by going to the App Store page of Brotato and scrolling down to the "Compatibility" section.

-

Q: Is Brotato safe to download and play?

-

A: Yes, Brotato is safe to download and play. The game has been tested and verified by Apple and does not contain any viruses or malware. The game also has a privacy policy that explains how the developer collects and shares your data. You can read the privacy policy by going to the App Store page of Brotato and tapping on the "Privacy Policy" link.

-

Q: How can I contact the developer of Brotato?

-

A: If you have any questions, feedback, or issues regarding Brotato, you can contact the developer of Brotato by emailing them at support@erabit.com or by visiting their website at https://erabit.com/contact/.

-

Q: How can I learn more about Brotato?

-

A: If you want to learn more about Brotato, you can visit the official website of Erabit Studios at https://erabit.com/brotato/, where you can find more information, screenshots, videos, and news about the game. You can also follow them on social media platforms such as Facebook, Twitter, Instagram, and YouTube.

-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Build Manage and Customize Your City with SimCity APK Mod.md b/spaces/fatiXbelha/sd/Build Manage and Customize Your City with SimCity APK Mod.md deleted file mode 100644 index 2d8242b290a0b8fe78ff1aa2ef2544955bccfc26..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Build Manage and Customize Your City with SimCity APK Mod.md +++ /dev/null @@ -1,117 +0,0 @@ - -

Download SimCity APK Mod: How to Build Your Dream City with Unlimited Resources

-

Do you love playing city-building games? Do you want to create your own metropolis with unlimited possibilities? If yes, then you should try SimCity, one of the most popular and addictive games in this genre. SimCity is a simulation game where you can design, build, and manage your own city. You can choose from various types of buildings, roads, parks, landmarks, and services to make your city unique and attractive. You can also deal with various challenges such as traffic, pollution, crime, disasters, and more.

-

download simcity apk mod


Download File ……… https://urllie.com/2uNvGe



-

However, building your dream city is not easy. You need to have enough resources such as money, materials, and energy to construct and upgrade your buildings. You also need to balance your budget and keep your citizens happy. Sometimes, you may feel frustrated or bored by the slow progress or the limited options in the game. That's why many players look for ways to hack or mod the game to get unlimited resources and unlock all the features.

-

In this article, we will show you how to download SimCity APK mod, a modified version of the game that gives you unlimited money, unlocked all buildings and items, and no ads. We will also tell you the pros and cons of using this mod, and answer some frequently asked questions about it. So, if you are ready to build your dream city with ease, read on!

-

What is SimCity APK Mod?

-

SimCity APK mod is a modified version of the original SimCity game that you can download and install on your Android device. It is not an official app from the game developer, but a third-party app that has been modified by some hackers or modders. The main purpose of this mod is to give you unlimited resources and unlock all the features in the game. This way, you can build your city without any restrictions or limitations.

-

Features of SimCity APK Mod

-

Here are some of the features that you can enjoy when you download SimCity APK mod:

-

Unlimited Money

-

Money is the most important resource in SimCity. You need money to buy buildings, roads, services, and other items. You also need money to upgrade your buildings and improve your city's performance. However, earning money in the game is not easy. You have to collect taxes from your citizens, complete quests, sell items, and more. Sometimes, you may run out of money or spend more than you earn.

-

With SimCity APK mod, you don't have to worry about money anymore. You will have unlimited money in your account that you can use for anything you want. You can buy any building or item that you like, upgrade them to the max level, and expand your city as much as you want. You don't have to wait for hours or days to earn enough money for your next project.

-

Unlocked All Buildings and Items

-

Another feature of SimCity APK mod is that it unlocks all the buildings and items in the game. Normally, you have to unlock them by leveling up, completing achievements, or spending real money. Some of the buildings and items are very expensive or hard to get. For example, some of the landmarks such as Eiffel Tower, Statue of Liberty, or Big Ben cost thousands of SimCash (the premium currency in the game) or require a lot of materials and energy to build. Some of the items such as specializations, disasters, or regions are only available for a limited time or in certain events.

-

With SimCity APK mod, you can access all the buildings and items in the game without any restrictions. You can build any landmark or specialization that you want, unleash any disaster or scenario that you like, and explore any region or map that you prefer. You can also customize your city with various themes, styles, and decorations. You can make your city look like Paris, Tokyo, London, or any other place in the world.

-

-

No Ads

-

The last feature of SimCity APK mod is that it removes all the ads in the game. Ads are annoying and distracting, especially when you are trying to enjoy your game. They can also slow down your device or consume your data. Sometimes, they may even contain viruses or malware that can harm your device or steal your information.

-

With SimCity APK mod, you can play the game without any ads. You don't have to watch any videos or click on any banners to get extra rewards or bonuses. You don't have to worry about any pop-ups or redirects that may interrupt your game or affect your device. You can have a smooth and uninterrupted gaming experience.

-

How to Download and Install SimCity APK Mod?

-

Now that you know the features of SimCity APK mod, you may be wondering how to download and install it on your device. Here are the steps that you need to follow:

-

Step 1: Enable Unknown Sources

-

Since SimCity APK mod is not an official app from the Google Play Store, you need to enable unknown sources on your device to install it. This is a security setting that prevents unauthorized apps from being installed on your device. To enable unknown sources, go to your device's settings, then security, then unknown sources, and toggle it on.

-

Step 2: Download the APK File

-

Next, you need to download the APK file of SimCity APK mod from a reliable source. There are many websites that offer this mod, but not all of them are safe or trustworthy. Some of them may contain fake or outdated files that may not work or may harm your device. To avoid this, you should do some research and read some reviews before downloading the file.

-

One of the websites that we recommend is [SimCity APK Mod], which provides the latest and working version of the mod. You can download the file from this website by clicking on the download button and following the instructions. The file size is about 100 MB, so make sure you have enough space on your device and a stable internet connection.
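Before installing anything you downloaded outside the Play Store, it is worth checking that the file on your device is really the file the site intended to serve. A minimal sketch with sha256sum is shown below; note that the filename is a placeholder, and the idea only works if the download site actually publishes a SHA-256 hash to compare against:

```shell
# Sketch only: verifying a downloaded APK before installing it.
# "simcity-mod.apk" is a placeholder name; a real check compares the
# printed hash against the value published by the download site.
echo "stand-in for the real download" > simcity-mod.apk  # placeholder file for the sketch
sha256sum simcity-mod.apk  # prints "<64-hex-digit hash>  simcity-mod.apk"
```

If the printed hash does not match the one the site publishes, delete the file and do not install it.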

-

Step 3: Install the APK File

-

After downloading the file, you need to install it on your device. To do this, locate the file in your device's file manager and tap on it. You may see a warning message that says "This type of file can harm your device". Don't worry, this is just a standard message for unknown sources. Just tap on "OK" and proceed with the installation.

-

The installation process may take a few minutes, depending on your device's performance. Wait until it is finished and don't interrupt it.

-

Step 4: Launch the Game and Enjoy

-

Once the installation is done, you can launch the game and enjoy it. You will see a new icon on your home screen or app drawer that says "SimCity Mod". Tap on it and start playing. You will notice that you have unlimited money, unlocked all buildings and items, and no ads in the game. You can build your dream city with ease and have fun.

-

Pros and Cons of SimCity APK Mod

-

SimCity APK mod may sound like a perfect solution for those who want to play SimCity without any limitations or frustrations. However, like anything else in life, it has its pros and cons. Here are some of them:

-

Pros

-

Some of the advantages of using SimCity APK mod are:

-

More Fun and Creativity

-

With SimCity APK mod, you can have more fun and creativity in building your city. You can experiment with different types of buildings, roads, parks, landmarks, and services. You can also try different themes, styles, and decorations for your city. You can make your city look realistic or fantasy-like, modern or ancient, colorful or monochrome. The possibilities are endless.

-

No Need to Spend Real Money

-

Another advantage of using SimCity APK mod is that you don't need to spend real money to enjoy the game. You don't have to buy SimCash or Simoleons with your hard-earned cash. You don't have to watch ads or complete surveys to get free rewards or bonuses. You don't have to wait for hours or days to get enough resources for your next project. You can have everything you want for free.

-

No Annoying Ads

-

A third advantage of using SimCity APK mod is that you don't have to deal with annoying ads in the game. Ads can ruin your gaming experience and waste your time and data. They can also expose you to viruses or malware that can harm your device or steal your information. With SimCity APK mod, you can play the game without any ads. You can have a smooth and uninterrupted gaming experience.

-

Cons

-

Some of the disadvantages of using SimCity APK mod are:

-

Risk of Viruses and Malware

-

One of the risks of using SimCity APK mod is that you may download a file that contains viruses or malware. Since the mod is not an official app from the game developer, it may not be safe or trustworthy. Some of the websites that offer the mod may be malicious or fraudulent. They may infect your device with harmful software that can damage your device or steal your information. To avoid this, you should always download the mod from a reliable source and scan it with an antivirus app before installing it.

-

Possible Ban from the Official Game

-

Another risk of using SimCity APK mod is that you may get banned from the official game. The game developer may detect that you are using a modified version of the game and suspend or terminate your account. This means that you will lose all your progress and achievements in the game. You will also not be able to play online or connect with other players. To avoid this, you should always use the mod at your own risk and discretion.

-

Less Challenge and Satisfaction

-

A third risk of using SimCity APK mod is that you may lose the challenge and satisfaction of playing the game. The game is designed to be challenging and rewarding, where you have to work hard and smart to build your city. You have to plan, strategize, and manage your resources and budget. You have to deal with various problems and crises that may arise in your city. You have to earn your rewards and achievements by completing quests and goals.

-

With SimCity APK mod, you may lose the sense of accomplishment and enjoyment that comes from playing the game. You may feel bored or lazy by having everything handed to you on a silver platter. You may not appreciate the value or beauty of your city because you didn't work for it. You may not learn anything new or improve your skills because you didn't face any challenges or difficulties.

-

Conclusion

-

SimCity APK mod is a modified version of the original SimCity game that gives you unlimited resources and unlocks all the features in the game. It can be a great way to have more fun and creativity in building your city without any limitations or frustrations. However, it also has some risks and drawbacks that you should be aware of before using it.

-

If you decide to download SimCity APK mod, you should always do it from a reliable source and scan it with an antivirus app before installing it. You should also use it at your own risk and discretion, as you may get banned from the official game or lose the challenge and satisfaction of playing the game.

-

We hope this article has helped you understand what SimCity APK mod is, how to download and install it, and what are its pros and cons. If you have any questions or comments, feel free to leave them below.

-

FAQs

-

Here are some frequently asked questions about SimCity APK mod:

-

Is SimCity APK Mod Safe?

-

SimCity APK mod is not an official app from the game developer, but a third-party app that has been modified by some hackers or modders. It may not be safe or trustworthy, as it may contain viruses or malware that can harm your device or steal your information. To ensure your safety, you should always download the mod from a reliable source and scan it with an antivirus app before installing it.

-

Is SimCity APK Mod Legal?

-

SimCity APK mod is not legal, as it violates the terms and conditions of the game developer. It also infringes on their intellectual property rights and copyrights. By using the mod, you are breaking the law and may face legal consequences. To avoid this, you should always play the game with the original version and respect the game developer's rights and policies.

-

Does SimCity APK Mod Work Offline?

-

SimCity APK mod does not work offline, as it requires an internet connection to run. The game needs to connect to the game server and sync your data and progress. If you play the game offline, you may encounter errors or glitches, or lose your data or progress. To ensure a smooth and stable gaming experience, you should always play the game online with a good internet connection.

-

Can I Play SimCity APK Mod with Friends?

-

SimCity APK mod does not support multiplayer mode, as it is a modified version of the game that is not compatible with the official game. You cannot play the game with your friends or other players online, as you may get banned from the game server or face other issues. To enjoy the multiplayer mode, you should play the game with the original version and connect with your friends or other players through Facebook, Google Play, or Game Center.

-

Can I Update SimCity APK Mod?

-

SimCity APK mod does not support automatic updates, as it is a modified version of the game that is not linked to the Google Play Store. You cannot update the game through the app or the store, as you may lose the mod features or face other problems. To update the game, you have to download and install the latest version of the mod from a reliable source. However, you should be careful and backup your data before updating, as you may lose your data or progress in the process.

-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Age of Fantasy Mod APK and Become the Master of Fantasy.md b/spaces/fatiXbelha/sd/Download Age of Fantasy Mod APK and Become the Master of Fantasy.md deleted file mode 100644 index 2c835d693f1a2c9ce23b016103270eaac4ffe96b..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Age of Fantasy Mod APK and Become the Master of Fantasy.md +++ /dev/null @@ -1,88 +0,0 @@ - -

Download Age of Fantasy Mod APK: A Strategy Game with Epic Battles

-

If you are a fan of strategy games, you might want to check out Age of Fantasy, a turn-based game that lets you command different races and units in epic battles. And if you want to enjoy the game with more features and resources, you might want to download Age of Fantasy Mod APK, a modified version that gives you unlimited money, gems, and more. In this article, we will tell you what Age of Fantasy is, why you should download its mod APK, and how to do it.

-

What is Age of Fantasy?

-

Age of Fantasy is a strategy game developed by Zero Touch group, a team of indie developers who love pixel art and retro games. The game is inspired by classic games like Age of Empires, Warcraft, and Civilization, but with a fantasy twist. You can choose from different races, such as humans, elves, orcs, undead, dwarves, and more, each with their own unique units and abilities. You can also create your own maps and scenarios, or play online with other players in multiplayer mode. The game has a pixel art style and a retro soundtrack that will make you feel nostalgic.

-

download age of fantasy mod apk


DOWNLOAD https://urllie.com/2uNwCR



-

Features of Age of Fantasy

-

- Multiple races and units to choose from

-

Age of Fantasy has over 10 races to choose from, each with their own strengths and weaknesses. You can play as humans, elves, orcs, undead, dwarves, scaledfolk, merfolk, kobolds, trolls, goblins, and more. Each race has different units, such as warriors, archers, mages, cavalry, siege weapons, dragons, etc. You can also upgrade your units with new skills and abilities as you progress in the game.

-

- Customizable maps and scenarios

-

Age of Fantasy lets you create your own maps and scenarios with its map editor. You can design your own terrain, place buildings and resources, set the starting positions and objectives for each player, and add triggers and events to make your map more dynamic. You can also share your maps with other players online or download maps created by others.

-

- Online multiplayer and campaign mode

-

Age of Fantasy has an online multiplayer mode that lets you play with other players around the world. You can join or create rooms with different settings, such as map size, turn limit, fog of war, etc. You can also chat with other players and send them emojis. If you prefer to play solo, you can also try the campaign mode, which has over 100 missions to complete. The campaign mode will take you through different stories and scenarios involving different races and characters.

-

- Pixel art graphics and retro music

-

Age of Fantasy has a pixel art style that will remind you of the old-school games from the 90s. The game has colorful graphics and detailed animations that bring the fantasy world to life. The game also has a retro soundtrack that matches the mood and atmosphere of the game. You can enjoy the nostalgic sound effects and music while playing the game.

-

Why download Age of Fantasy Mod APK?

-

Age of Fantasy is a fun and addictive strategy game that will keep you entertained for hours. However, if you want to enjoy the game with more features and resources, you might want to download Age of Fantasy Mod APK. This is a modified version of the game that gives you some advantages that other players don't have. Here are some of the benefits of Age of Fantasy Mod APK:

-

Benefits of Age of Fantasy Mod APK

-

- Unlimited money and gems

-

Money and gems are the main currencies in Age of Fantasy. You need them to buy new units, upgrade your existing ones, and unlock new races. However, earning money and gems can be slow and tedious, especially if you want to get the best units and races. With Age of Fantasy Mod APK, you don't have to worry about that. You will get unlimited money and gems that you can use to buy anything you want in the game. You can also use them to speed up your progress and complete the missions faster.

-

- All races and units unlocked

-

Age of Fantasy has over 10 races to choose from, but not all of them are available at the start. You have to unlock them by completing certain missions or paying with gems. Some of the races are more expensive than others, and some of them are only available for a limited time. With Age of Fantasy Mod APK, you don't have to wait or pay to unlock any race or unit. You will have access to all of them from the beginning, and you can switch between them as you please. You can also try different combinations and strategies with different races and units.

-

download age of fantasy mod apk unlimited money
-download age of fantasy mod apk latest version
-download age of fantasy mod apk for android
-download age of fantasy mod apk for ios
-download age of fantasy mod apk free
-download age of fantasy mod apk offline
-download age of fantasy mod apk no ads
-download age of fantasy mod apk 1.1831
-download age of fantasy mod apk apkloli
-download age of fantasy mod apk rexdl
-download age of fantasy mod apk revdl
-download age of fantasy mod apk happymod
-download age of fantasy mod apk an1
-download age of fantasy mod apk android 1
-download age of fantasy mod apk apkpure
-download age of fantasy mod apk apkmody
-download age of fantasy mod apk mob.org
-download age of fantasy mod apk uptodown
-download age of fantasy mod apk hack
-download age of fantasy mod apk cheat
-download age of fantasy mod apk unlocked
-download age of fantasy mod apk premium
-download age of fantasy mod apk pro
-download age of fantasy mod apk full version
-download age of fantasy mod apk mega mod
-download age of fantasy mod apk unlimited gems
-download age of fantasy mod apk unlimited coins
-download age of fantasy mod apk unlimited resources
-download age of fantasy mod apk unlimited units
-download age of fantasy mod apk unlimited spells
-download age of fantasy mod apk unlimited upgrades
-download age of fantasy mod apk unlimited everything
-download age of fantasy mod apk god mode
-download age of fantasy mod apk one hit kill
-download age of fantasy mod apk high damage
-download age of fantasy mod apk no root
-download age of fantasy mod apk no verification
-download age of fantasy mod apk no survey
-download age of fantasy mod apk online
-download age of fantasy mod apk multiplayer

-

- No ads and no root required

-

Age of Fantasy is a free game, but it has ads that can interrupt your gameplay and annoy you. Some of the ads are also misleading and can lead you to download unwanted apps or malware. With Age of Fantasy Mod APK, you don't have to deal with any ads. You can enjoy the game without any distractions or risks. Moreover, you don't need to root your device to install Age of Fantasy Mod APK. You can simply download and install it like any other app, without compromising your device's security or warranty.

-

How to download and install Age of Fantasy Mod APK?

-

Now that you know the benefits of Age of Fantasy Mod APK, you might be wondering how to download and install it on your device. Don't worry, it's very easy and simple. Just follow these steps:

-

- Step 1: Download the APK file from a trusted source

-

The first thing you need to do is to download the APK file of Age of Fantasy Mod APK from a trusted source. Many websites offer modded apps, but not all of them are safe and reliable; some host files that contain viruses or malware that can harm your device or steal your personal information. To avoid that, only download the APK file from a reputable website with positive reviews and feedback from other users. You can also scan the file with an antivirus app before installing it.
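Besides scanning the file, you can verify its integrity by comparing its SHA-256 checksum against the one published by the download site, if it provides one. The sketch below demonstrates the mechanism with a dummy file; in practice you would replace `mod.apk` with your downloaded APK and `$expected` with the publisher's hash:

```shell
# Demonstration with a stand-in file; in practice, use the real APK
# and the checksum published by the download site.
printf 'demo' > mod.apk

# The publisher's hash (here computed locally just for the demo)
expected=$(sha256sum mod.apk | awk '{print $1}')

# Verify: prints "mod.apk: OK" only when the hashes match
# (note the two spaces required by the checksum-line format)
echo "$expected  mod.apk" | sha256sum -c -
```

If the check fails, the file was corrupted or tampered with and should not be installed.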

-

- Step 2: Enable unknown sources on your device

-

The next thing you need to do is to enable unknown sources on your device. This is a security setting that prevents you from installing apps from sources other than the official Google Play Store. Since Age of Fantasy Mod APK is not available on the Play Store, you need to enable this setting to install it. Go to your device's settings, then Security, then Unknown Sources, and toggle it on (on Android 8.0 and later, this is instead a per-app permission called "Install unknown apps"). You may see a warning message, but it is safe as long as you downloaded the APK file from a trusted source.

-

- Step 3: Install the APK file and launch the game

-

The final step is to install the APK file and launch the game. To do that, locate the downloaded APK file on your device's storage, then tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish. Once it's done, you can launch the game from your app drawer or home screen. You will see a mod menu where you can enable or disable the mod features as you wish. Enjoy!

-

Conclusion

-

Age of Fantasy is a strategy game that lets you command different races and units in epic battles. It has multiple features that make it fun and addictive, such as customizable maps, online multiplayer, pixel art graphics, and retro music. However, if you want to enjoy the game with more features and resources, you should download Age of Fantasy Mod APK, a modified version that gives you unlimited money, gems, all races and units unlocked, no ads, and no root required. You can download and install it easily by following our guide above.

-

If you liked this article, please share it with your friends who love strategy games. Also, let us know what you think about Age of Fantasy Mod APK in the comments below. Thank you for reading!

-

FAQs

Here are some of the frequently asked questions about Age of Fantasy Mod APK:

-

- Is Age of Fantasy Mod APK safe to download and install?

-

Yes, Age of Fantasy Mod APK is safe to download and install, as long as you get it from a trusted source. You should also scan the file with an antivirus app before installing it, just to be sure. Age of Fantasy Mod APK does not require root access, so it will not harm your device or void your warranty.

-

- Is Age of Fantasy Mod APK compatible with my device?

-

Age of Fantasy Mod APK is compatible with most Android devices that run on Android 4.1 or higher. However, some devices may have compatibility issues or performance problems due to different hardware specifications or software versions. If you encounter any problems while playing the game, you can try adjusting the settings or contacting the developer for support.

-

- How do I update Age of Fantasy Mod APK?

-

Age of Fantasy Mod APK is not available on the Google Play Store, so you will not receive automatic updates from there. However, you can check for updates from the website where you downloaded the APK file, or from the mod menu in the game. You can also follow the developer's social media accounts or join their community forums to get the latest news and updates about the game.

-

- Can I play online with other players using Age of Fantasy Mod APK?

-

Yes, you can play online with other players using Age of Fantasy Mod APK, but only with those who are using the same modded version as you. You cannot play with players who are using the original version or a different modded version, as they will have different game data and features. You can also create or join private rooms with your friends who are using the same modded version as you.

-

- Can I use Age of Fantasy Mod APK on PC?

-

Yes, you can use Age of Fantasy Mod APK on PC, but you will need an Android emulator to do so. An Android emulator is a software that allows you to run Android apps and games on your PC. There are many Android emulators available online, such as BlueStacks, NoxPlayer, LDPlayer, etc. You can choose one that suits your preferences and system requirements. Once you have installed an Android emulator on your PC, you can download and install Age of Fantasy Mod APK on it and play it like you would on your mobile device.

-
-
\ No newline at end of file diff --git a/spaces/fengmuxi/ChatGpt-Web/app/api/config/route.ts b/spaces/fengmuxi/ChatGpt-Web/app/api/config/route.ts deleted file mode 100644 index 2b3bcbf203e9cfbf671b3143dd51160cb3e1f812..0000000000000000000000000000000000000000 --- a/spaces/fengmuxi/ChatGpt-Web/app/api/config/route.ts +++ /dev/null @@ -1,26 +0,0 @@ -import { NextResponse } from "next/server"; - -import { getServerSideConfig } from "../../config/server"; - -const serverConfig = getServerSideConfig(); - -// Danger! Don not write any secret value here! -// 警告!不要在这里写入任何敏感信息! -const DANGER_CONFIG = { - needCode: serverConfig.needCode, - hideUserApiKey: serverConfig.hideUserApiKey, - enableGPT4: serverConfig.enableGPT4, -}; - -declare global { - type DangerConfig = typeof DANGER_CONFIG; -} - -async function handle() { - return NextResponse.json(DANGER_CONFIG); -} - -export const GET = handle; -export const POST = handle; - -export const runtime = "edge"; diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Game NBA LIVE Mobile and Compete in Live Events and Tournaments.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Game NBA LIVE Mobile and Compete in Live Events and Tournaments.md deleted file mode 100644 index d73a26b03700572aab59819901692b2533e2f811..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Game NBA LIVE Mobile and Compete in Live Events and Tournaments.md +++ /dev/null @@ -1,114 +0,0 @@ -
-

How to Download NBA Games on Your Mobile Device

-

If you are a fan of basketball and the NBA, you might want to download some NBA games on your mobile device. Playing them on your phone or tablet is a great way to enjoy the thrill of the sport anytime and anywhere. You can also improve your basketball skills, learn new strategies, and compete with other players online.

-

download game nba


Download File ››› https://gohhs.com/2uPtEe



-

There are many benefits of playing NBA games on your mobile device. Some of them are:

- -

But what are the best NBA games to download on your mobile device? There are many options available, but we have selected two of the most popular and highly rated ones for you. These are:

-

NBA 2K Mobile Basketball Game

-

NBA 2K Mobile Basketball Game is one of the most realistic and immersive basketball games you can play on your mobile device. It is developed by 2K, Inc., a leading company in sports video games. It has over 10 million downloads and a 4.5-star rating on the Google Play Store.

-

download game nba 2k mobile basketball
-download game nba live mobile basketball
-download game nba 2k23
-download game nba 2k23 arcade edition
-download game nba 2k22
-download game nba jam
-download game nba 2k21
-download game nba 2k20
-download game nba 2k19
-download game nba 2k18
-download game nba 2k17
-download game nba 2k16
-download game nba 2k15
-download game nba 2k14
-download game nba 2k13
-download game nba 2k12
-download game nba 2k11
-download game nba live 19
-download game nba live 18
-download game nba live 16
-download game nba live 15
-download game nba live 14
-download game nba live 10
-download game nba live 09
-download game nba live 08
-download game nba live 07
-download game nba live 06
-download game nba live 05
-download game nba live 04
-download game nba live 03
-download game nba live 2001
-download game nba live 2000
-download game nba live 99
-download game nba live 98
-download game nba street vol.3
-download game nba street vol.2
-download game nba street homecourt
-download game nba street showdown
-download game nba ballers phenom
-download game nba ballers rebound
-download game nba ballers chosen one
-download game nba inside drive 2004
-download game nba inside drive 2003
-download game nba inside drive 2002
-download game nba inside drive 2000
-download game nba in the zone '98
-download game nba in the zone '99
-download game nba in the zone '00
-download game nba in the zone '02

-

Features of NBA 2K Mobile Basketball Game

-

Some of the features of NBA 2K Mobile Basketball Game are:

- -

How to Download NBA 2K Mobile Basketball Game

-

To download NBA 2K Mobile Basketball Game, follow these steps:

-

- Go to the Google Play Store or the App Store and search for NBA 2K Mobile Basketball Game

-

- Tap on the Install button and wait for the game to download and install on your device

-

- Launch the game and sign in with your Google Play Games or Game Center account

-

- Customize your profile and choose your favorite NBA team

-

- Start playing and enjoy the game

-

NBA LIVE Mobile Basketball

-

NBA LIVE Mobile Basketball is another popular and highly rated basketball game you can play on your mobile device. It is developed by Electronic Arts, a renowned company in video games. It has over 100 million downloads and a 4.3-star rating on the Google Play Store.

-

Features of NBA LIVE Mobile Basketball

-

Some of the features of NBA LIVE Mobile Basketball are:

- -

How to Download NBA LIVE Mobile Basketball

-

To download NBA LIVE Mobile Basketball, follow these steps:

-

- Go to the Google Play Store or the App Store and search for NBA LIVE Mobile Basketball

-

- Tap on the Install button and wait for the game to download and install on your device

-

- Launch the game and sign in with your Facebook, Google Play Games, or Game Center account

-

- Choose your favorite NBA team and customize your jersey and court

-

- Start playing and enjoy the game

Conclusion

-

Downloading NBA games on your mobile device can be a fun and convenient way to enjoy basketball anytime and anywhere. You can choose from a variety of NBA games that offer realistic graphics, exciting gameplay, and online competition. You can also improve your basketball skills, learn new strategies, and collect your favorite NBA players and teams.

-

In this article, we have reviewed two of the best NBA games to download on your mobile device: NBA 2K Mobile Basketball Game and NBA LIVE Mobile Basketball. Both of these games have millions of downloads and high ratings on the Google Play Store and the App Store. They also have many features that make them stand out from other NBA games.

-

To download these games, you just need to follow some simple steps that we have explained in this article. You can also check out the links below for more information and reviews about these games. We hope you enjoy playing these games and have a great time with basketball.

-

FAQs

-

Q: How much space do I need to download these NBA games on my mobile device?

-

A: The size of these NBA games may vary depending on your device and the updates. However, as a general estimate, you will need about 1.5 GB of free space to download NBA 2K Mobile Basketball Game and about 100 MB of free space to download NBA LIVE Mobile Basketball.
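If you are unsure how much free space you actually have, you can query it from a terminal; on Android this could be done through `adb shell` or a terminal emulator app. A minimal sketch (the 1.5 GB threshold mirrors the estimate above):

```shell
# Free space (in KB) on the current filesystem; -P avoids line wrapping.
# On Android you would typically check the /data partition instead of ".".
free_kb=$(df -kP . | tail -1 | awk '{print $4}')
echo "Free space: ${free_kb} KB"

# NBA 2K Mobile needs roughly 1.5 GB (about 1,500,000 KB) free
if [ "$free_kb" -gt 1500000 ]; then
  echo "Enough space for NBA 2K Mobile Basketball Game"
fi
```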

-

Q: Do I need an internet connection to play these NBA games on my mobile device?

-

A: Yes, you will need an internet connection to play these NBA games on your mobile device. You will also need an internet connection to download and update these games.

-

Q: Are these NBA games compatible with my mobile device?

-

A: These NBA games are compatible with most Android and iOS devices that meet the minimum requirements. You can check the compatibility of your device on the Google Play Store or the App Store before downloading these games.

-

Q: Are these NBA games free to play?

-

A: Yes, these NBA games are free to play. However, they may contain in-app purchases that allow you to buy extra items, coins, or features. You can disable in-app purchases in your device settings if you do not want to use them.

-

Q: How can I contact the developers of these NBA games if I have any issues or feedback?

-

A: You can contact the developers of these NBA games by using the following methods:

-

-
-
\ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/depd/lib/browser/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/depd/lib/browser/index.js deleted file mode 100644 index 6be45cc20b33f20dcdc580b9709f1a4a20bb87a1..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/depd/lib/browser/index.js +++ /dev/null @@ -1,77 +0,0 @@ -/*! - * depd - * Copyright(c) 2015 Douglas Christopher Wilson - * MIT Licensed - */ - -'use strict' - -/** - * Module exports. - * @public - */ - -module.exports = depd - -/** - * Create deprecate for namespace in caller. - */ - -function depd (namespace) { - if (!namespace) { - throw new TypeError('argument namespace is required') - } - - function deprecate (message) { - // no-op in browser - } - - deprecate._file = undefined - deprecate._ignored = true - deprecate._namespace = namespace - deprecate._traced = false - deprecate._warned = Object.create(null) - - deprecate.function = wrapfunction - deprecate.property = wrapproperty - - return deprecate -} - -/** - * Return a wrapped function in a deprecation message. - * - * This is a no-op version of the wrapper, which does nothing but call - * validation. - */ - -function wrapfunction (fn, message) { - if (typeof fn !== 'function') { - throw new TypeError('argument fn must be a function') - } - - return fn -} - -/** - * Wrap property in a deprecation message. - * - * This is a no-op version of the wrapper, which does nothing but call - * validation. 
- */ - -function wrapproperty (obj, prop, message) { - if (!obj || (typeof obj !== 'object' && typeof obj !== 'function')) { - throw new TypeError('argument obj must be object') - } - - var descriptor = Object.getOwnPropertyDescriptor(obj, prop) - - if (!descriptor) { - throw new TypeError('must call property on owner object') - } - - if (!descriptor.configurable) { - throw new TypeError('property must be configurable') - } -} diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/escape-html/Readme.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/escape-html/Readme.md deleted file mode 100644 index 653d9eaa793317827ce724c4a0756110e9356fc8..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/escape-html/Readme.md +++ /dev/null @@ -1,43 +0,0 @@ - -# escape-html - - Escape string for use in HTML - -## Example - -```js -var escape = require('escape-html'); -var html = escape('foo & bar'); -// -> foo & bar -``` - -## Benchmark - -``` -$ npm run-script bench - -> escape-html@1.0.3 bench nodejs-escape-html -> node benchmark/index.js - - - http_parser@1.0 - node@0.10.33 - v8@3.14.5.9 - ares@1.9.0-DEV - uv@0.10.29 - zlib@1.2.3 - modules@11 - openssl@1.0.1j - - 1 test completed. - 2 tests completed. - 3 tests completed. 
- - no special characters x 19,435,271 ops/sec ±0.85% (187 runs sampled) - single special character x 6,132,421 ops/sec ±0.67% (194 runs sampled) - many special characters x 3,175,826 ops/sec ±0.65% (193 runs sampled) -``` - -## License - - MIT \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/README.md deleted file mode 100644 index 4ae71f6d06437c4217c7423b8f15cdcee383b62b..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/README.md +++ /dev/null @@ -1,495 +0,0 @@ -# ws: a Node.js WebSocket library - -[![Version npm](https://img.shields.io/npm/v/ws.svg?logo=npm)](https://www.npmjs.com/package/ws) -[![CI](https://img.shields.io/github/workflow/status/websockets/ws/CI/master?label=CI&logo=github)](https://github.com/websockets/ws/actions?query=workflow%3ACI+branch%3Amaster) -[![Coverage Status](https://img.shields.io/coveralls/websockets/ws/master.svg?logo=coveralls)](https://coveralls.io/github/websockets/ws) - -ws is a simple to use, blazing fast, and thoroughly tested WebSocket client and -server implementation. - -Passes the quite extensive Autobahn test suite: [server][server-report], -[client][client-report]. - -**Note**: This module does not work in the browser. The client in the docs is a -reference to a back end with the role of a client in the WebSocket -communication. Browser clients must use the native -[`WebSocket`](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket) -object. To make the same code work seamlessly on Node.js and the browser, you -can use one of the many wrappers available on npm, like -[isomorphic-ws](https://github.com/heineiuo/isomorphic-ws). 
- -## Table of Contents - -- [Protocol support](#protocol-support) -- [Installing](#installing) - - [Opt-in for performance](#opt-in-for-performance) -- [API docs](#api-docs) -- [WebSocket compression](#websocket-compression) -- [Usage examples](#usage-examples) - - [Sending and receiving text data](#sending-and-receiving-text-data) - - [Sending binary data](#sending-binary-data) - - [Simple server](#simple-server) - - [External HTTP/S server](#external-https-server) - - [Multiple servers sharing a single HTTP/S server](#multiple-servers-sharing-a-single-https-server) - - [Client authentication](#client-authentication) - - [Server broadcast](#server-broadcast) - - [Round-trip time](#round-trip-time) - - [Use the Node.js streams API](#use-the-nodejs-streams-api) - - [Other examples](#other-examples) -- [FAQ](#faq) - - [How to get the IP address of the client?](#how-to-get-the-ip-address-of-the-client) - - [How to detect and close broken connections?](#how-to-detect-and-close-broken-connections) - - [How to connect via a proxy?](#how-to-connect-via-a-proxy) -- [Changelog](#changelog) -- [License](#license) - -## Protocol support - -- **HyBi drafts 07-12** (Use the option `protocolVersion: 8`) -- **HyBi drafts 13-17** (Current default, alternatively option - `protocolVersion: 13`) - -## Installing - -``` -npm install ws -``` - -### Opt-in for performance - -There are 2 optional modules that can be installed along side with the ws -module. These modules are binary addons which improve certain operations. -Prebuilt binaries are available for the most popular platforms so you don't -necessarily need to have a C++ compiler installed on your machine. - -- `npm install --save-optional bufferutil`: Allows to efficiently perform - operations such as masking and unmasking the data payload of the WebSocket - frames. -- `npm install --save-optional utf-8-validate`: Allows to efficiently check if a - message contains valid UTF-8. 
- -To not even try to require and use these modules, use the -[`WS_NO_BUFFER_UTIL`](./doc/ws.md#ws_no_buffer_util) and -[`WS_NO_UTF_8_VALIDATE`](./doc/ws.md#ws_no_utf_8_validate) environment -variables. These might be useful to enhance security in systems where a user can -put a package in the package search path of an application of another user, due -to how the Node.js resolver algorithm works. - -## API docs - -See [`/doc/ws.md`](./doc/ws.md) for Node.js-like documentation of ws classes and -utility functions. - -## WebSocket compression - -ws supports the [permessage-deflate extension][permessage-deflate] which enables -the client and server to negotiate a compression algorithm and its parameters, -and then selectively apply it to the data payloads of each WebSocket message. - -The extension is disabled by default on the server and enabled by default on the -client. It adds a significant overhead in terms of performance and memory -consumption so we suggest to enable it only if it is really needed. - -Note that Node.js has a variety of issues with high-performance compression, -where increased concurrency, especially on Linux, can lead to [catastrophic -memory fragmentation][node-zlib-bug] and slow performance. If you intend to use -permessage-deflate in production, it is worthwhile to set up a test -representative of your workload and ensure Node.js/zlib will handle it with -acceptable performance and memory usage. - -Tuning of permessage-deflate can be done via the options defined below. You can -also use `zlibDeflateOptions` and `zlibInflateOptions`, which is passed directly -into the creation of [raw deflate/inflate streams][node-zlib-deflaterawdocs]. - -See [the docs][ws-server-options] for more options. - -```js -import WebSocket, { WebSocketServer } from 'ws'; - -const wss = new WebSocketServer({ - port: 8080, - perMessageDeflate: { - zlibDeflateOptions: { - // See zlib defaults. 
- chunkSize: 1024, - memLevel: 7, - level: 3 - }, - zlibInflateOptions: { - chunkSize: 10 * 1024 - }, - // Other options settable: - clientNoContextTakeover: true, // Defaults to negotiated value. - serverNoContextTakeover: true, // Defaults to negotiated value. - serverMaxWindowBits: 10, // Defaults to negotiated value. - // Below options specified as default values. - concurrencyLimit: 10, // Limits zlib concurrency for perf. - threshold: 1024 // Size (in bytes) below which messages - // should not be compressed if context takeover is disabled. - } -}); -``` - -The client will only use the extension if it is supported and enabled on the -server. To always disable the extension on the client set the -`perMessageDeflate` option to `false`. - -```js -import WebSocket from 'ws'; - -const ws = new WebSocket('ws://www.host.com/path', { - perMessageDeflate: false -}); -``` - -## Usage examples - -### Sending and receiving text data - -```js -import WebSocket from 'ws'; - -const ws = new WebSocket('ws://www.host.com/path'); - -ws.on('open', function open() { - ws.send('something'); -}); - -ws.on('message', function message(data) { - console.log('received: %s', data); -}); -``` - -### Sending binary data - -```js -import WebSocket from 'ws'; - -const ws = new WebSocket('ws://www.host.com/path'); - -ws.on('open', function open() { - const array = new Float32Array(5); - - for (var i = 0; i < array.length; ++i) { - array[i] = i / 2; - } - - ws.send(array); -}); -``` - -### Simple server - -```js -import { WebSocketServer } from 'ws'; - -const wss = new WebSocketServer({ port: 8080 }); - -wss.on('connection', function connection(ws) { - ws.on('message', function message(data) { - console.log('received: %s', data); - }); - - ws.send('something'); -}); -``` - -### External HTTP/S server - -```js -import { createServer } from 'https'; -import { readFileSync } from 'fs'; -import { WebSocketServer } from 'ws'; - -const server = createServer({ - cert: 
readFileSync('/path/to/cert.pem'), - key: readFileSync('/path/to/key.pem') -}); -const wss = new WebSocketServer({ server }); - -wss.on('connection', function connection(ws) { - ws.on('message', function message(data) { - console.log('received: %s', data); - }); - - ws.send('something'); -}); - -server.listen(8080); -``` - -### Multiple servers sharing a single HTTP/S server - -```js -import { createServer } from 'http'; -import { parse } from 'url'; -import { WebSocketServer } from 'ws'; - -const server = createServer(); -const wss1 = new WebSocketServer({ noServer: true }); -const wss2 = new WebSocketServer({ noServer: true }); - -wss1.on('connection', function connection(ws) { - // ... -}); - -wss2.on('connection', function connection(ws) { - // ... -}); - -server.on('upgrade', function upgrade(request, socket, head) { - const { pathname } = parse(request.url); - - if (pathname === '/foo') { - wss1.handleUpgrade(request, socket, head, function done(ws) { - wss1.emit('connection', ws, request); - }); - } else if (pathname === '/bar') { - wss2.handleUpgrade(request, socket, head, function done(ws) { - wss2.emit('connection', ws, request); - }); - } else { - socket.destroy(); - } -}); - -server.listen(8080); -``` - -### Client authentication - -```js -import { createServer } from 'http'; -import { WebSocketServer } from 'ws'; - -const server = createServer(); -const wss = new WebSocketServer({ noServer: true }); - -wss.on('connection', function connection(ws, request, client) { - ws.on('message', function message(data) { - console.log(`Received message ${data} from user ${client}`); - }); -}); - -server.on('upgrade', function upgrade(request, socket, head) { - // This function is not defined on purpose. Implement it with your own logic. 
- authenticate(request, function next(err, client) { - if (err || !client) { - socket.write('HTTP/1.1 401 Unauthorized\r\n\r\n'); - socket.destroy(); - return; - } - - wss.handleUpgrade(request, socket, head, function done(ws) { - wss.emit('connection', ws, request, client); - }); - }); -}); - -server.listen(8080); -``` - -Also see the provided [example][session-parse-example] using `express-session`. - -### Server broadcast - -A client WebSocket broadcasting to all connected WebSocket clients, including -itself. - -```js -import WebSocket, { WebSocketServer } from 'ws'; - -const wss = new WebSocketServer({ port: 8080 }); - -wss.on('connection', function connection(ws) { - ws.on('message', function message(data, isBinary) { - wss.clients.forEach(function each(client) { - if (client.readyState === WebSocket.OPEN) { - client.send(data, { binary: isBinary }); - } - }); - }); -}); -``` - -A client WebSocket broadcasting to every other connected WebSocket clients, -excluding itself. - -```js -import WebSocket, { WebSocketServer } from 'ws'; - -const wss = new WebSocketServer({ port: 8080 }); - -wss.on('connection', function connection(ws) { - ws.on('message', function message(data, isBinary) { - wss.clients.forEach(function each(client) { - if (client !== ws && client.readyState === WebSocket.OPEN) { - client.send(data, { binary: isBinary }); - } - }); - }); -}); -``` - -### Round-trip time - -```js -import WebSocket from 'ws'; - -const ws = new WebSocket('wss://websocket-echo.com/'); - -ws.on('open', function open() { - console.log('connected'); - ws.send(Date.now()); -}); - -ws.on('close', function close() { - console.log('disconnected'); -}); - -ws.on('message', function message(data) { - console.log(`Round-trip time: ${Date.now() - data} ms`); - - setTimeout(function timeout() { - ws.send(Date.now()); - }, 500); -}); -``` - -### Use the Node.js streams API - -```js -import WebSocket, { createWebSocketStream } from 'ws'; - -const ws = new 
WebSocket('wss://websocket-echo.com/'); - -const duplex = createWebSocketStream(ws, { encoding: 'utf8' }); - -duplex.pipe(process.stdout); -process.stdin.pipe(duplex); -``` - -### Other examples - -For a full example with a browser client communicating with a ws server, see the -examples folder. - -Otherwise, see the test cases. - -## FAQ - -### How to get the IP address of the client? - -The remote IP address can be obtained from the raw socket. - -```js -import { WebSocketServer } from 'ws'; - -const wss = new WebSocketServer({ port: 8080 }); - -wss.on('connection', function connection(ws, req) { - const ip = req.socket.remoteAddress; -}); -``` - -When the server runs behind a proxy like NGINX, the de-facto standard is to use -the `X-Forwarded-For` header. - -```js -wss.on('connection', function connection(ws, req) { - const ip = req.headers['x-forwarded-for'].split(',')[0].trim(); -}); -``` - -### How to detect and close broken connections? - -Sometimes the link between the server and the client can be interrupted in a way -that keeps both the server and the client unaware of the broken state of the -connection (e.g. when pulling the cord). - -In these cases ping messages can be used as a means to verify that the remote -endpoint is still responsive. - -```js -import { WebSocketServer } from 'ws'; - -function heartbeat() { - this.isAlive = true; -} - -const wss = new WebSocketServer({ port: 8080 }); - -wss.on('connection', function connection(ws) { - ws.isAlive = true; - ws.on('pong', heartbeat); -}); - -const interval = setInterval(function ping() { - wss.clients.forEach(function each(ws) { - if (ws.isAlive === false) return ws.terminate(); - - ws.isAlive = false; - ws.ping(); - }); -}, 30000); - -wss.on('close', function close() { - clearInterval(interval); -}); -``` - -Pong messages are automatically sent in response to ping messages as required by -the spec. - -Just like the server example above your clients might as well lose connection -without knowing it. 
You might want to add a ping listener on your clients to -prevent that. A simple implementation would be: - -```js -import WebSocket from 'ws'; - -function heartbeat() { - clearTimeout(this.pingTimeout); - - // Use `WebSocket#terminate()`, which immediately destroys the connection, - // instead of `WebSocket#close()`, which waits for the close timer. - // Delay should be equal to the interval at which your server - // sends out pings plus a conservative assumption of the latency. - this.pingTimeout = setTimeout(() => { - this.terminate(); - }, 30000 + 1000); -} - -const client = new WebSocket('wss://websocket-echo.com/'); - -client.on('open', heartbeat); -client.on('ping', heartbeat); -client.on('close', function clear() { - clearTimeout(this.pingTimeout); -}); -``` - -### How to connect via a proxy? - -Use a custom `http.Agent` implementation like [https-proxy-agent][] or -[socks-proxy-agent][]. - -## Changelog - -We're using the GitHub [releases][changelog] for changelog entries. - -## License - -[MIT](LICENSE) - -[changelog]: https://github.com/websockets/ws/releases -[client-report]: http://websockets.github.io/ws/autobahn/clients/ -[https-proxy-agent]: https://github.com/TooTallNate/node-https-proxy-agent -[node-zlib-bug]: https://github.com/nodejs/node/issues/8871 -[node-zlib-deflaterawdocs]: - https://nodejs.org/api/zlib.html#zlib_zlib_createdeflateraw_options -[permessage-deflate]: https://tools.ietf.org/html/rfc7692 -[server-report]: http://websockets.github.io/ws/autobahn/servers/ -[session-parse-example]: ./examples/express-session-parse -[socks-proxy-agent]: https://github.com/TooTallNate/node-socks-proxy-agent -[ws-server-options]: ./doc/ws.md#new-websocketserveroptions-callback diff --git a/spaces/fffiloni/image-to-sound-fx-debug/app.py b/spaces/fffiloni/image-to-sound-fx-debug/app.py deleted file mode 100644 index dac7dcc2d72a8f91340d6350fa4d99428b43a853..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/image-to-sound-fx-debug/app.py 
+++ /dev/null @@ -1,81 +0,0 @@ -import gradio as gr - - -caption = gr.Blocks.load(name="spaces/SRDdev/Image-Caption") -audio_gen = gr.Blocks.load(name="spaces/haoheliu/audioldm-text-to-audio-generation") - -def infer(image_input): - cap = caption(image_input, fn_index=0) - sound = audio_gen(cap, 5, 2.5, 45, 3, fn_index=0) - - return gr.Textbox.update(value=cap, visible=True), sound - -title = """ -
-<h1>Image to Sound Effect</h1>
-<p>Convert an image to a corresponding sound effect generated through GPT2 Image Captioning & AudioLDM</p>
-""" - -article = """ - - - -
-<p>You may also like:</p>
-""" - -with gr.Blocks(css="style.css") as demo: - with gr.Column(elem_id="col-container"): - - gr.HTML(title) - - input_img = gr.Image(type="filepath", elem_id="input-img") - caption_output = gr.Textbox(label="Caption", lines=1, visible=False, elem_id="text-caption") - sound_output = gr.Video(label="Result", elem_id="sound-output") - - generate = gr.Button("Generate SFX from Image") - - - - gr.HTML(article) - - generate.click(infer, inputs=[input_img], outputs=[caption_output, sound_output], api_name="i2fx") - - -demo.queue(max_size=32, concurrency_count=20).launch() diff --git a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_43.py b/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_43.py deleted file mode 100644 index 2b0025e3ecdf5139930bade493849aaa8693480b..0000000000000000000000000000000000000000 --- a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_43.py +++ /dev/null @@ -1,26 +0,0 @@ - -import re - -def is_spam(message): - # Rule 1: Check for the presence of special characters or spaces between characters (common in spam messages) - if re.search(r'[\W]', message): - return True - - # Rule 2: Check for non-standard domain names - domain_regex = r'(http|https)://[^\s/]+' - domain_matches = re.findall(domain_regex, message) - for match in domain_matches: - if not ('.' 
in match and len(match) > 5): # exclude standard ones - return True - - # Rule 3: Check for unusual percentage signs - if re.search(r'[%][^ ][^\d]', message): - return True - - # Rule 4: Check for the presence of unusual substrings (광고, 보장, 무료, 무료거부, 등록, SMS, 입장, 1000명, 무조건, 매수) - spam_keywords = ["광고", "보장", "무료", "무료거부", "등록", "SMS", "입장", "1000명", "무조건", "매수"] - for word in spam_keywords: - if word in message: - return True - - return False diff --git a/spaces/flatindo/generate2/diffusion_webui/utils/preprocces_utils.py b/spaces/flatindo/generate2/diffusion_webui/utils/preprocces_utils.py deleted file mode 100644 index f1824721e4804eecd48b453a37c1ce0377468773..0000000000000000000000000000000000000000 --- a/spaces/flatindo/generate2/diffusion_webui/utils/preprocces_utils.py +++ /dev/null @@ -1,96 +0,0 @@ -from controlnet_aux import ( - CannyDetector, - ContentShuffleDetector, - HEDdetector, - LineartAnimeDetector, - LineartDetector, - MediapipeFaceDetector, - MidasDetector, - MLSDdetector, - NormalBaeDetector, - OpenposeDetector, - PidiNetDetector, - SamDetector, - ZoeDetector, -) - -import numpy as np -import cv2 - -def pad64(x): - return int(np.ceil(float(x) / 64.0) * 64 - x) - -def HWC3(x): - assert x.dtype == np.uint8 - if x.ndim == 2: - x = x[:, :, None] - assert x.ndim == 3 - H, W, C = x.shape - assert C == 1 or C == 3 or C == 4 - if C == 3: - return x - if C == 1: - return np.concatenate([x, x, x], axis=2) - if C == 4: - color = x[:, :, 0:3].astype(np.float32) - alpha = x[:, :, 3:4].astype(np.float32) / 255.0 - y = color * alpha + 255.0 * (1.0 - alpha) - y = y.clip(0, 255).astype(np.uint8) - return y - -def safer_memory(x): - return np.ascontiguousarray(x.copy()).copy() - - -def resize_image_with_pad(input_image, resolution, skip_hwc3=False): - if skip_hwc3: - img = input_image - else: - img = HWC3(input_image) - - H_raw, W_raw, _ = img.shape - k = float(resolution) / float(min(H_raw, W_raw)) - interpolation = cv2.INTER_CUBIC if k > 1 else cv2.INTER_AREA - 
H_target = int(np.round(float(H_raw) * k)) - W_target = int(np.round(float(W_raw) * k)) - img = cv2.resize(img, (W_target, H_target), interpolation=interpolation) - H_pad, W_pad = pad64(H_target), pad64(W_target) - img_padded = np.pad(img, [[0, H_pad], [0, W_pad], [0, 0]], mode='edge') - - def remove_pad(x): - return safer_memory(x[:H_target, :W_target]) - - return safer_memory(img_padded), remove_pad - - -def scribble_xdog(img, res=512, thr_a=32, **kwargs): - img, remove_pad = resize_image_with_pad(img, res) - g1 = cv2.GaussianBlur(img.astype(np.float32), (0, 0), 0.5) - g2 = cv2.GaussianBlur(img.astype(np.float32), (0, 0), 5.0) - dog = (255 - np.min(g2 - g1, axis=2)).clip(0, 255).astype(np.uint8) - result = np.zeros_like(img, dtype=np.uint8) - result[2 * (255 - dog) > thr_a] = 255 - return remove_pad(result), True - -def none_preprocces(image_path:str): - return Image.open(image_path) - -PREPROCCES_DICT = { - "Hed": HEDdetector.from_pretrained("lllyasviel/Annotators"), - "Midas": MidasDetector.from_pretrained("lllyasviel/Annotators"), - "MLSD": MLSDdetector.from_pretrained("lllyasviel/Annotators"), - "Openpose": OpenposeDetector.from_pretrained("lllyasviel/Annotators"), - "PidiNet": PidiNetDetector.from_pretrained("lllyasviel/Annotators"), - "NormalBae": NormalBaeDetector.from_pretrained("lllyasviel/Annotators"), - "Lineart": LineartDetector.from_pretrained("lllyasviel/Annotators"), - "LineartAnime": LineartAnimeDetector.from_pretrained( - "lllyasviel/Annotators" - ), - "Zoe": ZoeDetector.from_pretrained("lllyasviel/Annotators"), - "Canny": CannyDetector(), - "ContentShuffle": ContentShuffleDetector(), - "MediapipeFace": MediapipeFaceDetector(), - "ScribbleXDOG": scribble_xdog, - "None": none_preprocces -} - \ No newline at end of file diff --git a/spaces/foduucom/stockmarket-future-prediction/app.py b/spaces/foduucom/stockmarket-future-prediction/app.py deleted file mode 100644 index 
168086fd80ef8fd2692b567f5d45ae9955e35752..0000000000000000000000000000000000000000 --- a/spaces/foduucom/stockmarket-future-prediction/app.py +++ /dev/null @@ -1,61 +0,0 @@ -import gradio as gr -from gradio import components as gc -import cv2 -import requests -import os -from ultralyticsplus import YOLO, render_result - -# Model Heading and Description -model_heading = "StockMarket: Trends Recognition for Trading Success" -description = """ 🌟 Elevate Your Trading Odyssey with Trend Predictions! 🌟 -Dive deep into the enigma of market trends with the precision of a seasoned detective. 🕵️‍♂️ With Foduu AI's unparalleled insights, transition seamlessly from bearish 'Downs' to bullish 'Ups'. 📉📈 -Consider us your trading compass, guiding you through the financial wilderness like a modern-day Gandalf. 🧙‍♂️ Whether you're a seasoned trader or just embarking on your journey, we're here to illuminate your path. 💡 -Trading with us? It's like possessing the secret recipe to investment success. 🍲💰 -Intrigued? Dive into the world of trading alchemy! 🌌 -💌 Reach Out: info@foddu.com -👍 Give us a thumbs up and embark on an unparalleled trading escapade! No, you won't gain superpowers, but you'll be one step closer to mastering the markets! 
🚀🌍📊!""" - -image_path= [['test/1.jpg', 'foduucom/stockmarket-future-prediction', 640, 0.25, 0.45], ['test/2.jpg', 'foduucom/stockmarket-future-prediction', 640, 0.25, 0.45],['test/3.jpg', 'foduucom/stockmarket-future-prediction', 640, 0.25, 0.45]] - -# Load YOLO model -model = YOLO("foduucom/stockmarket-future-prediction") - -def yolov8_img_inference( - image: gc.Image = None, - model_path: str = "foduucom/stockmarket-future-prediction", - image_size: gc.Slider = 640, - conf_threshold: gc.Slider = 0.25, - iou_threshold: gc.Slider = 0.45 -): - model = YOLO(model_path) - model.overrides['conf'] = conf_threshold - model.overrides['iou'] = iou_threshold - model.overrides['agnostic_nms'] = False - model.overrides['max_det'] = 1000 - results = model.predict(image) - render = render_result(model=model, image=image, result=results[0]) - return render - -inputs_image = [ - gc.Image(type="filepath", label="Input Image"), - gc.Dropdown(["foduucom/stockmarket-future-prediction"], default="foduucom/stockmarket-future-prediction", label="Model"), - gc.Slider(minimum=320, maximum=1280, default=640, step=32, label="Image Size"), - gc.Slider(minimum=0.0, maximum=1.0, default=0.25, step=0.05, label="Confidence Threshold"), - gc.Slider(minimum=0.0, maximum=1.0, default=0.45, step=0.05, label="IOU Threshold"), -] - -outputs_image = gc.Image(type="filepath", label="Output Image") - -interface_image = gr.Interface( - fn=yolov8_img_inference, - inputs=inputs_image, - outputs=outputs_image, - title=model_heading, - description=description, - examples=image_path, - cache_examples=False, - theme='huggingface' -) - -interface_image.queue() -interface_image.launch(debug=True) diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_plug/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_plug/run.py deleted file mode 100644 index 97684fa61b5c6a66eb5e07fa4162510f9d155415..0000000000000000000000000000000000000000 --- 
a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_plug/run.py +++ /dev/null @@ -1,36 +0,0 @@ -import gradio as gr - - -def change_tab(): - return gr.Tabs.update(selected=2) - - -identity_demo, input_demo, output_demo = gr.Blocks(), gr.Blocks(), gr.Blocks() - -with identity_demo: - gr.Interface(lambda x: x, "text", "text") - -with input_demo: - t = gr.Textbox(label="Enter your text here") - with gr.Row(): - btn = gr.Button("Submit") - clr = gr.Button("Clear") - clr.click(lambda x: "", t, t) - -with output_demo: - gr.Textbox("This is a static output") - -with gr.Blocks() as demo: - gr.Markdown("Three demos in one!") - with gr.Tabs(selected=1) as tabs: - with gr.TabItem("Text Identity", id=0): - identity_demo.render() - with gr.TabItem("Text Input", id=1): - input_demo.render() - with gr.TabItem("Text Static", id=2): - output_demo.render() - btn = gr.Button("Change tab") - btn.click(inputs=None, outputs=tabs, fn=change_tab) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/fun-research/FC-CLIP/datasets/prepare_pascal_voc_sem_seg.py b/spaces/fun-research/FC-CLIP/datasets/prepare_pascal_voc_sem_seg.py deleted file mode 100644 index 9b0b0e133caebe60a64a17f923a23cba4c323363..0000000000000000000000000000000000000000 --- a/spaces/fun-research/FC-CLIP/datasets/prepare_pascal_voc_sem_seg.py +++ /dev/null @@ -1,65 +0,0 @@ -# ------------------------------------------------------------------------------ -# Copyright (c) 2022-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# This work is made available under the Nvidia Source Code License. 
-# To view a copy of this license, visit -# https://github.com/NVlabs/ODISE/blob/main/LICENSE -# -# Written by Jiarui Xu -# ------------------------------------------------------------------------------ - -import os -from pathlib import Path -import shutil - -import numpy as np -import tqdm -from PIL import Image - - -def convert_pas21(input, output): - img = np.asarray(Image.open(input)) - assert img.dtype == np.uint8 - # do nothing - Image.fromarray(img).save(output) - -def convert_pas20(input, output): - img = np.array(Image.open(input)) - img[img == 0] = 255 - img = img - 1 - img[img == 254] = 255 - assert img.dtype == np.uint8 - # do nothing - Image.fromarray(img).save(output) - - -if __name__ == "__main__": - dataset_dir = Path(os.getenv("DETECTRON2_DATASETS", "datasets")) / "pascal_voc_d2" - voc_dir = Path(os.getenv("DETECTRON2_DATASETS", "datasets")) / "VOCdevkit/VOC2012" - for split in ["training", "validation"]: - if split == "training": - img_name_path = voc_dir / "ImageSets/Segmentation/train.txt" - else: - img_name_path = voc_dir / "ImageSets/Segmentation/val.txt" - img_dir = voc_dir / "JPEGImages" - ann_dir = voc_dir / "SegmentationClass" - - output_img_dir = dataset_dir / "images" / split - output_ann_dir_21 = dataset_dir / "annotations_pascal21" / split - output_ann_dir_20 = dataset_dir / "annotations_pascal20" / split - - output_img_dir.mkdir(parents=True, exist_ok=True) - output_ann_dir_21.mkdir(parents=True, exist_ok=True) - output_ann_dir_20.mkdir(parents=True, exist_ok=True) - - with open(img_name_path) as f: - for line in tqdm.tqdm(f.readlines()): - img_name = line.strip() - img_path = img_dir / f"{img_name}.jpg" - ann_path = ann_dir / f"{img_name}.png" - - # print(f'copy2 {output_img_dir}') - shutil.copy2(img_path, output_img_dir) - # print(f"convert {ann_dir} to {output_ann_dir / f'{img_name}.png'}") - convert_pas21(ann_path, output_ann_dir_21 / f"{img_name}.png") - convert_pas20(ann_path, output_ann_dir_20 / f"{img_name}.png") \ No newline 
at end of file diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/dnl_head.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/dnl_head.py deleted file mode 100644 index 333280c5947066fd3c7ebcfe302a0e7ad65480d5..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/dnl_head.py +++ /dev/null @@ -1,131 +0,0 @@ -import torch -from annotator.uniformer.mmcv.cnn import NonLocal2d -from torch import nn - -from ..builder import HEADS -from .fcn_head import FCNHead - - -class DisentangledNonLocal2d(NonLocal2d): - """Disentangled Non-Local Blocks. - - Args: - temperature (float): Temperature to adjust attention. Default: 0.05 - """ - - def __init__(self, *arg, temperature, **kwargs): - super().__init__(*arg, **kwargs) - self.temperature = temperature - self.conv_mask = nn.Conv2d(self.in_channels, 1, kernel_size=1) - - def embedded_gaussian(self, theta_x, phi_x): - """Embedded gaussian with temperature.""" - - # NonLocal2d pairwise_weight: [N, HxW, HxW] - pairwise_weight = torch.matmul(theta_x, phi_x) - if self.use_scale: - # theta_x.shape[-1] is `self.inter_channels` - pairwise_weight /= theta_x.shape[-1]**0.5 - pairwise_weight /= self.temperature - pairwise_weight = pairwise_weight.softmax(dim=-1) - return pairwise_weight - - def forward(self, x): - # x: [N, C, H, W] - n = x.size(0) - - # g_x: [N, HxW, C] - g_x = self.g(x).view(n, self.inter_channels, -1) - g_x = g_x.permute(0, 2, 1) - - # theta_x: [N, HxW, C], phi_x: [N, C, HxW] - if self.mode == 'gaussian': - theta_x = x.view(n, self.in_channels, -1) - theta_x = theta_x.permute(0, 2, 1) - if self.sub_sample: - phi_x = self.phi(x).view(n, self.in_channels, -1) - else: - phi_x = x.view(n, self.in_channels, -1) - elif self.mode == 'concatenation': - theta_x = self.theta(x).view(n, self.inter_channels, -1, 1) - phi_x = self.phi(x).view(n, 
self.inter_channels, 1, -1) - else: - theta_x = self.theta(x).view(n, self.inter_channels, -1) - theta_x = theta_x.permute(0, 2, 1) - phi_x = self.phi(x).view(n, self.inter_channels, -1) - - # subtract mean - theta_x -= theta_x.mean(dim=-2, keepdim=True) - phi_x -= phi_x.mean(dim=-1, keepdim=True) - - pairwise_func = getattr(self, self.mode) - # pairwise_weight: [N, HxW, HxW] - pairwise_weight = pairwise_func(theta_x, phi_x) - - # y: [N, HxW, C] - y = torch.matmul(pairwise_weight, g_x) - # y: [N, C, H, W] - y = y.permute(0, 2, 1).contiguous().reshape(n, self.inter_channels, - *x.size()[2:]) - - # unary_mask: [N, 1, HxW] - unary_mask = self.conv_mask(x) - unary_mask = unary_mask.view(n, 1, -1) - unary_mask = unary_mask.softmax(dim=-1) - # unary_x: [N, 1, C] - unary_x = torch.matmul(unary_mask, g_x) - # unary_x: [N, C, 1, 1] - unary_x = unary_x.permute(0, 2, 1).contiguous().reshape( - n, self.inter_channels, 1, 1) - - output = x + self.conv_out(y + unary_x) - - return output - - -@HEADS.register_module() -class DNLHead(FCNHead): - """Disentangled Non-Local Neural Networks. - - This head is the implementation of `DNLNet - `_. - - Args: - reduction (int): Reduction factor of projection transform. Default: 2. - use_scale (bool): Whether to scale pairwise_weight by - sqrt(1/inter_channels). Default: False. - mode (str): The nonlocal mode. Options are 'embedded_gaussian', - 'dot_product'. Default: 'embedded_gaussian.'. - temperature (float): Temperature to adjust attention. 
Default: 0.05 - """ - - def __init__(self, - reduction=2, - use_scale=True, - mode='embedded_gaussian', - temperature=0.05, - **kwargs): - super(DNLHead, self).__init__(num_convs=2, **kwargs) - self.reduction = reduction - self.use_scale = use_scale - self.mode = mode - self.temperature = temperature - self.dnl_block = DisentangledNonLocal2d( - in_channels=self.channels, - reduction=self.reduction, - use_scale=self.use_scale, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - mode=self.mode, - temperature=self.temperature) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - output = self.convs[0](x) - output = self.dnl_block(output) - output = self.convs[1](output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/spaces/gradio-discord-bots/Llama-2-70b-chat-hf/README.md b/spaces/gradio-discord-bots/Llama-2-70b-chat-hf/README.md deleted file mode 100644 index edc0e82b72140b9ee9cf82d1235a9bbcae081cc2..0000000000000000000000000000000000000000 --- a/spaces/gradio-discord-bots/Llama-2-70b-chat-hf/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Llama 2 70b Chat Hf -emoji: 🦙 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gradio/HuBERT/examples/simultaneous_translation/utils/functions.py b/spaces/gradio/HuBERT/examples/simultaneous_translation/utils/functions.py deleted file mode 100644 index f795b5f31cee6d9f8387d6402994b9cbb4c98190..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/simultaneous_translation/utils/functions.py +++ /dev/null @@ -1,149 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -def exclusive_cumprod(tensor, dim: int, eps: float = 1e-10): - """ - Implementing exclusive cumprod. - There is cumprod in pytorch, however there is no exclusive mode. - cumprod(x) = [x1, x1x2, x2x3x4, ..., prod_{i=1}^n x_i] - exclusive means cumprod(x) = [1, x1, x1x2, x1x2x3, ..., prod_{i=1}^{n-1} x_i] - """ - tensor_size = list(tensor.size()) - tensor_size[dim] = 1 - return_tensor = safe_cumprod( - torch.cat([torch.ones(tensor_size).type_as(tensor), tensor], dim=dim), - dim=dim, - eps=eps, - ) - - if dim == 0: - return return_tensor[:-1] - elif dim == 1: - return return_tensor[:, :-1] - elif dim == 2: - return return_tensor[:, :, :-1] - else: - raise RuntimeError("Cumprod on dimension 3 and more is not implemented") - - -def safe_cumprod(tensor, dim: int, eps: float = 1e-10): - """ - An implementation of cumprod to prevent precision issue. - cumprod(x) - = [x1, x1x2, x1x2x3, ....] - = [exp(log(x1)), exp(log(x1) + log(x2)), exp(log(x1) + log(x2) + log(x3)), ...] - = exp(cumsum(log(x))) - """ - - if (tensor + eps < 0).any().item(): - raise RuntimeError( - "Safe cumprod can only take non-negative tensors as input." - "Consider use torch.cumprod if you want to calculate negative values." 
- ) - - log_tensor = torch.log(tensor + eps) - cumsum_log_tensor = torch.cumsum(log_tensor, dim) - exp_cumsum_log_tensor = torch.exp(cumsum_log_tensor) - return exp_cumsum_log_tensor - - -def lengths_to_mask(lengths, max_len: int, dim: int = 0, negative_mask: bool = False): - """ - Convert a tensor of lengths to mask - For example, lengths = [[2, 3, 4]], max_len = 5 - mask = - [[1, 1, 1], - [1, 1, 1], - [0, 1, 1], - [0, 0, 1], - [0, 0, 0]] - """ - assert len(lengths.size()) <= 2 - if len(lengths) == 2: - if dim == 1: - lengths = lengths.t() - lengths = lengths - else: - lengths = lengths.unsqueeze(1) - - # lengths : batch_size, 1 - lengths = lengths.view(-1, 1) - - batch_size = lengths.size(0) - # batch_size, max_len - mask = torch.arange(max_len).expand(batch_size, max_len).type_as(lengths) < lengths - - if negative_mask: - mask = ~mask - - if dim == 0: - # max_len, batch_size - mask = mask.t() - - return mask - - -def moving_sum(x, start_idx: int, end_idx: int): - """ - From MONOTONIC CHUNKWISE ATTENTION - https://arxiv.org/pdf/1712.05382.pdf - Equation (18) - - x = [x_1, x_2, ..., x_N] - MovingSum(x, start_idx, end_idx)_n = Sigma_{m=n−(start_idx−1)}^{n+end_idx-1} x_m - for n in {1, 2, 3, ..., N} - - x : src_len, batch_size - start_idx : start idx - end_idx : end idx - - Example - src_len = 5 - batch_size = 3 - x = - [[ 0, 5, 10], - [ 1, 6, 11], - [ 2, 7, 12], - [ 3, 8, 13], - [ 4, 9, 14]] - - MovingSum(x, 3, 1) = - [[ 0, 5, 10], - [ 1, 11, 21], - [ 3, 18, 33], - [ 6, 21, 36], - [ 9, 24, 39]] - - MovingSum(x, 1, 3) = - [[ 3, 18, 33], - [ 6, 21, 36], - [ 9, 24, 39], - [ 7, 17, 27], - [ 4, 9, 14]] - """ - assert start_idx > 0 and end_idx > 0 - assert len(x.size()) == 2 - src_len, batch_size = x.size() - # batch_size, 1, src_len - x = x.t().unsqueeze(1) - # batch_size, 1, src_len - moving_sum_weight = x.new_ones([1, 1, end_idx + start_idx - 1]) - - moving_sum = ( - torch.nn.functional.conv1d( - x, moving_sum_weight, padding=start_idx + end_idx - 1 - ) - .squeeze(1) 
- .t() - ) - moving_sum = moving_sum[end_idx:-start_idx] - - assert src_len == moving_sum.size(0) - assert batch_size == moving_sum.size(1) - - return moving_sum diff --git a/spaces/gradio/HuBERT/fairseq/models/wav2vec/wav2vec2.py b/spaces/gradio/HuBERT/fairseq/models/wav2vec/wav2vec2.py deleted file mode 100644 index 714fd3ab50443b8d15715b1cf5abd4eb517298c4..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/models/wav2vec/wav2vec2.py +++ /dev/null @@ -1,1016 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from dataclasses import dataclass, field -from typing import List, Tuple - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.data.data_utils import compute_mask_indices -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.modules import ( - Fp32GroupNorm, - Fp32LayerNorm, - GradMultiply, - GumbelVectorQuantizer, - LayerNorm, - MultiheadAttention, - SamePad, - TransposeLast, -) -from fairseq.modules.transformer_sentence_encoder import init_bert_params -from fairseq.utils import buffered_arange, index_put, is_xla_tensor - - -EXTRACTOR_MODE_CHOICES = ChoiceEnum(["default", "layer_norm"]) -MASKING_DISTRIBUTION_CHOICES = ChoiceEnum(["static", "uniform", "normal", "poisson"]) - - -@dataclass -class Wav2Vec2Config(FairseqDataclass): - extractor_mode: EXTRACTOR_MODE_CHOICES = field( - default="default", - metadata={ - "help": "mode for feature extractor. 
default has a single group norm with d " - "groups in the first conv block, whereas layer_norm has layer norms in " - "every block (meant to use with normalize=True)" - }, - ) - encoder_layers: int = field( - default=12, metadata={"help": "num encoder layers in the transformer"} - ) - encoder_embed_dim: int = field( - default=768, metadata={"help": "encoder embedding dimension"} - ) - encoder_ffn_embed_dim: int = field( - default=3072, metadata={"help": "encoder embedding dimension for FFN"} - ) - encoder_attention_heads: int = field( - default=12, metadata={"help": "num encoder attention heads"} - ) - activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="gelu", metadata={"help": "activation function to use"} - ) - - # dropouts - dropout: float = field( - default=0.1, metadata={"help": "dropout probability for the transformer"} - ) - attention_dropout: float = field( - default=0.1, metadata={"help": "dropout probability for attention weights"} - ) - activation_dropout: float = field( - default=0.0, metadata={"help": "dropout probability after activation in FFN"} - ) - encoder_layerdrop: float = field( - default=0.0, metadata={"help": "probability of dropping a tarnsformer layer"} - ) - dropout_input: float = field( - default=0.0, - metadata={"help": "dropout to apply to the input (after feat extr)"}, - ) - dropout_features: float = field( - default=0.0, - metadata={"help": "dropout to apply to the features (after feat extr)"}, - ) - - final_dim: int = field( - default=0, - metadata={ - "help": "project final representations and targets to this many dimensions." 
- "set to encoder_embed_dim is <= 0" - }, - ) - layer_norm_first: bool = field( - default=False, metadata={"help": "apply layernorm first in the transformer"} - ) - conv_feature_layers: str = field( - default="[(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512,2,2)] + [(512,2,2)]", - metadata={ - "help": "string describing convolutional feature extraction layers in form of a python list that contains " - "[(dim, kernel_size, stride), ...]" - }, - ) - conv_bias: bool = field( - default=False, metadata={"help": "include bias in conv encoder"} - ) - logit_temp: float = field( - default=0.1, metadata={"help": "temperature to divide logits by"} - ) - quantize_targets: bool = field( - default=False, metadata={"help": "use quantized targets"} - ) - quantize_input: bool = field( - default=False, metadata={"help": "use quantized inputs"} - ) - same_quantizer: bool = field( - default=False, metadata={"help": "use same quantizer for inputs and targets"} - ) - target_glu: bool = field( - default=False, metadata={"help": "adds projection + glu to targets"} - ) - feature_grad_mult: float = field( - default=1.0, metadata={"help": "multiply feature extractor var grads by this"} - ) - quantizer_depth: int = field( - default=1, - metadata={"help": "number of quantizer layers"}, - ) - quantizer_factor: int = field( - default=3, - metadata={ - "help": "dimensionality increase for inner quantizer layers (if depth > 1)" - }, - ) - latent_vars: int = field( - default=320, - metadata={"help": "number of latent variables V in each group of the codebook"}, - ) - latent_groups: int = field( - default=2, - metadata={"help": "number of groups G of latent variables in the codebook"}, - ) - latent_dim: int = field( - default=0, - metadata={ - "help": "if > 0, uses this dimensionality for latent variables. 
" - "otherwise uses final_dim / latent_groups" - }, - ) - - # masking - mask_length: int = field(default=10, metadata={"help": "mask length"}) - mask_prob: float = field( - default=0.65, metadata={"help": "probability of replacing a token with mask"} - ) - mask_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", metadata={"help": "how to choose mask length"} - ) - mask_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument (used for more complex distributions), " - "see help in compute_mask_indices" - }, - ) - no_mask_overlap: bool = field( - default=False, metadata={"help": "whether to allow masks to overlap"} - ) - mask_min_space: int = field( - default=1, - metadata={"help": "min space between spans (if no overlap is enabled)"}, - ) - - # channel masking - mask_channel_length: int = field( - default=10, metadata={"help": "length of the mask for features (channels)"} - ) - mask_channel_prob: float = field( - default=0.0, metadata={"help": "probability of replacing a feature with 0"} - ) - mask_channel_before: bool = False - mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", - metadata={"help": "how to choose mask length for channel masking"}, - ) - mask_channel_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument (used for more complex distributions), " - "see help in compute_mask_indicesh" - }, - ) - no_mask_channel_overlap: bool = field( - default=False, metadata={"help": "whether to allow channel masks to overlap"} - ) - mask_channel_min_space: int = field( - default=1, - metadata={"help": "min space between spans (if no overlap is enabled)"}, - ) - - # negative selection - num_negatives: int = field( - default=100, - metadata={"help": "number of negative examples from the same sample"}, - ) - negatives_from_everywhere: bool = field( - default=False, - metadata={"help": "sample negatives from everywhere, not just masked states"}, - ) - 
cross_sample_negatives: int = field(
-        default=0, metadata={"help": "number of negative examples from any sample"}
-    )
-    codebook_negatives: int = field(
-        default=0, metadata={"help": "number of negative examples from the codebook"}
-    )
-
-    # positional embeddings
-    conv_pos: int = field(
-        default=128,
-        metadata={"help": "number of filters for convolutional positional embeddings"},
-    )
-    conv_pos_groups: int = field(
-        default=16,
-        metadata={"help": "number of groups for convolutional positional embedding"},
-    )
-
-    latent_temp: Tuple[float, float, float] = field(
-        default=(2, 0.5, 0.999995),
-        metadata={
-            "help": "temperature for latent variable sampling. "
-            "can be tuple of 3 values (start, end, decay)"
-        },
-    )
-
-
-@register_model("wav2vec2", dataclass=Wav2Vec2Config)
-class Wav2Vec2Model(BaseFairseqModel):
-    def __init__(self, cfg: Wav2Vec2Config):
-        super().__init__()
-        self.cfg = cfg
-
-        feature_enc_layers = eval(cfg.conv_feature_layers)
-        self.embed = feature_enc_layers[-1][0]
-
-        self.feature_extractor = ConvFeatureExtractionModel(
-            conv_layers=feature_enc_layers,
-            dropout=0.0,
-            mode=cfg.extractor_mode,
-            conv_bias=cfg.conv_bias,
-        )
-
-        self.post_extract_proj = (
-            nn.Linear(self.embed, cfg.encoder_embed_dim)
-            if self.embed != cfg.encoder_embed_dim and not cfg.quantize_input
-            else None
-        )
-
-        self.mask_prob = cfg.mask_prob
-        self.mask_selection = cfg.mask_selection
-        self.mask_other = cfg.mask_other
-        self.mask_length = cfg.mask_length
-        self.no_mask_overlap = cfg.no_mask_overlap
-        self.mask_min_space = cfg.mask_min_space
-
-        self.mask_channel_prob = cfg.mask_channel_prob
-        self.mask_channel_before = cfg.mask_channel_before
-        self.mask_channel_selection = cfg.mask_channel_selection
-        self.mask_channel_other = cfg.mask_channel_other
-        self.mask_channel_length = cfg.mask_channel_length
-        self.no_mask_channel_overlap = cfg.no_mask_channel_overlap
-        self.mask_channel_min_space = cfg.mask_channel_min_space
-
-        self.dropout_input = 
nn.Dropout(cfg.dropout_input) - self.dropout_features = nn.Dropout(cfg.dropout_features) - - self.feature_grad_mult = cfg.feature_grad_mult - - self.quantizer = None - self.input_quantizer = None - - self.n_negatives = cfg.num_negatives - self.cross_sample_negatives = cfg.cross_sample_negatives - self.codebook_negatives = cfg.codebook_negatives - self.negatives_from_everywhere = cfg.negatives_from_everywhere - - self.logit_temp = cfg.logit_temp - - final_dim = cfg.final_dim if cfg.final_dim > 0 else cfg.encoder_embed_dim - - if cfg.quantize_targets: - vq_dim = cfg.latent_dim if cfg.latent_dim > 0 else final_dim - self.quantizer = GumbelVectorQuantizer( - dim=self.embed, - num_vars=cfg.latent_vars, - temp=cfg.latent_temp, - groups=cfg.latent_groups, - combine_groups=False, - vq_dim=vq_dim, - time_first=True, - weight_proj_depth=cfg.quantizer_depth, - weight_proj_factor=cfg.quantizer_factor, - ) - self.project_q = nn.Linear(vq_dim, final_dim) - else: - self.project_q = nn.Linear(self.embed, final_dim) - - if cfg.quantize_input: - if cfg.same_quantizer and self.quantizer is not None: - vq_dim = final_dim - self.input_quantizer = self.quantizer - else: - vq_dim = cfg.latent_dim if cfg.latent_dim > 0 else cfg.encoder_embed_dim - self.input_quantizer = GumbelVectorQuantizer( - dim=self.embed, - num_vars=cfg.latent_vars, - temp=cfg.latent_temp, - groups=cfg.latent_groups, - combine_groups=False, - vq_dim=vq_dim, - time_first=True, - weight_proj_depth=cfg.quantizer_depth, - weight_proj_factor=cfg.quantizer_factor, - ) - self.project_inp = nn.Linear(vq_dim, cfg.encoder_embed_dim) - - self.mask_emb = nn.Parameter( - torch.FloatTensor(cfg.encoder_embed_dim).uniform_() - ) - - self.encoder = TransformerEncoder(cfg) - self.layer_norm = LayerNorm(self.embed) - - self.target_glu = None - if cfg.target_glu: - self.target_glu = nn.Sequential( - nn.Linear(final_dim, final_dim * 2), nn.GLU() - ) - - self.final_proj = nn.Linear(cfg.encoder_embed_dim, final_dim) - - def 
upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - return state_dict - - @classmethod - def build_model(cls, cfg: Wav2Vec2Config, task=None): - """Build a new model instance.""" - - return cls(cfg) - - def apply_mask( - self, - x, - padding_mask, - mask_indices=None, - mask_channel_indices=None, - ): - B, T, C = x.shape - - if self.mask_channel_prob > 0 and self.mask_channel_before: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_channel_prob, - self.mask_channel_length, - self.mask_channel_selection, - self.mask_channel_other, - no_overlap=self.no_mask_channel_overlap, - min_space=self.mask_channel_min_space, - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices) - .to(x.device) - .unsqueeze(1) - .expand(-1, T, -1) - ) - x[mask_channel_indices] = 0 - - if self.mask_prob > 0: - if mask_indices is None: - mask_indices = compute_mask_indices( - (B, T), - padding_mask, - self.mask_prob, - self.mask_length, - self.mask_selection, - self.mask_other, - min_masks=2, - no_overlap=self.no_mask_overlap, - min_space=self.mask_min_space, - ) - mask_indices = torch.from_numpy(mask_indices).to(x.device) - x = index_put(x, mask_indices, self.mask_emb) - else: - mask_indices = None - - if self.mask_channel_prob > 0 and not self.mask_channel_before: - if mask_channel_indices is None: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_channel_prob, - self.mask_channel_length, - self.mask_channel_selection, - self.mask_channel_other, - no_overlap=self.no_mask_channel_overlap, - min_space=self.mask_channel_min_space, - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices) - .to(x.device) - .unsqueeze(1) - .expand(-1, T, -1) - ) - x = index_put(x, mask_channel_indices, 0) - - return x, mask_indices - - def sample_negatives(self, y, num, padding_count=None): - - if 
self.n_negatives == 0 and self.cross_sample_negatives == 0: - return y.new(0) - - bsz, tsz, fsz = y.shape - y = y.view(-1, fsz) # BTC => (BxT)C - - # FIXME: what happens if padding_count is specified? - cross_high = tsz * bsz - high = tsz - (padding_count or 0) - with torch.no_grad(): - assert high > 1, f"{bsz,tsz,fsz}" - - if self.n_negatives > 0: - tszs = ( - buffered_arange(num) - .unsqueeze(-1) - .expand(-1, self.n_negatives) - .flatten() - ) - - neg_idxs = torch.randint( - low=0, high=high - 1, size=(bsz, self.n_negatives * num) - ) - neg_idxs[neg_idxs >= tszs] += 1 - - if self.cross_sample_negatives > 0: - tszs = ( - buffered_arange(num) - .unsqueeze(-1) - .expand(-1, self.cross_sample_negatives) - .flatten() - ) - - cross_neg_idxs = torch.randint( - low=0, - high=cross_high - 1, - size=(bsz, self.cross_sample_negatives * num), - ) - cross_neg_idxs[cross_neg_idxs >= tszs] += 1 - - if self.n_negatives > 0: - for i in range(1, bsz): - neg_idxs[i] += i * high - else: - neg_idxs = cross_neg_idxs - - if self.cross_sample_negatives > 0 and self.n_negatives > 0: - neg_idxs = torch.cat([neg_idxs, cross_neg_idxs], dim=1) - - negs = y[neg_idxs.view(-1)] - negs = negs.view( - bsz, num, self.n_negatives + self.cross_sample_negatives, fsz - ).permute( - 2, 0, 1, 3 - ) # to NxBxTxC - return negs, neg_idxs - - def compute_preds(self, x, y, negatives): - - neg_is_pos = (y == negatives).all(-1) - y = y.unsqueeze(0) - targets = torch.cat([y, negatives], dim=0) - - logits = torch.cosine_similarity(x.float(), targets.float(), dim=-1).type_as(x) - - logits = logits / self.logit_temp - - if is_xla_tensor(logits) or neg_is_pos.any(): - fillval = -float(2 ** 30) - if not hasattr(self, "_inftensor"): - self._inftensor = ( - torch.tensor(fillval).to(x.device) - if is_xla_tensor(logits) - else float("-inf") - ) - logits[1:] = index_put(logits[1:], neg_is_pos, self._inftensor) - - return logits - - def _get_feat_extract_output_lengths(self, input_lengths: torch.LongTensor): - """ - 
Computes the output length of the convolutional layers - """ - - def _conv_out_length(input_length, kernel_size, stride): - return torch.floor((input_length - kernel_size) / stride + 1) - - conv_cfg_list = eval(self.cfg.conv_feature_layers) - - for i in range(len(conv_cfg_list)): - input_lengths = _conv_out_length( - input_lengths, conv_cfg_list[i][1], conv_cfg_list[i][2] - ) - - return input_lengths.to(torch.long) - - def forward( - self, - source, - padding_mask=None, - mask=True, - features_only=False, - layer=None, - mask_indices=None, - mask_channel_indices=None, - padding_count=None, - ): - - if self.feature_grad_mult > 0: - features = self.feature_extractor(source) - if self.feature_grad_mult != 1.0: - features = GradMultiply.apply(features, self.feature_grad_mult) - else: - with torch.no_grad(): - features = self.feature_extractor(source) - - features_pen = features.float().pow(2).mean() - - features = features.transpose(1, 2) - features = self.layer_norm(features) - unmasked_features = features.clone() - - if padding_mask is not None and padding_mask.any(): - input_lengths = (1 - padding_mask.long()).sum(-1) - # apply conv formula to get real output_lengths - output_lengths = self._get_feat_extract_output_lengths(input_lengths) - - padding_mask = torch.zeros( - features.shape[:2], dtype=features.dtype, device=features.device - ) - - # these two operations makes sure that all values - # before the output lengths indices are attended to - padding_mask[ - ( - torch.arange(padding_mask.shape[0], device=padding_mask.device), - output_lengths - 1, - ) - ] = 1 - padding_mask = (1 - padding_mask.flip([-1]).cumsum(-1).flip([-1])).bool() - else: - padding_mask = None - - if self.post_extract_proj is not None: - features = self.post_extract_proj(features) - - features = self.dropout_input(features) - unmasked_features = self.dropout_features(unmasked_features) - - num_vars = None - code_ppl = None - prob_ppl = None - curr_temp = None - - if self.input_quantizer: - q 
= self.input_quantizer(features, produce_targets=False) - features = q["x"] - num_vars = q["num_vars"] - code_ppl = q["code_perplexity"] - prob_ppl = q["prob_perplexity"] - curr_temp = q["temp"] - features = self.project_inp(features) - - if mask: - x, mask_indices = self.apply_mask( - features, - padding_mask, - mask_indices=mask_indices, - mask_channel_indices=mask_channel_indices, - ) - if not is_xla_tensor(x) and mask_indices is not None: - # tpu-comment: reducing the size in a dynamic way causes - # too many recompilations on xla. - y = unmasked_features[mask_indices].view( - unmasked_features.size(0), -1, unmasked_features.size(-1) - ) - else: - y = unmasked_features - else: - x = features - y = unmasked_features - mask_indices = None - - x, layer_results = self.encoder(x, padding_mask=padding_mask, layer=layer) - - if features_only: - return { - "x": x, - "padding_mask": padding_mask, - "features": unmasked_features, - "layer_results": layer_results, - } - - if self.quantizer: - q = self.quantizer(y, produce_targets=False) - y = q["x"] - num_vars = q["num_vars"] - code_ppl = q["code_perplexity"] - prob_ppl = q["prob_perplexity"] - curr_temp = q["temp"] - - y = self.project_q(y) - - if self.negatives_from_everywhere: - neg_cands = self.quantizer(unmasked_features, produce_targets=False)[ - "x" - ] - negs, _ = self.sample_negatives( - neg_cands, - y.size(1), - padding_count=padding_count, - ) - negs = self.project_q(negs) - - else: - negs, _ = self.sample_negatives( - y, - y.size(1), - padding_count=padding_count, - ) - - if self.codebook_negatives > 0: - cb_negs = self.quantizer.sample_from_codebook( - y.size(0) * y.size(1), self.codebook_negatives - ) - cb_negs = cb_negs.view( - self.codebook_negatives, y.size(0), y.size(1), -1 - ) # order doesnt matter - cb_negs = self.project_q(cb_negs) - negs = torch.cat([negs, cb_negs], dim=0) - else: - y = self.project_q(y) - - if self.negatives_from_everywhere: - negs, _ = self.sample_negatives( - unmasked_features, - 
y.size(1), - padding_count=padding_count, - ) - negs = self.project_q(negs) - else: - negs, _ = self.sample_negatives( - y, - y.size(1), - padding_count=padding_count, - ) - - if not is_xla_tensor(x): - # tpu-comment: reducing the size in a dynamic way causes - # too many recompilations on xla. - x = x[mask_indices].view(x.size(0), -1, x.size(-1)) - - if self.target_glu: - y = self.target_glu(y) - negs = self.target_glu(negs) - - x = self.final_proj(x) - x = self.compute_preds(x, y, negs) - - result = { - "x": x, - "padding_mask": padding_mask, - "features_pen": features_pen, - } - - if prob_ppl is not None: - result["prob_perplexity"] = prob_ppl - result["code_perplexity"] = code_ppl - result["num_vars"] = num_vars - result["temp"] = curr_temp - - return result - - def quantize(self, x): - assert self.quantizer is not None - x = self.feature_extractor(x) - x = x.transpose(1, 2) - x = self.layer_norm(x) - return self.quantizer.forward_idx(x) - - def extract_features(self, source, padding_mask, mask=False, layer=None): - res = self.forward( - source, padding_mask, mask=mask, features_only=True, layer=layer - ) - return res - - def get_logits(self, net_output): - logits = net_output["x"] - logits = logits.transpose(0, 2) - logits = logits.reshape(-1, logits.size(-1)) - return logits - - def get_targets(self, sample, net_output, expand_steps=True): - x = net_output["x"] - return x.new_zeros(x.size(1) * x.size(2), dtype=torch.long) - - def get_extra_losses(self, net_output): - pen = [] - - if "prob_perplexity" in net_output: - pen.append( - (net_output["num_vars"] - net_output["prob_perplexity"]) - / net_output["num_vars"] - ) - - if "features_pen" in net_output: - pen.append(net_output["features_pen"]) - - return pen - - def remove_pretraining_modules(self): - self.quantizer = None - self.project_q = None - self.target_glu = None - self.final_proj = None - - -class ConvFeatureExtractionModel(nn.Module): - def __init__( - self, - conv_layers: List[Tuple[int, int, 
int]], - dropout: float = 0.0, - mode: str = "default", - conv_bias: bool = False, - ): - super().__init__() - - assert mode in {"default", "layer_norm"} - - def block( - n_in, - n_out, - k, - stride, - is_layer_norm=False, - is_group_norm=False, - conv_bias=False, - ): - def make_conv(): - conv = nn.Conv1d(n_in, n_out, k, stride=stride, bias=conv_bias) - nn.init.kaiming_normal_(conv.weight) - return conv - - assert ( - is_layer_norm and is_group_norm - ) == False, "layer norm and group norm are exclusive" - - if is_layer_norm: - return nn.Sequential( - make_conv(), - nn.Dropout(p=dropout), - nn.Sequential( - TransposeLast(), - Fp32LayerNorm(dim, elementwise_affine=True), - TransposeLast(), - ), - nn.GELU(), - ) - elif is_group_norm: - return nn.Sequential( - make_conv(), - nn.Dropout(p=dropout), - Fp32GroupNorm(dim, dim, affine=True), - nn.GELU(), - ) - else: - return nn.Sequential(make_conv(), nn.Dropout(p=dropout), nn.GELU()) - - in_d = 1 - self.conv_layers = nn.ModuleList() - for i, cl in enumerate(conv_layers): - assert len(cl) == 3, "invalid conv definition: " + str(cl) - (dim, k, stride) = cl - - self.conv_layers.append( - block( - in_d, - dim, - k, - stride, - is_layer_norm=mode == "layer_norm", - is_group_norm=mode == "default" and i == 0, - conv_bias=conv_bias, - ) - ) - in_d = dim - - def forward(self, x): - - # BxT -> BxCxT - x = x.unsqueeze(1) - - for conv in self.conv_layers: - x = conv(x) - - return x - - -class TransformerEncoder(nn.Module): - def __init__(self, args): - super().__init__() - - self.dropout = args.dropout - self.embedding_dim = args.encoder_embed_dim - - self.pos_conv = nn.Conv1d( - self.embedding_dim, - self.embedding_dim, - kernel_size=args.conv_pos, - padding=args.conv_pos // 2, - groups=args.conv_pos_groups, - ) - dropout = 0 - std = math.sqrt((4 * (1.0 - dropout)) / (args.conv_pos * self.embedding_dim)) - nn.init.normal_(self.pos_conv.weight, mean=0, std=std) - nn.init.constant_(self.pos_conv.bias, 0) - - self.pos_conv = 
nn.utils.weight_norm(self.pos_conv, name="weight", dim=2) - self.pos_conv = nn.Sequential(self.pos_conv, SamePad(args.conv_pos), nn.GELU()) - - self.layers = nn.ModuleList( - [ - TransformerSentenceEncoderLayer( - embedding_dim=self.embedding_dim, - ffn_embedding_dim=args.encoder_ffn_embed_dim, - num_attention_heads=args.encoder_attention_heads, - dropout=self.dropout, - attention_dropout=args.attention_dropout, - activation_dropout=args.activation_dropout, - activation_fn=args.activation_fn, - layer_norm_first=args.layer_norm_first, - ) - for _ in range(args.encoder_layers) - ] - ) - - self.layer_norm_first = args.layer_norm_first - self.layer_norm = LayerNorm(self.embedding_dim) - self.layerdrop = args.encoder_layerdrop - - self.apply(init_bert_params) - - def forward(self, x, padding_mask=None, layer=None): - x, layer_results = self.extract_features(x, padding_mask, layer) - - if self.layer_norm_first and layer is None: - x = self.layer_norm(x) - - return x, layer_results - - def extract_features(self, x, padding_mask=None, tgt_layer=None): - - if padding_mask is not None: - x = index_put(x, padding_mask, 0) - - x_conv = self.pos_conv(x.transpose(1, 2)) - x_conv = x_conv.transpose(1, 2) - x = x + x_conv - - if not self.layer_norm_first: - x = self.layer_norm(x) - - x = F.dropout(x, p=self.dropout, training=self.training) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - layer_results = [] - r = None - for i, layer in enumerate(self.layers): - dropout_probability = np.random.random() - if not self.training or (dropout_probability > self.layerdrop): - x, z = layer(x, self_attn_padding_mask=padding_mask, need_weights=False) - if tgt_layer is not None: - layer_results.append((x, z)) - if i == tgt_layer: - r = x - break - - if r is not None: - x = r - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - return x, layer_results - - def max_positions(self): - """Maximum output length supported by the encoder.""" - return self.args.max_positions - - def 
upgrade_state_dict_named(self, state_dict, name):
-        """Upgrade a (possibly old) state dict for new versions of fairseq."""
-        return state_dict
-
-
-class TransformerSentenceEncoderLayer(nn.Module):
-    """
-    Implements a Transformer Encoder Layer used in BERT/XLM style pre-trained
-    models.
-    """
-
-    def __init__(
-        self,
-        embedding_dim: float = 768,
-        ffn_embedding_dim: float = 3072,
-        num_attention_heads: float = 8,
-        dropout: float = 0.1,
-        attention_dropout: float = 0.1,
-        activation_dropout: float = 0.1,
-        activation_fn: str = "relu",
-        layer_norm_first: bool = False,
-    ) -> None:
-
-        super().__init__()
-        # Initialize parameters
-        self.embedding_dim = embedding_dim
-        self.dropout = dropout
-        self.activation_dropout = activation_dropout
-
-        # Initialize blocks
-        self.activation_fn = utils.get_activation_fn(activation_fn)
-        self.self_attn = MultiheadAttention(
-            self.embedding_dim,
-            num_attention_heads,
-            dropout=attention_dropout,
-            self_attention=True,
-        )
-
-        self.dropout1 = nn.Dropout(dropout)
-        self.dropout2 = nn.Dropout(self.activation_dropout)
-        self.dropout3 = nn.Dropout(dropout)
-
-        self.layer_norm_first = layer_norm_first
-
-        # layer norm associated with the self attention layer
-        self.self_attn_layer_norm = LayerNorm(self.embedding_dim)
-        self.fc1 = nn.Linear(self.embedding_dim, ffn_embedding_dim)
-        self.fc2 = nn.Linear(ffn_embedding_dim, self.embedding_dim)
-
-        # layer norm associated with the position wise feed-forward NN
-        self.final_layer_norm = LayerNorm(self.embedding_dim)
-
-    def forward(
-        self,
-        x: torch.Tensor,
-        self_attn_mask: torch.Tensor = None,
-        self_attn_padding_mask: torch.Tensor = None,
-        need_weights: bool = False,
-        att_args=None,
-    ):
-        """
-        LayerNorm is applied either before or after the self-attention/ffn
-        modules similar to the original Transformer implementation. 
- """ - residual = x - - if self.layer_norm_first: - x = self.self_attn_layer_norm(x) - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - attn_mask=self_attn_mask, - ) - x = self.dropout1(x) - x = residual + x - - residual = x - x = self.final_layer_norm(x) - x = self.activation_fn(self.fc1(x)) - x = self.dropout2(x) - x = self.fc2(x) - x = self.dropout3(x) - x = residual + x - else: - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - ) - - x = self.dropout1(x) - x = residual + x - - x = self.self_attn_layer_norm(x) - - residual = x - x = self.activation_fn(self.fc1(x)) - x = self.dropout2(x) - x = self.fc2(x) - x = self.dropout3(x) - x = residual + x - x = self.final_layer_norm(x) - - return x, attn diff --git a/spaces/gradio/HuBERT/fairseq/trainer.py b/spaces/gradio/HuBERT/fairseq/trainer.py deleted file mode 100644 index 1deb14326f90dea246b9a1a8d3b97b95c5472a5e..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/trainer.py +++ /dev/null @@ -1,1439 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Train a network across multiple GPUs. 
-""" - -import contextlib -import logging -import sys -import time -from argparse import Namespace -from itertools import chain -from typing import Any, Dict, List - -import torch -from fairseq import checkpoint_utils, models, optim, utils -from fairseq.dataclass.configs import FairseqConfig -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.distributed import utils as distributed_utils -from fairseq.file_io import PathManager -from fairseq.logging import meters, metrics -from fairseq.nan_detector import NanDetector -from fairseq.optim import lr_scheduler -from omegaconf import OmegaConf - -logger = logging.getLogger(__name__) - - -class Trainer(object): - """Main class for data parallel training. - - This class supports synchronous distributed data parallel training, - where multiple workers each have a full model replica and gradients - are accumulated across workers before each update. We use - :class:`~torch.nn.parallel.DistributedDataParallel` to handle - communication of the gradients across workers. - """ - - def __init__(self, cfg: FairseqConfig, task, model, criterion, quantizer=None): - - if isinstance(cfg, Namespace): - logger.warning( - "argparse.Namespace configuration is deprecated! 
Automatically converting to OmegaConf" - ) - cfg = convert_namespace_to_omegaconf(cfg) - - self.cfg = cfg - self.task = task - - # catalog shared parameters - shared_params = _catalog_shared_params(model) - self.tpu = cfg.common.tpu - self.cuda = torch.cuda.is_available() and not cfg.common.cpu and not self.tpu - if self.cuda: - self.device = torch.device("cuda") - elif self.tpu: - self.device = utils.get_tpu_device() - else: - self.device = torch.device("cpu") - - if self.cfg.distributed_training.ddp_backend == "fully_sharded": - if self.cfg.common.bf16: - raise ValueError( - "FullyShardedDataParallel is not compatible with --bf16 or " - "--memory-efficient-bf16" - ) - if self.cfg.distributed_training.zero_sharding != "none": - raise ValueError( - "FullyShardedDataParallel is not compatible with --zero-sharding " - "option (it's already built in)" - ) - else: - if ( - hasattr(self.cfg.distributed_training, "cpu_offload") - and self.cfg.distributed_training.cpu_offload - ): - raise ValueError("--cpu-offload requires --ddp-backend=fully_sharded") - - # copy model and criterion to current device/dtype - self._criterion = criterion - self._model = model - if cfg.distributed_training.ddp_backend != "fully_sharded": - if cfg.common.fp16: - assert not cfg.common.amp, "Cannot use fp16 and AMP together" - self._criterion = self._criterion.half() - self._model = self._model.half() - elif cfg.common.bf16: - self._criterion = self._criterion.to(dtype=torch.bfloat16) - self._model = self._model.to(dtype=torch.bfloat16) - elif cfg.common.amp: - self._amp_retries = 0 - if ( - not cfg.distributed_training.pipeline_model_parallel - # the DistributedFairseqModel wrapper will handle moving to device, - # so only handle cases which don't use the wrapper - and not self.use_distributed_wrapper - ): - self._criterion = self._criterion.to(device=self.device) - self._model = self._model.to(device=self.device) - self.pipeline_model_parallel = 
cfg.distributed_training.pipeline_model_parallel - self.last_device = None - if self.cuda and self.pipeline_model_parallel: - self.last_device = torch.device( - cfg.distributed_training.pipeline_devices[-1] - ) - - # check that shared parameters are preserved after device transfer - for shared_param in shared_params: - ref = _get_module_by_path(self._model, shared_param[0]) - for path in shared_param[1:]: - logger.info( - "detected shared parameter: {} <- {}".format(shared_param[0], path) - ) - _set_module_by_path(self._model, path, ref) - - self._dummy_batch = None # indicates we don't have a dummy batch at first - self._lr_scheduler = None - self._num_updates = 0 - self._num_xla_compiles = 0 # for TPUs - self._optim_history = None - self._optimizer = None - self._warn_once = set() - self._wrapped_criterion = None - self._wrapped_model = None - - # TODO(myleott): support tpu - if self.cuda and self.data_parallel_world_size > 1: - self._grad_norm_buf = torch.cuda.DoubleTensor(self.data_parallel_world_size) - else: - self._grad_norm_buf = None - - self.quantizer = quantizer - if self.quantizer is not None: - self.quantizer.set_trainer(self) - - # get detailed cuda environment - if self.cuda: - self.cuda_env = utils.CudaEnvironment() - if self.data_parallel_world_size > 1: - self.cuda_env_arr = distributed_utils.all_gather_list( - self.cuda_env, group=distributed_utils.get_global_group() - ) - else: - self.cuda_env_arr = [self.cuda_env] - if self.data_parallel_rank == 0: - utils.CudaEnvironment.pretty_print_cuda_env_list(self.cuda_env_arr) - else: - self.cuda_env = None - self.cuda_env_arr = None - - metrics.log_start_time("wall", priority=790, round=0) - - self._start_time = time.time() - self._previous_training_time = 0 - self._cumulative_training_time = None - - def reinitialize(self): - """Reinitialize the Trainer, typically after model params change.""" - self._lr_scheduler = None - self._optimizer = None - self._wrapped_criterion = None - self._wrapped_model = 
None - - @property - def data_parallel_world_size(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 1 - return distributed_utils.get_data_parallel_world_size() - - @property - def data_parallel_process_group(self): - return distributed_utils.get_data_parallel_group() - - @property - def data_parallel_rank(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 0 - return distributed_utils.get_data_parallel_rank() - - @property - def is_data_parallel_master(self): - # NOTE: this returns true for all model parallel replicas with data - # parallel rank 0 - return self.data_parallel_rank == 0 - - @property - def use_distributed_wrapper(self) -> bool: - return ( - self.data_parallel_world_size > 1 and not self.cfg.optimization.use_bmuf - ) or ( - self.cfg.distributed_training.ddp_backend == "fully_sharded" - and self.cfg.distributed_training.cpu_offload - ) - - @property - def should_save_checkpoint_on_current_rank(self) -> bool: - """Indicates whether to save checkpoints on the current DDP rank.""" - if ( - self.cfg.distributed_training.ddp_backend == "fully_sharded" - and self.cfg.distributed_training.use_sharded_state - ) or getattr(self.cfg.model, "base_layers", 0) > 0: - return True - else: - return self.is_data_parallel_master - - @property - def always_call_state_dict_during_save_checkpoint(self) -> bool: - if ( - self.cfg.distributed_training.ddp_backend == "fully_sharded" - and not self.cfg.distributed_training.use_sharded_state - ): - # FSDP calls communication collective when consolidating checkpoints - return True - else: - return False - - @property - def checkpoint_suffix(self) -> str: - """Suffix to add to the checkpoint file name.""" - if ( - self.cfg.distributed_training.ddp_backend == "fully_sharded" - and self.cfg.distributed_training.use_sharded_state - ): - return self.cfg.checkpoint.checkpoint_suffix + "-shard{0}".format( - self.data_parallel_rank - ) - else: - return 
self.cfg.checkpoint.checkpoint_suffix or "" - - @property - def criterion(self): - if self._wrapped_criterion is None: - if utils.has_parameters(self._criterion) and self.use_distributed_wrapper: - self._wrapped_criterion = models.DistributedFairseqModel( - self.cfg.distributed_training, - self._criterion, - process_group=self.data_parallel_process_group, - device=self.device, - ) - else: - self._wrapped_criterion = self._criterion - return self._wrapped_criterion - - @property - def model(self): - if self._wrapped_model is None: - if self.use_distributed_wrapper: - self._wrapped_model = models.DistributedFairseqModel( - self.cfg.distributed_training, - self._model, - process_group=self.data_parallel_process_group, - device=self.device, - ) - else: - self._wrapped_model = self._model - return self._wrapped_model - - @property - def optimizer(self): - if self._optimizer is None: - self._build_optimizer() - return self._optimizer - - @property - def lr_scheduler(self): - if self._lr_scheduler is None: - self._build_optimizer() # this will initialize self._lr_scheduler - return self._lr_scheduler - - def _build_optimizer(self): - params = list( - filter( - lambda p: p.requires_grad, - chain(self.model.parameters(), self.criterion.parameters()), - ) - ) - - if ( - self.cfg.distributed_training.ddp_backend == "fully_sharded" - and self.cfg.common.fp16 - ): - # FullyShardedDataParallel always uses MemoryEfficientFP16 wrapper, - # mostly for the grad scaling. But if we don't have the - # --memory-efficient-fp16 flag set, then we're effectively doing - # regular --fp16 and can allow the use of optimizers that would - # otherwise be unsupported by MemoryEfficientFP16Optimizer. 
- allow_unsupported = not self.cfg.common.memory_efficient_fp16 - self._optimizer = optim.MemoryEfficientFP16Optimizer.build_optimizer( - self.cfg, params, allow_unsupported=allow_unsupported - ) - elif self.cfg.common.fp16 or self.cfg.common.bf16 or self.cfg.common.amp: - if self.cuda and torch.cuda.get_device_capability(0)[0] < 7: - logger.info( - "NOTE: your device does NOT support faster training with --fp16 or --amp, " - "please switch to FP32 which is likely to be faster" - ) - if ( - self.cfg.common.memory_efficient_fp16 - or self.cfg.common.memory_efficient_bf16 - ): - self._optimizer = optim.MemoryEfficientFP16Optimizer.build_optimizer( - self.cfg, params - ) - elif self.cfg.common.amp: - self._optimizer = optim.AMPOptimizer.build_optimizer(self.cfg, params) - else: - self._optimizer = optim.FP16Optimizer.build_optimizer(self.cfg, params) - else: - if self.cuda and torch.cuda.get_device_capability(0)[0] >= 7: - logger.info("NOTE: your device may support faster training with --fp16 or --amp") - self._optimizer = optim.build_optimizer(self.cfg.optimizer, params) - - if self.cfg.distributed_training.ddp_backend == "fully_sharded": - assert ( - not self.cfg.optimization.use_bmuf - ), "--ddp-backend=fully_sharded is not compatible with BMUF" - assert self._optimizer.supports_flat_params, ( - "--ddp-backend=fully_sharded is only compatible with pointwise " - "optimizers (e.g., Adam, AdamW, Adadelta, Adamax, SGD, etc.). 
" - "However, the sharding will result in slightly different results when " - "using non-pointwise optimizers (e.g., Adagrad, Adafactor, LAMB)" - ) - - if self.cfg.optimization.use_bmuf: - self._optimizer = optim.FairseqBMUF( - self.cfg.bmuf, - self._optimizer, - ) - - if self.cfg.distributed_training.zero_sharding == "os": - if ( - self.cfg.common.fp16 - and not self.cfg.common.memory_efficient_fp16 - and not self.cfg.common.memory_efficient_bf16 - ) and not self.cfg.common.fp16_no_flatten_grads: - raise ValueError( - "ZeRO is incompatible with fp16 and flattened grads. " - "Please use --fp16-no-flatten-grads" - ) - else: - optim.shard_(self._optimizer, self.data_parallel_process_group) - - # We should initialize the learning rate scheduler immediately after - # building the optimizer, so that the initial learning rate is set. - self._lr_scheduler = lr_scheduler.build_lr_scheduler( - self.cfg.lr_scheduler, - self.optimizer, - ) - self._lr_scheduler.step_update(0) - - def consolidate_optimizer(self): - """For OSS, we need to consolidate the state dict.""" - if self.cfg.checkpoint.no_save_optimizer_state: - return - self._gathered_optim_state = None - if hasattr(self.optimizer.optimizer, "consolidate_state_dict"): - self.optimizer.optimizer.consolidate_state_dict() - - elif ( - self.cfg.distributed_training.ddp_backend == "fully_sharded" - and not self.model.use_sharded_state - ): - st = self.model.gather_full_optim_state_dict( - self.optimizer - ) # only returns on rank 0 - self._gathered_optim_state = st - - def state_dict(self): - state_dict = { - "args": None, # legacy - "cfg": ( - OmegaConf.to_container(self.cfg, resolve=True, enum_to_str=True) - if OmegaConf.is_config(self.cfg) - else self.cfg - ), - "model": self.model.state_dict(), - "criterion": ( - self.criterion.state_dict() - if utils.has_parameters(self.criterion) - else None - ), - "optimizer_history": (self._optim_history or []) - + [ - { - "criterion_name": self.get_criterion().__class__.__name__, - 
"optimizer_name": self.optimizer.__class__.__name__, - "lr_scheduler_state": self.lr_scheduler.state_dict(), - "num_updates": self.get_num_updates(), - } - ], - "task_state": self.task.state_dict() if self.task is not None else {}, - "extra_state": { - "metrics": metrics.state_dict(), - "previous_training_time": self.cumulative_training_time(), - }, - } - if not self.cfg.checkpoint.no_save_optimizer_state: - if self._gathered_optim_state is not None: - state_dict["last_optimizer_state"] = self._gathered_optim_state - self._gathered_optim_state = None - else: - state_dict["last_optimizer_state"] = self.optimizer.state_dict() - if self.cfg.distributed_training.ddp_backend == "fully_sharded": - # save meta data for recombining checkpoint upon loading - state_dict["fsdp_metadata"] = self.model.local_metadata_dict() - return state_dict - - def save_checkpoint(self, filename, extra_state): - """Save all training state in a checkpoint file.""" - logger.info(f"Saving checkpoint to {filename}") - # call state_dict on all ranks in case it needs internal communication - state_dict = utils.move_to_cpu(self.state_dict()) - state_dict["extra_state"].update(extra_state) - if self.should_save_checkpoint_on_current_rank: - checkpoint_utils.torch_persistent_save( - state_dict, - filename, - async_write=self.cfg.checkpoint.write_checkpoints_asynchronously, - ) - logger.info(f"Finished saving checkpoint to {filename}") - - def load_checkpoint( - self, - filename, - reset_optimizer=False, - reset_lr_scheduler=False, - optimizer_overrides=None, - reset_meters=False, - ): - """ - Load all training state from a checkpoint file. - rank = 0 will load the checkpoint, and then broadcast it to all - other ranks. 
- """ - extra_state, self._optim_history, last_optim_state = None, [], None - - logger.info(f"Preparing to load checkpoint {filename}") - is_distributed = self.data_parallel_world_size > 1 - bexists = PathManager.isfile(filename) - if bexists: - load_on_all_ranks = ( - self.cfg.checkpoint.load_checkpoint_on_all_dp_ranks - # TPUs don't support broadcast yet, so load checkpoints - # on every worker for now - or self.tpu - # FSDP requires loading checkpoint shards on all ranks - or ( - self.cfg.distributed_training.ddp_backend == "fully_sharded" - and self.cfg.distributed_training.use_sharded_state - ) - or getattr(self.cfg.model, "base_layers", 0) > 0 - ) - - if load_on_all_ranks or self.data_parallel_rank == 0: - state = checkpoint_utils.load_checkpoint_to_cpu( - filename, load_on_all_ranks=load_on_all_ranks - ) - last_optim_state = state.get("last_optimizer_state", None) - - # If doing zero_sharding, do not broadcast global optimizer - # state. Later we will broadcast sharded states to each rank - # to avoid memory from exploding. 
- if ( - not load_on_all_ranks - and self.cfg.distributed_training.zero_sharding == "os" - and "last_optimizer_state" in state - and is_distributed - ): - state["last_optimizer_state"] = "SHARDED" - else: - last_optim_state = None - state = None - - if is_distributed and not load_on_all_ranks: - state = distributed_utils.broadcast_object( - state, - src_rank=0, - group=self.data_parallel_process_group, - dist_device=self.device, - ) - if self.data_parallel_rank > 0: - last_optim_state = state.get("last_optimizer_state", None) - - # load model parameters - try: - self.model.load_state_dict( - state["model"], strict=True, model_cfg=self.cfg.model - ) - # save memory for later steps - del state["model"] - if utils.has_parameters(self.get_criterion()): - self.get_criterion().load_state_dict( - state["criterion"], strict=True - ) - del state["criterion"] - - except Exception: - raise Exception( - "Cannot load model parameters from checkpoint {}; " - "please ensure that the architectures match.".format(filename) - ) - extra_state = state["extra_state"] - self._optim_history = state["optimizer_history"] - - if last_optim_state is not None and not reset_optimizer: - # rebuild optimizer after loading model, since params may have changed - self._build_optimizer() - - # only reload optimizer and lr_scheduler if they match - last_optim = self._optim_history[-1] - assert ( - last_optim["criterion_name"] == self.get_criterion().__class__.__name__ - ), f"Criterion does not match; please reset the optimizer (--reset-optimizer). {last_optim['criterion_name']} vs {self.get_criterion().__class__.__name__}" - assert ( - last_optim["optimizer_name"] == self.optimizer.__class__.__name__ - ), f"Optimizer does not match; please reset the optimizer (--reset-optimizer). 
{last_optim['optimizer_name']} vs {self.optimizer.__class__.__name__}" - - if not reset_lr_scheduler: - self.lr_scheduler.load_state_dict(last_optim["lr_scheduler_state"]) - - if ( - self.cfg.distributed_training.ddp_backend == "fully_sharded" - and not self.model.use_sharded_state - ): - # if use_sharded_state, the last_optim_state is already sharded, skip this - last_optim_state = self.model.get_shard_from_optim_state_dict( - last_optim_state - ) - elif not load_on_all_ranks and is_distributed: - last_optim_state = self.optimizer.broadcast_global_state_dict( - last_optim_state - ) - - self.optimizer.load_state_dict(last_optim_state, optimizer_overrides) - - self.set_num_updates(last_optim["num_updates"]) - - if extra_state is not None: - itr_state = extra_state["train_iterator"] - epoch = itr_state["epoch"] - - if "previous_training_time" in extra_state: - self._previous_training_time = extra_state["previous_training_time"] - self._start_time = time.time() - - self.lr_step(epoch) - - if ( - itr_state.get("version", 1) >= 2 - and itr_state["iterations_in_epoch"] == 0 - ): - # reset meters at start of epoch - reset_meters = True - - if "metrics" in extra_state and not reset_meters: - metrics.load_state_dict(extra_state["metrics"]) - - # reset TimeMeters, since their start times don't make sense anymore - for meter in metrics.get_meters("default"): - if isinstance(meter, meters.TimeMeter): - meter.reset() - - logger.info( - "Loaded checkpoint {} (epoch {} @ {} updates)".format( - filename, epoch, self.get_num_updates() - ) - ) - - else: - logger.info("No existing checkpoint found {}".format(filename)) - - return extra_state - - def get_train_iterator( - self, - epoch, - combine=True, - load_dataset=True, - data_selector=None, - shard_batch_itr=True, - disable_iterator_cache=False, - ): - """Return an EpochBatchIterator over the training set for a given epoch.""" - if load_dataset: - logger.info("loading train data for epoch {}".format(epoch)) - 
self.task.load_dataset( - self.cfg.dataset.train_subset, - epoch=epoch, - combine=combine, - data_selector=data_selector, - tpu=self.tpu, - ) - batch_iterator = self.task.get_batch_iterator( - dataset=self.task.dataset(self.cfg.dataset.train_subset), - max_tokens=self.cfg.dataset.max_tokens, - max_sentences=self.cfg.dataset.batch_size, - max_positions=utils.resolve_max_positions( - self.task.max_positions(), - self.model.max_positions(), - self.cfg.dataset.max_tokens, - ), - ignore_invalid_inputs=True, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=self.cfg.common.seed, - num_shards=self.data_parallel_world_size if shard_batch_itr else 1, - shard_id=self.data_parallel_rank if shard_batch_itr else 0, - num_workers=self.cfg.dataset.num_workers, - epoch=epoch, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - ) - self.reset_dummy_batch(batch_iterator.first_batch) - return batch_iterator - - def get_valid_iterator( - self, - subset, - disable_iterator_cache=False, - ): - """Return an EpochBatchIterator over given validation subset for a given epoch.""" - batch_iterator = self.task.get_batch_iterator( - dataset=self.task.dataset(subset), - max_tokens=self.cfg.dataset.max_tokens_valid, - max_sentences=self.cfg.dataset.batch_size_valid, - max_positions=utils.resolve_max_positions( - self.task.max_positions(), - self.model.max_positions(), - ), - ignore_invalid_inputs=self.cfg.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=self.cfg.common.seed, - num_shards=self.data_parallel_world_size, - shard_id=self.data_parallel_rank, - num_workers=self.cfg.dataset.num_workers, - # always pass a fixed "epoch" to keep validation data consistent - # across training epochs - epoch=1, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - ) - 
self.reset_dummy_batch(batch_iterator.first_batch) - return batch_iterator - - def begin_epoch(self, epoch): - """Called at the beginning of each epoch.""" - logger.info("begin training epoch {}".format(epoch)) - - self.lr_step_begin_epoch(epoch) - - if self.quantizer is not None: - self.quantizer.begin_epoch(epoch) - - # task specific setup per epoch - self.task.begin_epoch(epoch, self.get_model()) - - if self.tpu: - import torch_xla.core.xla_model as xm - - xm.rendezvous("begin_epoch") # wait for all workers - xm.mark_step() - - def begin_valid_epoch(self, epoch): - """Called at the beginning of each validation epoch.""" - - # task specific setup per validation epoch - self.task.begin_valid_epoch(epoch, self.get_model()) - - def reset_dummy_batch(self, batch): - self._dummy_batch = batch - - @metrics.aggregate("train") - def train_step(self, samples, raise_oom=False): - """Do forward, backward and parameter update.""" - self._set_seed() - self.model.train() - self.criterion.train() - self.zero_grad() - - metrics.log_start_time("train_wall", priority=800, round=0) - - # forward and backward pass - logging_outputs, sample_size, ooms = [], 0, 0 - for i, sample in enumerate(samples): # delayed update loop - sample, is_dummy_batch = self._prepare_sample(sample) - - def maybe_no_sync(): - """ - Whenever *samples* contains more than one mini-batch, we - want to accumulate gradients locally and only call - all-reduce in the last backwards pass. 
- """ - if ( - self.data_parallel_world_size > 1 - and hasattr(self.model, "no_sync") - and i < len(samples) - 1 - ): - return self.model.no_sync() - else: - return contextlib.ExitStack() # dummy contextmanager - - try: - with maybe_no_sync(): - # forward and backward - loss, sample_size_i, logging_output = self.task.train_step( - sample=sample, - model=self.model, - criterion=self.criterion, - optimizer=self.optimizer, - update_num=self.get_num_updates(), - ignore_grad=is_dummy_batch, - ) - del loss - - logging_outputs.append(logging_output) - sample_size += sample_size_i - - # emptying the CUDA cache after the first step can - # reduce the chance of OOM - if self.cuda and self.get_num_updates() == 0: - torch.cuda.empty_cache() - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - if raise_oom: - raise e - logger.warning( - "attempting to recover from OOM in forward/backward pass" - ) - ooms += 1 - self.zero_grad() - if self.cuda: - torch.cuda.empty_cache() - if self.cfg.distributed_training.distributed_world_size == 1: - return None - else: - raise e - - if self.tpu and i < len(samples) - 1: - # tpu-comment: every XLA operation before marking step is - # appended to the IR graph, and processing too many batches - # before marking step can lead to OOM errors. 
- # To handle gradient accumulation use case, we explicitly - # mark step here for every forward pass without a backward pass - self._xla_markstep_and_send_to_cpu() - - if is_dummy_batch: - if torch.is_tensor(sample_size): - sample_size.zero_() - else: - sample_size *= 0.0 - - if torch.is_tensor(sample_size): - sample_size = sample_size.float() - else: - sample_size = float(sample_size) - - # gather logging outputs from all replicas - if self._sync_stats(): - train_time = self._local_cumulative_training_time() - logging_outputs, ( - sample_size, - ooms, - total_train_time, - ) = self._aggregate_logging_outputs( - logging_outputs, sample_size, ooms, train_time, ignore=is_dummy_batch - ) - self._cumulative_training_time = ( - total_train_time / self.data_parallel_world_size - ) - - overflow = False - try: - with torch.autograd.profiler.record_function("reduce-grads"): - # reduce gradients across workers - self.optimizer.all_reduce_grads(self.model) - if utils.has_parameters(self.criterion): - self.optimizer.all_reduce_grads(self.criterion) - - with torch.autograd.profiler.record_function("multiply-grads"): - # multiply gradients by (data_parallel_size / sample_size) since - # DDP normalizes by the number of data parallel workers for - # improved fp16 precision. - # Thus we get (sum_of_gradients / sample_size) at the end. - # In case of fp16, this step also undoes loss scaling. - # (Debugging note: Some optimizers perform this scaling on the - # fly, so inspecting model.parameters() or optimizer.params may - # still show the original, unscaled gradients.) - numer = ( - self.data_parallel_world_size - if not self.cfg.optimization.use_bmuf or self._sync_stats() - else 1 - ) - self.optimizer.multiply_grads(numer / (sample_size or 1.0)) - # Note: (sample_size or 1.0) handles the case of a zero gradient, in a - # way that avoids CPU/device transfers in case sample_size is a GPU or - # TPU object. The assumption is that the gradient itself is also 0. 
- - with torch.autograd.profiler.record_function("clip-grads"): - # clip grads - grad_norm = self.clip_grad_norm(self.cfg.optimization.clip_norm) - - # check that grad norms are consistent across workers - # on tpu check tensor is slow - if not self.tpu: - if ( - not self.cfg.optimization.use_bmuf - and self.cfg.distributed_training.ddp_backend != "slow_mo" - ): - self._check_grad_norms(grad_norm) - if not torch.isfinite(grad_norm).all(): - # in case of AMP, if gradients are Nan/Inf then - # optimizer step is still required - if self.cfg.common.amp: - overflow = True - else: - # check local gradnorm single GPU case, trigger NanDetector - raise FloatingPointError("gradients are Nan/Inf") - - with torch.autograd.profiler.record_function("optimizer"): - # take an optimization step - self.task.optimizer_step( - self.optimizer, model=self.model, update_num=self.get_num_updates() - ) - if self.cfg.common.amp and overflow: - if self._amp_retries == self.cfg.common.amp_batch_retries: - logger.info("AMP: skipping this batch.") - self._amp_retries = 0 - else: - self._amp_retries += 1 - return self.train_step(samples, raise_oom) # recursion to feed in same batch - - except FloatingPointError: - # re-run the forward and backward pass with hooks attached to print - # out where it fails - self.zero_grad() - with NanDetector(self.get_model()): - for _, sample in enumerate(samples): - sample, _ = self._prepare_sample(sample) - self.task.train_step( - sample, - self.model, - self.criterion, - self.optimizer, - self.get_num_updates(), - ignore_grad=False, - ) - raise - except OverflowError as e: - overflow = True - logger.info( - f"NOTE: gradient overflow detected, ignoring gradient, {str(e)}" - ) - grad_norm = torch.tensor(0.0).cuda() - self.zero_grad() - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - logger.error("OOM during optimization, irrecoverable") - raise e - - # Some distributed wrappers (e.g., SlowMo) need access to the optimizer - # after 
the step - if hasattr(self.model, "perform_additional_optimizer_actions"): - if hasattr(self.optimizer, "fp32_params"): - self.model.perform_additional_optimizer_actions( - self.optimizer.optimizer, self.optimizer.fp32_params - ) - else: - self.model.perform_additional_optimizer_actions( - self.optimizer.optimizer - ) - - logging_output = None - if not overflow or self.cfg.distributed_training.ddp_backend == "slow_mo": - self.set_num_updates(self.get_num_updates() + 1) - - if self.tpu: - import torch_xla.core.xla_model as xm - - # mark step on TPUs - self._xla_markstep_and_send_to_cpu() - - # only log stats every log_interval steps - # this causes wps to be misreported when log_interval > 1 - logging_output = {} - if self.get_num_updates() % self.cfg.common.log_interval == 0: - # log memory usage - mem_info = xm.get_memory_info(self.device) - gb_free = mem_info["kb_free"] / 1024 / 1024 - gb_total = mem_info["kb_total"] / 1024 / 1024 - metrics.log_scalar( - "gb_free", gb_free, priority=1500, round=1, weight=0 - ) - metrics.log_scalar( - "gb_total", gb_total, priority=1600, round=1, weight=0 - ) - logging_outputs = self._xla_markstep_and_send_to_cpu( - logging_outputs - ) - logging_output = self._reduce_and_log_stats( - logging_outputs, sample_size, grad_norm - ) - - # log whenever there's an XLA compilation, since these - # slow down training and may indicate opportunities for - # optimization - self._check_xla_compilation() - else: - if self.cuda and self.cuda_env is not None: - # log minimum free memory over the iteration - gb_used = torch.cuda.max_memory_allocated() / 1024 / 1024 / 1024 - torch.cuda.reset_peak_memory_stats() - gb_free = self.cuda_env.total_memory_in_GB - gb_used - metrics.log_scalar( - "gb_free", gb_free, priority=1500, round=1, weight=0 - ) - - # log stats - logging_output = self._reduce_and_log_stats( - logging_outputs, sample_size, grad_norm - ) - - # clear CUDA cache to reduce memory fragmentation - if ( - self.cuda - and 
self.cfg.common.empty_cache_freq > 0 - and ( - (self.get_num_updates() + self.cfg.common.empty_cache_freq - 1) - % self.cfg.common.empty_cache_freq - ) - == 0 - ): - torch.cuda.empty_cache() - - if self.cfg.common.fp16 or self.cfg.common.amp: - metrics.log_scalar( - "loss_scale", - ( - self.optimizer.scaler.loss_scale - if self.cfg.common.fp16 - else self.optimizer.scaler.get_scale() - ), - priority=700, - round=4, - weight=0, - ) - - metrics.log_stop_time("train_wall") - return logging_output - - @metrics.aggregate("valid") - def valid_step(self, sample, raise_oom=False): - """Do forward pass in evaluation mode.""" - if self.tpu: - import torch_xla.core.xla_model as xm - - xm.rendezvous("valid_step") # wait for all workers - - with torch.no_grad(): - self.model.eval() - self.criterion.eval() - - sample, is_dummy_batch = self._prepare_sample(sample) - - try: - _loss, sample_size, logging_output = self.task.valid_step( - sample, self.model, self.criterion - ) - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - if not raise_oom: - logger.warning( - "ran out of memory in validation step, retrying batch" - ) - for p in self.model.parameters(): - if p.grad is not None: - p.grad = None # free some memory - if self.cuda: - torch.cuda.empty_cache() - return self.valid_step(sample, raise_oom=True) - raise e - - logging_outputs = [logging_output] - if is_dummy_batch: - if torch.is_tensor(sample_size): - sample_size.zero_() - else: - sample_size *= 0.0 - - # gather logging outputs from all replicas - if self.data_parallel_world_size > 1: - logging_outputs, (sample_size,) = self._aggregate_logging_outputs( - logging_outputs, - sample_size, - ignore=is_dummy_batch, - ) - - # log validation stats - if self.tpu: - logging_outputs = self._xla_markstep_and_send_to_cpu(logging_outputs) - logging_output = self._reduce_and_log_stats(logging_outputs, sample_size) - - return logging_output - - def zero_grad(self): - self.optimizer.zero_grad() - - def 
lr_step_begin_epoch(self, epoch): - """Adjust the learning rate at the beginning of the epoch.""" - self.lr_scheduler.step_begin_epoch(epoch) - # prefer updating the LR based on the number of steps - return self.lr_step_update() - - def lr_step(self, epoch, val_loss=None): - """Adjust the learning rate at the end of the epoch.""" - self.lr_scheduler.step(epoch, val_loss) - # prefer updating the LR based on the number of steps - return self.lr_step_update() - - def lr_step_update(self): - """Update the learning rate after each update.""" - new_lr = self.lr_scheduler.step_update(self.get_num_updates()) - if isinstance(new_lr, dict): - for k, v in new_lr.items(): - metrics.log_scalar(f"lr_{k}", v, weight=0, priority=300) - new_lr = new_lr.get("default", next(iter(new_lr.values()))) - else: - metrics.log_scalar("lr", new_lr, weight=0, priority=300) - return new_lr - - def get_lr(self): - """Get the current learning rate.""" - return self.optimizer.get_lr() - - def get_model(self): - """Get the (non-wrapped) model instance.""" - return self._model - - def get_criterion(self): - """Get the (non-wrapped) criterion instance.""" - return self._criterion - - def get_meter(self, name): - """[deprecated] Get a specific meter by name.""" - from fairseq import meters - - if "get_meter" not in self._warn_once: - self._warn_once.add("get_meter") - utils.deprecation_warning( - "Trainer.get_meter is deprecated. Please use fairseq.metrics instead." 
- ) - - train_meters = metrics.get_meters("train") - if train_meters is None: - train_meters = {} - - if name == "train_loss" and "loss" in train_meters: - return train_meters["loss"] - elif name == "train_nll_loss": - # support for legacy train.py, which assumed this meter is - # always initialized - m = train_meters.get("nll_loss", None) - return m or meters.AverageMeter() - elif name == "wall": - # support for legacy train.py, which assumed this meter is - # always initialized - m = metrics.get_meter("default", "wall") - return m or meters.TimeMeter() - elif name == "wps": - m = metrics.get_meter("train", "wps") - return m or meters.TimeMeter() - elif name in {"valid_loss", "valid_nll_loss"}: - # support for legacy train.py, which assumed these meters - # are always initialized - k = name[len("valid_") :] - m = metrics.get_meter("valid", k) - return m or meters.AverageMeter() - elif name == "oom": - return meters.AverageMeter() - elif name in train_meters: - return train_meters[name] - return None - - def get_num_updates(self): - """Get the number of parameters updates.""" - return self._num_updates - - def set_num_updates(self, num_updates): - """Set the number of parameters updates.""" - self._num_updates = num_updates - self.lr_step_update() - if self.quantizer: - self.quantizer.step_update(self._num_updates) - metrics.log_scalar("num_updates", self._num_updates, weight=0, priority=200) - - def clip_grad_norm(self, clip_norm): - def agg_norm_fn(total_norm): - total_norm = total_norm.cuda().float() ** 2 - total_norm = distributed_utils.all_reduce( - total_norm, group=self.data_parallel_process_group - ) - return total_norm ** 0.5 - - should_agg_norm = ( - self.cfg.distributed_training.ddp_backend == "fully_sharded" - and ( - self.data_parallel_process_group is not None - or torch.distributed.is_initialized() - ) - ) - return self.optimizer.clip_grad_norm( - clip_norm, aggregate_norm_fn=agg_norm_fn if should_agg_norm else None - ) - - def 
cumulative_training_time(self): - if self._cumulative_training_time is None: - # single GPU - return self._local_cumulative_training_time() - else: - return self._cumulative_training_time - - def _local_cumulative_training_time(self): - """Aggregate training time in seconds.""" - return time.time() - self._start_time + self._previous_training_time - - def _fp_convert_sample(self, sample): - def apply_half(t): - if t.dtype is torch.float32: - return t.to(dtype=torch.half) - return t - - def apply_bfloat16(t): - if t.dtype is torch.float32: - return t.to(dtype=torch.bfloat16) - return t - - if self.cfg.common.fp16: - sample = utils.apply_to_sample(apply_half, sample) - - if self.cfg.common.bf16: - sample = utils.apply_to_sample(apply_bfloat16, sample) - - return sample - - def _prepare_sample(self, sample, is_dummy=False): - if sample == "DUMMY": - raise Exception( - "Trying to use an uninitialized 'dummy' batch. This usually indicates " - "that the total number of batches is smaller than the number of " - "participating GPUs. Try reducing the batch size or using fewer GPUs." - ) - - if sample is None or len(sample) == 0: - assert ( - self._dummy_batch is not None and len(self._dummy_batch) > 0 - ), "Invalid dummy batch: {}".format(self._dummy_batch) - sample, _ = self._prepare_sample(self._dummy_batch, is_dummy=True) - return sample, True - - # Given that PCIe/NVLink bandwidth is significantly smaller than DRAM bandwidth - # it makes sense to do the format conversion on the CPU and then transfer - # a smaller buffer to the device. This also saves GPU memory capacity. 
- - if self.cfg.common.on_cpu_convert_precision: - sample = self._fp_convert_sample(sample) - - if self.cuda: - if self.pipeline_model_parallel: - if 'target' in sample: - sample['target'] = utils.move_to_cuda(sample['target'], device=self.last_device) - else: - sample = utils.move_to_cuda(sample) - elif self.tpu and is_dummy: - # the dummy batch may not be on the appropriate device - sample = utils.move_to_cuda(sample, device=self.device) - - if not self.cfg.common.on_cpu_convert_precision: - sample = self._fp_convert_sample(sample) - - if self._dummy_batch == "DUMMY": - self._dummy_batch = sample - - return sample, False - - def _set_seed(self): - # Set seed based on args.seed and the update number so that we get - # reproducible results when resuming from checkpoints - seed = self.cfg.common.seed + self.get_num_updates() - utils.set_torch_seed(seed) - - def _sync_stats(self): - # Return True if it's using multiple GPUs and DDP or multiple GPUs with - # BMUF and it's a bmuf sync with warmup iterations completed before. 
- if self.data_parallel_world_size == 1: - return False - elif self.cfg.optimization.use_bmuf: - return ( - self.get_num_updates() + 1 - ) % self.cfg.bmuf.global_sync_iter == 0 and ( - self.get_num_updates() + 1 - ) > self.cfg.bmuf.warmup_iterations - else: - return True - - def _log_oom(self, exc): - msg = "OOM: Ran out of memory with exception: {}".format(exc) - logger.warning(msg) - if torch.cuda.is_available() and hasattr(torch.cuda, "memory_summary"): - for device_idx in range(torch.cuda.device_count()): - logger.warning(torch.cuda.memory_summary(device=device_idx)) - sys.stderr.flush() - - def _aggregate_logging_outputs( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - if self.task.__class__.logging_outputs_can_be_summed(self.get_criterion()): - return self._fast_stat_sync_sum( - logging_outputs, *extra_stats_to_sum, ignore=ignore - ) - else: - return self._all_gather_list_sync( - logging_outputs, *extra_stats_to_sum, ignore=ignore - ) - - def _all_gather_list_sync( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - """ - Sync logging outputs across workers. all_gather_list_sync is - suitable when logging outputs are complex types. - """ - if self.tpu: - raise NotImplementedError - if ignore: - logging_outputs = [] - results = list( - zip( - *distributed_utils.all_gather_list( - [logging_outputs] + list(extra_stats_to_sum), - max_size=getattr(self.cfg.common, "all_gather_list_size", 16384), - group=self.data_parallel_process_group, - ) - ) - ) - logging_outputs, extra_stats_to_sum = results[0], results[1:] - logging_outputs = list(chain.from_iterable(logging_outputs)) - extra_stats_to_sum = [sum(s) for s in extra_stats_to_sum] - return logging_outputs, extra_stats_to_sum - - def _fast_stat_sync_sum( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - """ - Sync logging outputs across workers. 
fast_stat_sync_sum is - faster than all_gather_list_sync, but is only suitable when - logging outputs are scalars and can be summed. Note that - *logging_outputs* cannot contain any nested dicts/lists. - """ - data = {} - for i, stat in enumerate(extra_stats_to_sum): - data["extra_stats_" + str(i)] = stat - if len(logging_outputs) > 0: - log_keys = list(logging_outputs[0].keys()) - for k in log_keys: - if not ignore: - v = sum(log[k] for log in logging_outputs if k in log) - else: - v = logging_outputs[0][k] - v = torch.zeros_like(v) if torch.is_tensor(v) else 0 - data["logging_outputs_" + k] = v - else: - log_keys = None - - data = distributed_utils.all_reduce_dict( - data, device=self.device, group=self.data_parallel_process_group - ) - - extra_stats_to_sum = [ - data["extra_stats_" + str(i)] for i in range(len(extra_stats_to_sum)) - ] - if log_keys is not None: - logging_outputs = [{k: data["logging_outputs_" + k] for k in log_keys}] - else: - logging_outputs = [] - return logging_outputs, extra_stats_to_sum - - def _check_grad_norms(self, grad_norm): - """Check that grad norms are consistent across workers.""" - if self._grad_norm_buf is not None: - self._grad_norm_buf.zero_() - self._grad_norm_buf[self.data_parallel_rank] = grad_norm - distributed_utils.all_reduce( - self._grad_norm_buf, group=self.data_parallel_process_group - ) - - def is_consistent(tensor): - max_abs_diff = torch.max(torch.abs(tensor - tensor[0])) - return ( - (torch.isfinite(tensor).all() - and (max_abs_diff / (tensor[0] + 1e-6) < 1e-6).all()) - or - (self.cfg.common.amp and not torch.isfinite(tensor).all()) - # in case of amp non-finite grads are fine - ) - - if not is_consistent(self._grad_norm_buf): - pretty_detail = "\n".join( - "rank {:3d} = {:.8f}".format(r, n) - for r, n in enumerate(self._grad_norm_buf.tolist()) - ) - error_detail = "grad_norm across the workers:\n{}\n".format( - pretty_detail - ) - # use FloatingPointError to trigger NanDetector - raise FloatingPointError( - 
"Fatal error: gradients are inconsistent between workers. " - "Try --ddp-backend=legacy_ddp. " - "Or are you mixing up different generations of GPUs in training?" - + "\n" - + "-" * 80 - + "\n{}\n".format(error_detail) - + "-" * 80 - ) - - def _reduce_and_log_stats(self, logging_outputs, sample_size, grad_norm=None): - if grad_norm is not None and ( - not torch.is_tensor(grad_norm) or torch.isfinite(grad_norm) - ): - metrics.log_speed("ups", 1.0, priority=100, round=2) - metrics.log_scalar("gnorm", grad_norm, priority=400, round=3) - if self.cfg.optimization.clip_norm > 0: - metrics.log_scalar( - "clip", - torch.where( - grad_norm > self.cfg.optimization.clip_norm, - grad_norm.new_tensor(100), - grad_norm.new_tensor(0), - ), - priority=500, - round=1, - ) - - with metrics.aggregate() as agg: - if logging_outputs is not None: - self.task.reduce_metrics(logging_outputs, self.get_criterion()) - del logging_outputs - - # extra warning for criterions that don't properly log a loss value - if "loss" not in agg: - if "loss" not in self._warn_once: - self._warn_once.add("loss") - logger.warning( - "Criterion.reduce_metrics did not log a 'loss' value, " - "which may break some functionality" - ) - metrics.log_scalar("loss", -1) - - # support legacy interface - if self.tpu: - logging_output = {} - else: - logging_output = agg.get_smoothed_values() - logging_output["sample_size"] = sample_size - for key_to_delete in ["ppl", "wps", "wpb", "bsz"]: - if key_to_delete in logging_output: - del logging_output[key_to_delete] - return logging_output - - def _check_xla_compilation(self): - import torch_xla.debug.metrics as met - - compile_stats = met.metric_data("CompileTime") - if compile_stats is None: - return - num_xla_compiles = compile_stats[0] - if num_xla_compiles > self._num_xla_compiles: - logger.warning( - "XLA compilation detected on device #{}; too many of these can lead " - "to slow training, but we expect a few in the beginning".format( - 
self.cfg.distributed_training.distributed_rank - ) - ) - self._num_xla_compiles = num_xla_compiles - - def _xla_markstep_and_send_to_cpu(self, data=None): - import torch_xla.core.xla_model as xm - - xm.mark_step() - if data is not None: - from fairseq.utils import xla_device_to_cpu - - return xla_device_to_cpu(data) - - -def _catalog_shared_params(module, memo=None, prefix=""): - if memo is None: - first_call = True - memo = {} - else: - first_call = False - for name, param in module._parameters.items(): - param_prefix = prefix + ("." if prefix else "") + name - if param not in memo: - memo[param] = [] - memo[param].append(param_prefix) - for name, m in module._modules.items(): - if m is None: - continue - submodule_prefix = prefix + ("." if prefix else "") + name - _catalog_shared_params(m, memo, submodule_prefix) - if first_call: - return [x for x in memo.values() if len(x) > 1] - - -def _get_module_by_path(module, path): - path = path.split(".") - for name in path: - module = getattr(module, name) - return module - - -def _set_module_by_path(module, path, value): - path = path.split(".") - for name in path[:-1]: - module = getattr(module, name) - setattr(module, path[-1], value) diff --git a/spaces/gradio/HuBERT/scripts/test_fsdp.sh b/spaces/gradio/HuBERT/scripts/test_fsdp.sh deleted file mode 100644 index 1f428a035e4474427ded991f8e8307ea59f61f69..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/scripts/test_fsdp.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env bash -rm -rf fsdp_dummy -mkdir -p fsdp_dummy -CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train /private/home/sshleifer/data-bin/stories_mmap \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 256 --batch-size 8 \ - --arch transformer_lm_gpt2_tiny \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - 
--max-update 5 --log-format json --log-interval 1 \ - --save-interval-updates 5 --save-dir fsdp_dummy --disable-validation \ - --restore-file x.pt "$@" - -# Now we try to load the checkpoint -CUDA_VISIBLE_DEVICES=0,1 fairseq-train /private/home/sshleifer/data-bin/stories_mmap \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 256 --batch-size 8 \ - --arch transformer_lm_gpt2_tiny \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 2 --log-format json --log-interval 1 \ - --save-interval-updates 2 --save-dir fsdp_dummy diff --git a/spaces/guoyww/AnimateDiff/download_bashscripts/3-RcnzCartoon.sh b/spaces/guoyww/AnimateDiff/download_bashscripts/3-RcnzCartoon.sh deleted file mode 100644 index 07f4f69d399e10b0a618501d7f72bcf7da571dd0..0000000000000000000000000000000000000000 --- a/spaces/guoyww/AnimateDiff/download_bashscripts/3-RcnzCartoon.sh +++ /dev/null @@ -1,2 +0,0 @@ -#!/bin/bash -wget https://civitai.com/api/download/models/71009 -P models/DreamBooth_LoRA/ --content-disposition --no-check-certificate \ No newline at end of file diff --git a/spaces/h2oai/h2ogpt-chatbot/gradio_utils/__init__.py b/spaces/h2oai/h2ogpt-chatbot/gradio_utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/h2oai/h2ogpt-chatbot2/README.md b/spaces/h2oai/h2ogpt-chatbot2/README.md deleted file mode 100644 index 420745b2dcb76877686b17a0e2f090721503fa43..0000000000000000000000000000000000000000 --- a/spaces/h2oai/h2ogpt-chatbot2/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: H2ogpt Chatbot -emoji: 📚 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: h2oai/h2ogpt-chatbot ---- - -Check out the 
configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/h2oai/wave-tour/examples/stat_small_series_area.py b/spaces/h2oai/wave-tour/examples/stat_small_series_area.py deleted file mode 100644 index dea80fe806c30a5ee95bcf793351c2c049b6f525..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/stat_small_series_area.py +++ /dev/null @@ -1,44 +0,0 @@ -# Stat / Series / Small / Area -# Create a small stat card displaying a primary value and a series plot. -# #stat_card #series -# --- -import time - -from faker import Faker - -from synth import FakeCategoricalSeries -from h2o_wave import site, ui, data - -page = site['/demo'] - -colors = '$red $pink $blue $azure $cyan $teal $mint $green $lime $yellow $amber $orange $tangerine'.split() -curves = 'linear smooth step step-after step-before'.split() -fake = Faker() -cards = [] -for i in range(len(curves)): - f = FakeCategoricalSeries() - cat, val, pc = f.next() - c = page.add(f'example{i}', ui.small_series_stat_card( - box=f'1 {i + 1} 1 1', - title=fake.cryptocurrency_name(), - value='=${{intl qux minimum_fraction_digits=2 maximum_fraction_digits=2}}', - data=dict(qux=val, quux=pc), - plot_category='foo', - plot_type='area', - plot_value='qux', - plot_color=colors[i], - plot_data=data('foo qux', -15), - plot_zero_value=0, - plot_curve=curves[i], - )) - cards.append((f, c)) -page.save() - -while True: - time.sleep(1) - for f, c in cards: - cat, val, pc = f.next() - c.data.qux = val - c.data.quux = pc - c.plot_data[-1] = [cat, val] - page.save() diff --git a/spaces/h2oai/wave-tour/examples/table_groupby.py b/spaces/h2oai/wave-tour/examples/table_groupby.py deleted file mode 100644 index 4a8c8f80634700a0a98f61ea02c5a124793d0cc2..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/table_groupby.py +++ /dev/null @@ -1,60 +0,0 @@ -# Table / Group by -# Allow grouping a table by column values. 
-# #table -# --- -import random -from faker import Faker -from h2o_wave import main, app, Q, ui - -fake = Faker() - -_id = 0 - - -class Issue: - def __init__(self, text: str, status: str, progress: float, icon: str, notifications: str): - global _id - _id += 1 - self.id = f'I{_id}' - self.text = text - self.status = status - self.views = 0 - self.progress = progress - self.icon = icon - self.notifications = notifications - - -# Create some issues -issues = [ - Issue( - text=fake.sentence(), - status=('Closed' if i % 2 == 0 else 'Open'), - progress=random.random(), - icon=('BoxCheckmarkSolid' if random.random() > 0.5 else 'BoxMultiplySolid'), - notifications=('Off' if random.random() > 0.5 else 'On')) for i in range(100) -] - -# Create columns for our issue table. -columns = [ - ui.table_column(name='text', label='Issue'), - ui.table_column(name='status', label='Status'), - ui.table_column(name='notifications', label='Notifications'), - ui.table_column(name='done', label='Done', cell_type=ui.icon_table_cell_type()), - ui.table_column(name='views', label='Views'), - ui.table_column(name='progress', label='Progress', cell_type=ui.progress_table_cell_type()), -] - - -@app('/demo') -async def serve(q: Q): - q.page['form'] = ui.form_card(box='1 1 -1 7', items=[ - ui.table( - name='issues', - columns=columns, - rows=[ui.table_row( - name=issue.id, - cells=[issue.text, issue.status, issue.notifications, issue.icon, str(issue.views), - str(issue.progress)]) for issue in issues], - groupable=True, - )]) - await q.page.save() diff --git a/spaces/h4d35/CosineSim/README.md b/spaces/h4d35/CosineSim/README.md deleted file mode 100644 index 191c5fe5811d2a9a19f947f62d1c0ce0d4f2e770..0000000000000000000000000000000000000000 --- a/spaces/h4d35/CosineSim/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: CosineSim -emoji: 🌖 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/hackathon-pln-es/extractive-qa-biomedicine/app.py b/spaces/hackathon-pln-es/extractive-qa-biomedicine/app.py deleted file mode 100644 index df6a72ff0f132246f5202ca40652273d1067bca5..0000000000000000000000000000000000000000 --- a/spaces/hackathon-pln-es/extractive-qa-biomedicine/app.py +++ /dev/null @@ -1,150 +0,0 @@ -import gradio as gr -from transformers import pipeline -import torch - -title = "Extractive QA Biomedicine" -description = """ -

-
-Recent research has made available Spanish Language Models trained on Biomedical corpora. This project explores the use of these new models to generate extractive Question Answering models for Biomedicine, and compares their effectiveness with that of general masked language models.
-
-The models were trained on the SQUAD_ES Dataset (an automatic translation of the Stanford Question Answering Dataset into Spanish). The SQUAD v2 version was chosen in order to include questions that cannot be answered based on the provided context.
-
-The models were evaluated on the BIOMED_SQUAD_ES_V2 Dataset, a subset of the SQUAD_ES evaluation dataset containing questions related to the Biomedical domain.
-
-The project is aligned with goal number 3 of the Sustainable Development Goals promoted by the United Nations: "Ensure healthy lives and promote well-being for all at all ages", since this research can lead to the development of tools that facilitate access to health information for doctors and Spanish speakers all over the world.
-
-In the following demo, the four trained models can be tested to answer a question given a context (the confidence score - from 0 to 1 - of the predicted answer is also displayed):
-

-""" -article = """ -

-

Results

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Model | Base Model Domain | exact | f1 | HasAns_exact | HasAns_f1 | NoAns_exact | NoAns_f1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| hackathon-pln-es/roberta-base-bne-squad2-es | General | 67.6341 | 75.6988 | 53.7367 | 70.0526 | 81.2174 | 81.2174 |
| hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es | Biomedical | 66.8426 | 75.2346 | 53.0249 | 70.0031 | 80.3478 | 80.3478 |
| hackathon-pln-es/roberta-base-biomedical-es-squad2-es | Biomedical | 67.6341 | 74.5612 | 47.6868 | 61.7012 | 87.1304 | 87.1304 |
| hackathon-pln-es/biomedtra-small-es-squad2-es | Biomedical | 34.4767 | 44.3294 | 45.3737 | 65.307 | 23.8261 | 23.8261 |
- -

Challenges

- -
  • Question Answering is a complex task to understand, as it requires not only pre-processing the inputs, but also post-processing the outputs. Moreover, the metrics used are quite specific. -
  • There is less documentation and fewer tutorials available for QA than for other, more popular NLP tasks. In particular, the examples provided are often focused on the SQUAD v1 format rather than on SQUAD v2, the format selected for this project.
-
  • Before the Hackathon, there was no publicly available Biomedical QA dataset in Spanish (particularly in the SQUAD v2 format). It was necessary to create a Biomedical validation dataset using the SQUAD_ES Dataset.
-
-

    Conclusion and Future Work

    -

If F1 Score is considered, the results show that there may be no advantage in using domain-specific masked language models to generate Biomedical QA models.
-
-
-However, the F1 Scores reported for the Biomedical roberta-based models are not far below those of the general roberta-based model.
-
-If only unanswerable questions are taken into account, the model with the best F1 Score is hackathon-pln-es/roberta-base-biomedical-es-squad2-es.
-
-
-The model hackathon-pln-es/biomedtra-small-es-squad2-es, on the contrary, shows an inability to correctly identify unanswerable questions.
-
-As future work, the following experiments could be carried out:
-
      -
    • Create Biomedical masked-language models adapted from a general model, to preserve words and features of Spanish that are also present in Biomedical questions and articles. The Biomedical base models used in the project were trained from scratch on a Biomedical corpus.
-
    • Create a Biomedical training dataset in the SQUAD v2 format.
-
    • Generate a new and larger Spanish Biomedical validation dataset, not translated from English as is the case with the SQUAD_ES Dataset.
-
    • Ensemble different models.
-
    -

    - -

    Author

    -Santiago Maximo -""" - -device = 0 if torch.cuda.is_available() else -1 -MODEL_NAMES = ["hackathon-pln-es/roberta-base-bne-squad2-es", - "hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es", - "hackathon-pln-es/roberta-base-biomedical-es-squad2-es", - "hackathon-pln-es/biomedtra-small-es-squad2-es"] - -examples = [ - [MODEL_NAMES[2], "¿Qué cidippido se utiliza como descripción de los ctenóforos en la mayoría de los libros de texto?","Para un filo con relativamente pocas especies, los ctenóforos tienen una amplia gama de planes corporales. Las especies costeras necesitan ser lo suficientemente duras para soportar las olas y remolcar partículas de sedimentos, mientras que algunas especies oceánicas son tan frágiles que es muy difícil capturarlas intactas para su estudio. Además, las especies oceánicas no conservan bien, y son conocidas principalmente por fotografías y notas de observadores. Por lo tanto, la mayor atención se ha concentrado recientemente en tres géneros costeros: Pleurobrachia, Beroe y Mnemiopsis. Al menos dos libros de texto basan sus descripciones de ctenóforos en los cidipépidos Pleurobrachia."], - [MODEL_NAMES[0], "¿Dónde se atasca un fagocito en un patógeno?", "La fagocitosis es una característica importante de la inmunidad celular innata llevada a cabo por células llamadas fagocitos que absorben, o comen, patógenos o partículas. Los fagocitos generalmente patrullan el cuerpo en busca de patógenos, pero pueden ser llamados a lugares específicos por citoquinas. 
Una vez que un patógeno ha sido absorbido por un fagocito, queda atrapado en una vesícula intracelular llamada fagosoma, que posteriormente se fusiona con otra vesícula llamada lisosoma para formar un fagocito."], - -] - -def getanswer(model_name, question, context): - - question_answerer = pipeline("question-answering", model=model_name, device=device) - - response = question_answerer({ - 'question': question, - 'context': context - }) - return response['answer'],response['score'] - -face = gr.Interface( - fn=getanswer, - inputs=[ - gr.inputs.Radio( - label="Pick a QA Model", - choices=MODEL_NAMES, - ), - gr.inputs.Textbox(lines=1, placeholder="Question Here… "), - gr.inputs.Textbox(lines=10, placeholder="Context Here… ") - ], - outputs=[ - gr.outputs.Textbox(label="Answer"), - gr.outputs.Textbox(label="Score"), - ], - layout="vertical", - title=title, - examples=examples, - description=description, - article=article, - allow_flagging ="never" -) -face.launch() \ No newline at end of file diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/profile.py b/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/profile.py deleted file mode 100644 index f10372cdef306e5e199db432b23062df1c098cf9..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/profile.py +++ /dev/null @@ -1,158 +0,0 @@ -import argparse - -import torch -import open_clip -import pandas as pd -from fvcore.nn import FlopCountAnalysis, flop_count_str, ActivationCountAnalysis - - -parser = argparse.ArgumentParser(description='OpenCLIP Profiler') - -# benchmark specific args -parser.add_argument('--model', metavar='NAME', default='', - help='model(s) to profile') -parser.add_argument('--results-file', default='', type=str, metavar='FILENAME', - help='Output csv file for results') - - -def profile_fvcore( - model, - image_input_size=(3, 224, 224), - text_input_size=(77,), - batch_size=1, - detailed=False, - force_cpu=False -): - if force_cpu: - model = 
model.to('cpu') - device, dtype = next(model.parameters()).device, next(model.parameters()).dtype - example_image_input = torch.ones((batch_size,) + image_input_size, device=device, dtype=dtype) - example_text_input = torch.ones((batch_size,) + text_input_size, device=device, dtype=torch.int64) - fca = FlopCountAnalysis(model, (example_image_input, example_text_input)) - aca = ActivationCountAnalysis(model, (example_image_input, example_text_input)) - if detailed: - fcs = flop_count_str(fca) - print(fcs) - return fca.total(), aca.total() - - -def profile_fvcore_text( - model, - text_input_size=(77,), - batch_size=1, - detailed=False, - force_cpu=False -): - if force_cpu: - model = model.to('cpu') - device = next(model.parameters()).device - example_input = torch.ones((batch_size,) + text_input_size, device=device, dtype=torch.int64) - fca = FlopCountAnalysis(model, example_input) - aca = ActivationCountAnalysis(model, example_input) - if detailed: - fcs = flop_count_str(fca) - print(fcs) - return fca.total(), aca.total() - - -def profile_fvcore_image( - model, - image_input_size=(3, 224, 224), - batch_size=1, - detailed=False, - force_cpu=False -): - if force_cpu: - model = model.to('cpu') - device, dtype = next(model.parameters()).device, next(model.parameters()).dtype - example_input = torch.ones((batch_size,) + image_input_size, device=device, dtype=dtype) - fca = FlopCountAnalysis(model, example_input) - aca = ActivationCountAnalysis(model, example_input) - if detailed: - fcs = flop_count_str(fca) - print(fcs) - return fca.total(), aca.total() - - -def count_params(model): - return sum([m.numel() for m in model.parameters()]) - - -def profile_model(model_name): - model = open_clip.create_model(model_name, force_custom_text=True, pretrained_hf=False) - model.eval() - if torch.cuda.is_available(): - model = model.cuda() - - if isinstance(model.visual.image_size, (tuple, list)): - image_input_size = (3,) + tuple(model.visual.image_size[-2:]) - else: - 
image_input_size = (3, model.visual.image_size, model.visual.image_size) - text_input_size = (77,) - - results = {} - results['model'] = model_name - results['image_size'] = image_input_size[1] - - model_cfg = open_clip.get_model_config(model_name) - if model_cfg: - vision_cfg = open_clip.CLIPVisionCfg(**model_cfg['vision_cfg']) - text_cfg = open_clip.CLIPTextCfg(**model_cfg['text_cfg']) - results['image_width'] = int(vision_cfg.width) - results['text_width'] = int(text_cfg.width) - results['embed_dim'] = int(model_cfg['embed_dim']) - else: - results['image_width'] = 0 - results['text_width'] = 0 - results['embed_dim'] = 0 - - retries = 2 - while retries: - retries -= 1 - try: - macs, acts = profile_fvcore( - model, image_input_size=image_input_size, text_input_size=text_input_size, force_cpu=not retries) - - image_macs, image_acts = profile_fvcore_image( - model.visual, image_input_size=image_input_size, force_cpu=not retries) - - text_macs, text_acts = profile_fvcore_text( - model.text, text_input_size=text_input_size, force_cpu=not retries) - - results['gmacs'] = round(macs / 1e9, 2) - results['macts'] = round(acts / 1e6, 2) - results['mparams'] = round(count_params(model) / 1e6, 2) - results['image_gmacs'] = round(image_macs / 1e9, 2) - results['image_macts'] = round(image_acts / 1e6, 2) - results['image_mparams'] = round(count_params(model.visual) / 1e6, 2) - results['text_gmacs'] = round(text_macs / 1e9, 2) - results['text_macts'] = round(text_acts / 1e6, 2) - results['text_mparams'] = round(count_params(model.text) / 1e6, 2) - except RuntimeError as e: - pass - return results - - -def main(): - args = parser.parse_args() - - # FIXME accept a text file name to allow lists of models in txt/csv - if args.model == 'all': - parsed_model = open_clip.list_models() - else: - parsed_model = args.model.split(',') - - results = [] - for m in parsed_model: - row = profile_model(m) - results.append(row) - - df = pd.DataFrame(results, columns=results[0].keys()) - df = 
df.sort_values('gmacs') - print(df) - if args.results_file: - df.to_csv(args.results_file, index=False) - - -if __name__ == '__main__': - main() diff --git a/spaces/hamelcubsfan/AutoGPT/benchmark/__init__.py b/spaces/hamelcubsfan/AutoGPT/benchmark/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/solver/__init__.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/solver/__init__.py deleted file mode 100644 index 75f40530cccb6b989d33193de92a6c26a07cf751..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/solver/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -from .build import make_optimizer -from .build import make_lr_scheduler -from .lr_scheduler import WarmupMultiStepLR diff --git a/spaces/harshhpareek/bertscore/app.py b/spaces/harshhpareek/bertscore/app.py deleted file mode 100644 index e24884e8afcbeb0c2a68c59d4e7db4eeb3fd8c43..0000000000000000000000000000000000000000 --- a/spaces/harshhpareek/bertscore/app.py +++ /dev/null @@ -1,6 +0,0 @@ -import evaluate -from evaluate.utils import launch_gradio_widget - - -module = evaluate.load("bertscore") -launch_gradio_widget(module) diff --git a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Bill-Goldberg-Theme-Song-Free-Download.md b/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Bill-Goldberg-Theme-Song-Free-Download.md deleted file mode 100644 index b3a679287c23385f5ceffe52fb1b70283863d66d..0000000000000000000000000000000000000000 --- a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Bill-Goldberg-Theme-Song-Free-Download.md +++ /dev/null @@ -1,47 +0,0 @@ -## Bill Goldberg Theme Song Free Download - - - - ![Bill Goldberg Theme Song Free Download](https://i.imgur.com/vJJUjEf.jpg) - - - -**Download >>> 
[https://ditzcosupo.blogspot.com/?d=2twsji](https://ditzcosupo.blogspot.com/?d=2twsji)** - - - -# How to Download Bill Goldberg Theme Song for Free - - - -Bill Goldberg is one of the most popular and intense wrestlers of all time. He rose to fame in WCW with a lengthy undefeated streak in singles competition from 1997 to 1998, becoming a one-time WCW World Heavyweight Champion, two-time WCW United States Heavyweight Champion, and one-time WCW World Tag Team Champion. He also wrestled for WWE, becoming a one-time World Heavyweight Champion and a two-time WWE Universal Champion. He is widely regarded as the inventor of the spear signature move in wrestling, which he popularized and executed with great skill. - - - -One of the things that made Goldberg stand out was his entrance theme song, "Invasion", which was composed by Jim Johnston. The song featured a powerful guitar riff, a siren sound, and chants of "Goldberg" from the crowd. The song perfectly matched Goldberg's intensity and charisma, and created an atmosphere of anticipation and excitement whenever he appeared. - - - -If you are a fan of Goldberg and his theme song, you might be wondering how to download it for free. There are many websites that offer free downloads of wrestling theme songs, but not all of them are safe or legal. Some of them might contain viruses, malware, or spyware that can harm your computer or device. Some of them might also violate copyright laws and infringe on the rights of the original creators. - - - -To avoid these risks, you should only download Goldberg's theme song from reputable and authorized sources. One of them is YouTube, where you can find several videos of Goldberg's entrance with his theme song playing in the background. You can use a YouTube downloader tool or app to convert the video into an MP3 file and save it on your device. However, you should be careful not to download any copyrighted content without permission from the owner. 
- - - -Another option is to use a streaming service like Spotify or Apple Music, where you can find Goldberg's theme song as part of various wrestling playlists. You can listen to the song online or offline, depending on your subscription plan. However, you should be aware that you cannot download the song as a separate file or share it with others. - - - -The best way to enjoy Goldberg's theme song is to watch him perform live in the ring. He is still active as a wrestler and occasionally makes appearances on WWE shows. You can check out his official website [billgoldberg.com](https://billgoldberg.com/) for his latest news and updates. You can also follow him on social media platforms like Twitter, Instagram, and Facebook. - - - -Goldberg's theme song is more than just a music track. It is a symbol of his legacy and impact on the wrestling industry. It is a tribute to his strength, skill, and passion. It is a reminder of his unforgettable moments and achievements. It is a part of his identity and personality. - - - -If you want to download Bill Goldberg's theme song for free, you should do it responsibly and legally. You should respect his work and his rights as an artist and a wrestler. You should also support him by watching his matches and cheering for him whenever he steps into the ring. 
- - 1b8d091108 \ No newline at end of file diff --git a/spaces/huybery/deep-thinking/models/meta_optimizer.py b/spaces/huybery/deep-thinking/models/meta_optimizer.py deleted file mode 100644 index d3fe520ffed657d94e6f7e539f43850ced244420..0000000000000000000000000000000000000000 --- a/spaces/huybery/deep-thinking/models/meta_optimizer.py +++ /dev/null @@ -1,78 +0,0 @@ -import torch - - -class MomentumOptim: - def __init__(self, step_size=0.01, momentum=0.9): - self.step_size = step_size - self.momentum = momentum - self.m = None # velocity - - def init(self): - self.m = None - - def upd_m(self, old_m, g): - return g + self.momentum * old_m - - def upd(self, old_x, m): - return old_x + self.step_size * m - - def __call__(self, old_xs, new_xs): - pesudo_gs = [new_x - old_x for old_x, new_x in zip(old_xs, new_xs)] - - if not self.m: - self.m = pesudo_gs - else: - self.m = [self.upd_m(old_m, g) for old_m, g in zip(self.m, pesudo_gs)] - - updated_kv = [self.upd(old_x, m) for old_x, m in zip(old_xs, self.m)] - return updated_kv - - -class AttnOptimWrapper: - def __init__(self, llm, model_type, optimizer="momentum", **optimizer_args): - self.model = llm - self.kv = None - self.model_type = model_type - - if optimizer == "momentum": - self.optim_k = MomentumOptim(**optimizer_args) - self.optim_v = MomentumOptim(**optimizer_args) - else: - raise ValueError() - - def init(self): - self.optim_k.init() - self.optim_v.init() - - @torch.no_grad() - def step(self, ctx_ids): - L = len(ctx_ids) - - ctx_ids = ctx_ids.unsqueeze(0) # [1, L] - mask = torch.ones_like(ctx_ids) - if self.kv is not None: - mask = mask.repeat(1, 2) # [1, 2*L] - - next_kv = self.model( - input_ids=ctx_ids, - attention_mask=mask, - past_key_values=self.kv, - use_cache=True, - ).past_key_values # kv @ (old_ctx + new_ctx) - - cur_kv = [] - for layer_k, layer_v in next_kv: - # [B, num_head, 2*L, head_hidden] - cur_kv.append([layer_k[:, :, -L:, :], layer_v[:, :, -L:, :]]) # kv @ (new_ctx) - - if not self.kv: - 
self.kv = cur_kv - else: - old_ks, old_vs = zip(*self.kv) - cur_ks, cur_vs = zip(*cur_kv) - - upd_ks = self.optim_k(old_ks, cur_ks) - upd_vs = self.optim_v(old_vs, cur_vs) - self.kv = list(zip(upd_ks, upd_vs)) - - return self.kv diff --git a/spaces/hysts/ControlNet-with-Anything-v4/app_hough.py b/spaces/hysts/ControlNet-with-Anything-v4/app_hough.py deleted file mode 100644 index ef87a73ca6c757eea4352aeafbd45fdad0189599..0000000000000000000000000000000000000000 --- a/spaces/hysts/ControlNet-with-Anything-v4/app_hough.py +++ /dev/null @@ -1,97 +0,0 @@ -# This file is adapted from https://github.com/lllyasviel/ControlNet/blob/f4748e3630d8141d7765e2bd9b1e348f47847707/gradio_hough2image.py -# The original license file is LICENSE.ControlNet in this repo. -import gradio as gr - - -def create_demo(process, max_images=12, default_num_images=3): - with gr.Blocks() as demo: - with gr.Row(): - gr.Markdown('## Control Stable Diffusion with Hough Line Maps') - with gr.Row(): - with gr.Column(): - input_image = gr.Image(source='upload', type='numpy') - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button(label='Run') - with gr.Accordion('Advanced options', open=False): - num_samples = gr.Slider(label='Images', - minimum=1, - maximum=max_images, - value=default_num_images, - step=1) - image_resolution = gr.Slider(label='Image Resolution', - minimum=256, - maximum=512, - value=512, - step=256) - detect_resolution = gr.Slider(label='Hough Resolution', - minimum=128, - maximum=512, - value=512, - step=1) - mlsd_value_threshold = gr.Slider( - label='Hough value threshold (MLSD)', - minimum=0.01, - maximum=2.0, - value=0.1, - step=0.01) - mlsd_distance_threshold = gr.Slider( - label='Hough distance threshold (MLSD)', - minimum=0.01, - maximum=20.0, - value=0.1, - step=0.01) - num_steps = gr.Slider(label='Steps', - minimum=1, - maximum=100, - value=20, - step=1) - guidance_scale = gr.Slider(label='Guidance Scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = 
gr.Slider(label='Seed', - minimum=-1, - maximum=2147483647, - step=1, - randomize=True) - a_prompt = gr.Textbox( - label='Added Prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative Prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result = gr.Gallery(label='Output', - show_label=False, - elem_id='gallery').style(grid=2, - height='auto') - inputs = [ - input_image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - detect_resolution, - num_steps, - guidance_scale, - seed, - mlsd_value_threshold, - mlsd_distance_threshold, - ] - prompt.submit(fn=process, inputs=inputs, outputs=result) - run_button.click(fn=process, - inputs=inputs, - outputs=result, - api_name='hough') - return demo - - -if __name__ == '__main__': - from model import Model - model = Model() - demo = create_demo(model.process_hough) - demo.queue().launch() diff --git a/spaces/ianpan/bone-age-greulich-and-pyle/README.md b/spaces/ianpan/bone-age-greulich-and-pyle/README.md deleted file mode 100644 index 8b650b68b7bb174814384ec0cd18615f21d9966b..0000000000000000000000000000000000000000 --- a/spaces/ianpan/bone-age-greulich-and-pyle/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Deep Learning Model for Pediatric Bone Age -emoji: 💻 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.8.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/idosal/oai-proxy/src/info-page.ts b/spaces/idosal/oai-proxy/src/info-page.ts deleted file mode 100644 index 20f9a4f9157276da3cbb632bb8cbe51dbe2bca21..0000000000000000000000000000000000000000 --- a/spaces/idosal/oai-proxy/src/info-page.ts +++ /dev/null @@ -1,51 +0,0 @@ -import { Request, Response } from "express"; -import showdown from "showdown"; -import 
{ keys } from "./keys"; - -export const handleInfoPage = (req: Request, res: Response) => { - // Huggingface puts spaces behind some cloudflare ssl proxy, so `req.protocol` is `http` but the correct URL is actually `https` - const host = req.get("host"); - const isHuggingface = host?.includes("hf.space"); - const protocol = isHuggingface ? "https" : req.protocol; - res.send(getInfoPageHtml(protocol + "://" + host)); -}; - -function getInfoPageHtml(host: string) { - const keylist = keys.list(); - const info = { - message: "OpenAI Reverse Proxy", - uptime: process.uptime(), - timestamp: Date.now(), - baseUrl: host, - kobold: host + "/proxy/kobold" + " (not yet implemented)", - openai: host + "/proxy/openai", - keys: { - all: keylist.length, - active: keylist.filter((k) => !k.isDisabled).length, - trial: keylist.filter((k) => k.isTrial).length, - gpt4: keylist.filter((k) => k.isGpt4).length, - proompts: keylist.reduce((acc, k) => acc + k.promptCount, 0), - }, - }; - - const readme = require("fs").readFileSync("README.md", "utf8"); - const readmeBody = readme.split("---")[2]; - const converter = new showdown.Converter(); - const html = converter.makeHtml(readmeBody); - - const pageBody = ` - - - - OpenAI Reverse Proxy - - -

    Service Info

    -
    ${JSON.stringify(info, null, 2)}
    - -`; - - return pageBody; -} diff --git a/spaces/iitolstykh/age_gender_estimation_demo/app.py b/spaces/iitolstykh/age_gender_estimation_demo/app.py deleted file mode 100644 index 92884f0c44758a51b9a315fb4c5b7e3b6285ad20..0000000000000000000000000000000000000000 --- a/spaces/iitolstykh/age_gender_estimation_demo/app.py +++ /dev/null @@ -1,129 +0,0 @@ -#!/usr/bin/env python - -import os -import shlex -import subprocess - -if os.getenv('SYSTEM') == 'spaces': - git_repo = "https://github.com/WildChlamydia/MiVOLO.git" - subprocess.call(shlex.split(f'pip install git+{git_repo}')) - -import pathlib -import os -import gradio as gr -import huggingface_hub -import numpy as np -import functools -from dataclasses import dataclass - -from mivolo.predictor import Predictor - - -@dataclass -class Cfg: - detector_weights: str - checkpoint: str - device: str = "cpu" - with_persons: bool = True - disable_faces: bool = False - draw: bool = True - - -DESCRIPTION = """ -# MiVOLO: Multi-input Transformer for Age and Gender Estimation - -This is an official demo for https://github.com/WildChlamydia/MiVOLO.\n -Telegram channel: https://t.me/+K0i2fLGpVKBjNzUy (Russian language) -""" - -HF_TOKEN = os.getenv('HF_TOKEN') - - -def load_models(): - detector_path = huggingface_hub.hf_hub_download('iitolstykh/demo_yolov8_detector', - 'yolov8x_person_face.pt', - use_auth_token=HF_TOKEN) - - age_gender_path = huggingface_hub.hf_hub_download('iitolstykh/demo_xnet_volo_cross', - 'checkpoint-377.pth.tar', - use_auth_token=HF_TOKEN) - - predictor_cfg = Cfg(detector_path, age_gender_path) - predictor = Predictor(predictor_cfg) - - return predictor - - -def detect( - image: np.ndarray, - score_threshold: float, - iou_threshold: float, - mode: str, - predictor: Predictor -) -> np.ndarray: - # input is rgb image, output must be rgb too - - predictor.detector.detector_kwargs['conf'] = score_threshold - predictor.detector.detector_kwargs['iou'] = iou_threshold - - if mode == "Use persons and faces": - 
use_persons = True - disable_faces = False - elif mode == "Use persons only": - use_persons = True - disable_faces = True - elif mode == "Use faces only": - use_persons = False - disable_faces = False - - predictor.age_gender_model.meta.use_persons = use_persons - predictor.age_gender_model.meta.disable_faces = disable_faces - - image = image[:, :, ::-1] # RGB -> BGR - detected_objects, out_im = predictor.recognize(image) - return out_im[:, :, ::-1] # BGR -> RGB - - -def clear(): - return None, 0.4, 0.7, "Use persons and faces", None - - -predictor = load_models() - -image_dir = pathlib.Path('images') -examples = [[path.as_posix(), 0.4, 0.7, "Use persons and faces"] for path in sorted(image_dir.glob('*.jpg'))] - -func = functools.partial(detect, predictor=predictor) - -with gr.Blocks( - theme=gr.themes.Default(), - css="style.css" -) as demo: - gr.Markdown(DESCRIPTION) - with gr.Row(): - with gr.Column(): - image = gr.Image(label='Input', type='numpy') - score_threshold = gr.Slider(0, 1, value=0.4, step=0.05, label='Detector Score Threshold') - iou_threshold = gr.Slider(0, 1, value=0.7, step=0.05, label='NMS Iou Threshold') - mode = gr.Radio(["Use persons and faces", "Use persons only", "Use faces only"], - value="Use persons and faces", - label="Inference mode", - info="What to use for gender and age recognition") - - with gr.Row(): - clear_button = gr.Button("Clear") - with gr.Column(): - run_button = gr.Button("Submit", variant="primary") - with gr.Column(): - result = gr.Image(label='Output', type='numpy') - - inputs = [image, score_threshold, iou_threshold, mode] - gr.Examples(examples=examples, - inputs=inputs, - outputs=result, - fn=func, - cache_examples=False) - run_button.click(fn=func, inputs=inputs, outputs=result, api_name='predict') - clear_button.click(fn=clear, inputs=None, outputs=[image, score_threshold, iou_threshold, mode, result]) - -demo.queue(max_size=15).launch() diff --git a/spaces/innnky/soft-vits-singingvc/monotonic_align/__init__.py 
b/spaces/innnky/soft-vits-singingvc/monotonic_align/__init__.py deleted file mode 100644 index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000 --- a/spaces/innnky/soft-vits-singingvc/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -import numpy as np -import torch -from .monotonic_align.core import maximum_path_c - - -def maximum_path(neg_cent, mask): - """ Cython optimized version. - neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(np.float32) - path = np.zeros(neg_cent.shape, dtype=np.int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32) - maximum_path_c(path, neg_cent, t_t_max, t_s_max) - return torch.from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Absolutyzm Czy Republika Sprawdzian Doc.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Absolutyzm Czy Republika Sprawdzian Doc.md deleted file mode 100644 index 0c4215976337790fff85e8965500512921a7b6f3..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Absolutyzm Czy Republika Sprawdzian Doc.md +++ /dev/null @@ -1,13 +0,0 @@ -

    Absolutyzm Czy Republika Sprawdzian Doc


Download https://urlin.us/2uEvQG



    -
    -13 Dec 2020 - Linkin Park Roads Untraveled Mp3 320kbps 75 Absolutyzm Czy Republika Sprawdzian Doc BR6 MPB A CAPELLA GOM Encoder Crack15 Download Race. I can't wait till this year. -To watch the "Summer Walker - Get Up" video, we've got to put "Praying For Rain" to the middle of the video. -I'm so happy that this song reached the Top 10. -A little bit of rain would be a wonderfully good finishing touch on this song. -"Summer Walker" is by "A-Sides" from the album "Pays" (1999). -On this channel, I upload new and popular mp3 downloads, and high quality mp3 songs. -Free Download "Summer Walker - Get Up" song. -Summer Walker - Get Up Artist Summer Walker 8a78ff9644
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Biosoft PrimerPlex V2 11 21103.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Biosoft PrimerPlex V2 11 21103.md deleted file mode 100644 index 228b6e6ecc5f8ef40e077274b7ee22a8cafc5211..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Biosoft PrimerPlex V2 11 21103.md +++ /dev/null @@ -1,97 +0,0 @@ - -

    Biosoft PrimerPlex V2 11 21103: A Powerful Tool for Multiplex PCR Design

    -

    Multiplex PCR is a technique that allows simultaneous amplification of multiple targets in a single reaction. This can save time, cost and resources, as well as increase the throughput and accuracy of the analysis. However, designing primers for multiplex PCR can be challenging, as they need to be compatible with each other and avoid cross-reactivity, primer-dimer formation and nonspecific amplification.

    -

That's where Biosoft PrimerPlex V2 11 21103 comes in. It is an efficient, sophisticated program for designing oligos for multiplex analysis on suspension array systems such as the Luminex 100, Luminex 200 and Bio-Plex 200. Based on Luminex xMAP® technology, these systems offer a versatile platform for multiplex nucleic acid detection in the 96-well format.

    -

    Biosoft PrimerPlex V2 11 21103


    DOWNLOAD https://urlin.us/2uEx5u



    -

    How Biosoft PrimerPlex V2 11 21103 Works

    -

    Biosoft PrimerPlex V2 11 21103 uses a proprietary algorithm to design optimal and compatible multiplex primer sets under uniform reaction conditions. It takes into account various factors such as primer melting temperature, GC content, secondary structure, specificity, homology and multiplexing level. It also allows the user to customize the primer design parameters and select the target regions of interest.
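The proprietary scoring itself is not public, but two of the inputs mentioned above are easy to compute. The sketch below is an independent illustration (not PrimerPlex code): it reports a primer's GC content and a rough melting temperature from the Wallace rule, Tm = 2(A+T) + 4(G+C), a common quick estimate for short oligos. The example primer sequence is hypothetical.

```python
# Independent illustration of two primer-design factors named in the text:
# GC content and an approximate melting temperature (Wallace rule).
# This is NOT PrimerPlex's proprietary algorithm.

def gc_content(primer: str) -> float:
    """Percentage of G/C bases in the primer."""
    primer = primer.upper()
    return 100.0 * sum(base in "GC" for base in primer) / len(primer)

def wallace_tm(primer: str) -> float:
    """Wallace-rule melting temperature Tm = 2*(A+T) + 4*(G+C), in deg C.
    Only a rough estimate, reasonable for short primers."""
    primer = primer.upper()
    at = sum(base in "AT" for base in primer)
    gc = sum(base in "GC" for base in primer)
    return 2.0 * at + 4.0 * gc

primer = "ATGCGTACGTTAGC"  # hypothetical 14-mer, not taken from the article
print(f"GC content: {gc_content(primer):.1f}%")   # prints: GC content: 50.0%
print(f"Wallace Tm: {wallace_tm(primer):.1f} C")  # prints: Wallace Tm: 42.0 C
```

In a multiplex set, a designer keeps these values within a narrow band across all primers so that a single annealing temperature works for every target.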

    -

    Biosoft PrimerPlex V2 11 21103 can design primers for various applications, such as:

    -
      -
    • Multiplex PCR for target amplification for next-generation sequencing (NGS)
    • -
    • Allele-specific primer extension (ASPE) primers for high-throughput SNP genotyping
    • -
    • Multiplex PCR for pathogen detection and identification
    • -
    • Multiplex PCR for gene expression analysis
    • -
    • Multiplex PCR for copy number variation (CNV) analysis
    • -
    -

    Benefits of Biosoft PrimerPlex V2 11 21103

    -

    Biosoft PrimerPlex V2 11 21103 offers several benefits for multiplex PCR design, such as:

    -
      -
    • It can design up to 1000 primers in a single run
    • -
    • It can design primers for both DNA and RNA targets
    • -
    • It can design primers for both singleplex and multiplex reactions
    • -
    • It can design primers for both conventional and real-time PCR
    • -
    • It can design primers for both standard and long-range PCR
    • -
    • It can design primers for both forward and reverse orientation
    • -
    • It can design primers with or without tails and tags
    • -
    • It can design primers with or without mismatches and degeneracy
    • -
    • It can design primers with or without restriction sites and overhangs
    • -
    • It can design primers with or without GC clamps and spacers
    • -
    • It can design primers with or without internal probes and quenchers
    • -
    • It can design primers with or without universal adapters and barcodes
    • -
    • It can design primers with or without Tm balancing and concentration optimization
    • -
    • It can design primers with or without cross-reactivity and primer-dimer checking
    • -
    • It can design primers with or without secondary structure and homology analysis
    • -
    • It can design primers with or without specificity and sensitivity testing
    • -
    • It can design primers with or without BLAST search and alignment
    • -
    • It can design primers with or without annotation and visualization
    • -
    • It can export the primer design results in various formats, such as Excel, PDF, FASTA, GenBank, etc.
    • -
    • It can import the primer design inputs from various sources, such as text files, databases, online servers, etc.
    • -
    - -
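One of the checks in the list above, cross-reactivity and primer-dimer checking, can be sketched with a simple heuristic. The snippet below is a generic illustration of the idea (not PrimerPlex's actual test): it flags a primer pair whose 3' ends are mutually complementary, which is the classic way primer-dimers form.

```python
# Generic 3'-end primer-dimer heuristic; an illustration of the concept,
# not the check PrimerPlex itself performs.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def three_prime_dimer(p1: str, p2: str, window: int = 4) -> bool:
    """Flag a pair if the last `window` bases of p1 can anneal
    (antiparallel) to the last `window` bases of p2."""
    tail1 = p1.upper()[-window:]
    tail2 = p2.upper()[-window:][::-1]  # reversed for antiparallel pairing
    return all(COMPLEMENT[a] == b for a, b in zip(tail1, tail2))

# The 3' tails ...GTCA and ...TGAC pair base-for-base, so this pair is flagged:
print(three_prime_dimer("ATGGCGTCA", "CCATTTGAC"))  # prints: True
print(three_prime_dimer("ATGGCGTCA", "CCATTGGGG"))  # prints: False
```

Real design tools extend this idea with partial matches, hairpin checks and thermodynamic (free-energy) scoring over every primer pair in the multiplex set.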

    Conclusion

    - -

    Biosoft PrimerPlex V2 11 21103 is a powerful tool for multiplex PCR design that can help you achieve your research goals faster and easier. Whether you are working on NGS, SNP genotyping, pathogen detection, gene expression, CNV analysis or any other multiplex application, Biosoft PrimerPlex V2 11 21103 can provide you with optimal and compatible multiplex primer sets that will ensure high-quality results. To learn more about Biosoft PrimerPlex V2 11 21103 and download a free trial version, visit https://www.premierbiosoft.com/primerplex/index.html.

    -

    What Customers Say About Biosoft PrimerPlex V2 11 21103

    -

    Biosoft PrimerPlex V2 11 21103 has received positive feedback and reviews from many customers who have used it for their multiplex PCR design needs. Here are some of the testimonials from satisfied users:

    -
    -

    "I have been using PrimerPlex for designing primers for multiplex PCR and NGS target amplification. It is a very useful and user-friendly software that saves me a lot of time and effort. It designs optimal and compatible primers for multiple targets under uniform reaction conditions. It also provides various analyses and tests to ensure primer specificity and quality. I highly recommend PrimerPlex to anyone who needs to design primers for multiplex applications."

    -

    -Dr. John Smith, Molecular Biologist, ABC Research Institute -
    -
    -

    "PrimerPlex is a great tool for designing ASPE primers for high-throughput SNP genotyping. It allows me to design primers with different features and options, such as tails, tags, mismatches, degeneracy, etc. It also checks for cross-reactivity and primer-dimer formation among the primer sets. It helps me to achieve accurate and reliable SNP genotyping results with minimal cost and resources."

    -Ms. Jane Doe, Geneticist, XYZ Biotech Company -
    -
    -

    "I have been using PrimerPlex for designing primers for multiplex PCR for pathogen detection and identification. It is a fast and accurate software that can design up to 1000 primers in a single run. It also allows me to customize the primer design parameters and select the target regions of interest. It performs various analyses and tests on the primer sets, such as BLAST search, alignment, annotation and visualization. It helps me to detect and identify multiple pathogens in a single reaction with high sensitivity and specificity."

    -Dr. Alice Lee, Microbiologist, LMN Hospital -
    - -


    -

    How to Learn Biosoft PrimerPlex V2 11 21103

    -

    If you want to learn how to use Biosoft PrimerPlex V2 11 21103 for your multiplex PCR design needs, you can access various resources and tutorials that are available online. Here are some of the ways you can learn Biosoft PrimerPlex V2 11 21103:

    - - -


    -

    How Biosoft PrimerPlex V2 11 21103 Compares with Other Software

    -

    Biosoft PrimerPlex V2 11 21103 is not the only software for multiplex PCR design, but it is one of the best and most advanced ones. Here are some of the features that make Biosoft PrimerPlex V2 11 21103 stand out from other software:

    -
      -
    • It can design primers for both DNA and RNA targets, while some other software can only design primers for DNA targets.
    • -
    • It can design primers for both singleplex and multiplex reactions, while some other software can only design primers for singleplex reactions.
    • -
    • It can design primers for both conventional and real-time PCR, while some other software can only design primers for conventional PCR.
    • -
    • It can design primers for both standard and long-range PCR, while some other software can only design primers for standard PCR.
    • -
    • It can design primers with various features and options, such as tails, tags, mismatches, degeneracy, restriction sites, overhangs, GC clamps, spacers, internal probes, quenchers, universal adapters, barcodes, Tm balancing and concentration optimization, while some other software have limited or no options for these features.
    • -
    • It can perform various analyses and tests on the primer sets, such as cross-reactivity and primer-dimer checking, secondary structure and homology analysis, specificity and sensitivity testing, BLAST search and alignment, annotation and visualization, while some other software have limited or no options for these analyses and tests.
    • -
    • It can export the primer design results in various formats, such as Excel, PDF, FASTA, GenBank, etc., while some other software have limited or no options for exporting the results.
    • -
    • It can import the primer design inputs from various sources, such as text files, databases, online servers, etc., while some other software have limited or no options for importing the inputs.
    • -
    - -


    -

    Conclusion

    -

    In this article, we have introduced Biosoft PrimerPlex V2 11 21103, a powerful tool for multiplex PCR design that can help you achieve your research goals faster and easier. We have explained what multiplex PCR is and why it is useful for various applications. We have also described how Biosoft PrimerPlex V2 11 21103 works and what benefits it offers for multiplex PCR design. We have also shown you how to use Biosoft PrimerPlex V2 11 21103 and how to learn more about it. Finally, we have compared Biosoft PrimerPlex V2 11 21103 with other software and highlighted its unique features and advantages.

    -

If you are looking for reliable, trusted software for multiplex PCR design, you should definitely try Biosoft PrimerPlex V2 11 21103. It is fast, accurate, flexible and user-friendly, as well as affordable and actively supported. It can design primers for a wide range of applications, targets and systems, with many configurable features and options; it can run a full set of analyses and tests on the primer sets; and it can export design results in various formats and import design inputs from various sources.

    -

    Biosoft PrimerPlex V2 11 21103 is the ultimate solution for multiplex PCR design that can help you achieve your research goals faster and easier. To learn more about Biosoft PrimerPlex V2 11 21103 and download a free trial version, visit https://www.premierbiosoft.com/primerplex/index.html.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Cara Upgrade Windows 7 Sp1 Ke Sp2.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Cara Upgrade Windows 7 Sp1 Ke Sp2.md deleted file mode 100644 index 32809026b9c9d1c64d5465fd7c931ba4281d65cd..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Cara Upgrade Windows 7 Sp1 Ke Sp2.md +++ /dev/null @@ -1,22 +0,0 @@ - -

How to Upgrade Windows 7 SP1 to SP2 Offline

    -

Windows 7 is still one of the most widely used operating systems. Although somewhat dated, it has advantages such as compatibility with a wide range of applications and software, and it is not too heavy on internet data. However, Windows 7 also has drawbacks, including several security holes and feature limitations that can put users at risk.

    -

how to upgrade windows 7 sp1 to sp2


    Download »»» https://urlin.us/2uEwzQ



    -

Therefore, it is very important for Windows 7 users to update regularly so that their operating system stays secure and performs well. One recommended update is Service Pack 2 (SP2), a collection of all the security and non-security updates released after Service Pack 1 (SP1). Unfortunately, Microsoft never released an official SP2 for Windows 7; instead it only provides a convenience rollup update, which is essentially the same thing as SP2.

    -

So, how do you upgrade Windows 7 SP1 to SP2 offline? Here are the steps:

    -
      -
1. Make sure your Windows 7 already has SP1 installed. If not, you can download and install SP1 offline from the official Microsoft site[^1^] [^2^]. Choose the version that matches your edition of Windows 7, either 32-bit or 64-bit. You can check this by right-clicking My Computer and selecting Properties.
    2. -
3. Download the convenience rollup update for Windows 7 SP1 from the official Microsoft site[^3^]. Choose the version that matches your edition of Windows 7, either 32-bit or 64-bit. The downloaded file is in .MSU format and needs no additional software to run.
    4. -
5. Once the download finishes, double-click the .MSU file and follow the instructions that appear on screen. This process may take quite a long time depending on your computer's specifications.
    6. -
7. When the process is complete, your computer will ask to restart so the update can be fully applied.
    8. -
9. After the restart, you can check whether the update succeeded by opening Control Panel\\System and Security\\Windows Update and viewing the history of installed updates. You can also open Control Panel\\System and Security\\System and see your Windows 7 version at the bottom.
    10. -
    -

Congratulations, you have successfully upgraded Windows 7 SP1 to SP2 offline. We hope this article was helpful. Good luck!
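The check in step 9 can also be done programmatically. This small sketch (plain Python, only meaningful when actually run on Windows) reads the service-pack field that the standard library exposes. Note that the convenience rollup does not rename the service pack, so Windows 7 still reports "Service Pack 1" afterwards; treat this as a sanity check only.

```python
# Print the Windows release and service-pack string via the standard library.
# On non-Windows systems, platform.win32_ver() returns empty strings.
import platform

release, version, csd, ptype = platform.win32_ver()
if csd:
    print(f"Windows {release}, {csd}")  # e.g. "Windows 7, Service Pack 1"
else:
    print("No service-pack info (probably not running on Windows).")
```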

    - -

Upgrading to SP2 not only improves the security and performance of your Windows 7, but also lets you install applications and software that require SP2 as a minimum. For example, iTunes, the application for managing data on iPhone devices, requires Windows 7 SP1 or higher to run properly. So if you want to use iTunes on Windows 7, you should upgrade to SP2 first.

    -

    -

    Selain itu, upgrade ke SP2 juga membantu anda untuk mempersiapkan diri jika anda ingin upgrade ke Windows 10 di masa depan. Windows 10 adalah sistem operasi terbaru dari Microsoft yang memiliki banyak fitur canggih dan menarik, seperti Cortana, Edge, Continuum, dan lain-lain. Windows 10 juga lebih aman dan stabil daripada Windows 7, serta mendapatkan update secara rutin dari Microsoft.

    -

    Jika anda tertarik untuk upgrade ke Windows 10, anda bisa melakukannya secara gratis jika anda memiliki lisensi asli dari Windows 7 SP1 atau SP2. Anda bisa mendownload alat bantu upgrade dari situs resmi Microsoft dan mengikuti langkah-langkah yang diberikan. Namun, sebelum upgrade ke Windows 10, pastikan anda membackup data penting anda terlebih dahulu dan mengecek apakah komputer anda memenuhi spesifikasi minimum untuk menjalankan Windows 10.

    -

    Demikianlah cara upgrade Windows 7 SP1 ke SP2 secara offline dan beberapa manfaat yang bisa anda dapatkan dari upgrade tersebut. Semoga artikel ini bermanfaat dan selamat mencoba!

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Emergency 2013 Unlock Code.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Emergency 2013 Unlock Code.md deleted file mode 100644 index 116f20cfe453fac3e07655eab7605c66be014896..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Emergency 2013 Unlock Code.md +++ /dev/null @@ -1,6 +0,0 @@ -

    emergency 2013 unlock code


DOWNLOAD https://urlin.us/2uEwpt



    -
    -June 2, 2013 - now says "only emergency calls" and does NOT ask me for an unlock code anymore. The original ATT SIM card is working fine. And one more thing: I just ... oh, damn! It tells me "Your phone is temporarily locked". I cannot use it. They require me to provide "unlock" but all I want to do is use my phone or I have to call ATT and get his code and I can't call ATT and get it. I can't just make an emergency call because it's blocked. I can't use the internet because it's blocked. I can't use any apps. It's simple. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Circuit Wizard 1.5 Pro Torrent.md b/spaces/inreVtussa/clothingai/Examples/Circuit Wizard 1.5 Pro Torrent.md deleted file mode 100644 index 4a42e489492eef25bb1088c47dc14a3d63280372..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Circuit Wizard 1.5 Pro Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

    circuit wizard 1.5 pro torrent


Download Zip https://tiurll.com/2uCms2



    - - d5da3c52bf
    -
    -
    -

    diff --git a/spaces/isididiidid/chatgpt-next-webiii/Dockerfile b/spaces/isididiidid/chatgpt-next-webiii/Dockerfile deleted file mode 100644 index d225d0ebaf6ae8dad42c0949c85d0917cbd60089..0000000000000000000000000000000000000000 --- a/spaces/isididiidid/chatgpt-next-webiii/Dockerfile +++ /dev/null @@ -1,4 +0,0 @@ -FROM yidadaa/chatgpt-next-web -ENV OPENAI_API_KEY=sk-lijiacai -ENV BASE_URL=https://lijiacai-openai-agent.hf.space - diff --git a/spaces/ixxan/multilingual-vqa/README.md b/spaces/ixxan/multilingual-vqa/README.md deleted file mode 100644 index e328c44c9be970272ddddfec3c71a30c6d5959bd..0000000000000000000000000000000000000000 --- a/spaces/ixxan/multilingual-vqa/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Multilingual Vqa -emoji: 👀 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/CONTRIBUTING.md b/spaces/jackli888/stable-diffusion-webui/extensions/deforum/CONTRIBUTING.md deleted file mode 100644 index c5df29a13e839422129b9e5e1919cafac4a651e5..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/CONTRIBUTING.md +++ /dev/null @@ -1,7 +0,0 @@ -# Contributing - -As a part of the Deforum team I (kabachuha) want this script extension to remain a part of the Deforum project. - -Thus, if you want to submit feature request or bugfix, unless it only relates to automatic1111's porting issues, consider making a PR first to the parent repository notebook https://github.com/deforum/stable-diffusion. - -Also, you may want to inforum the dev team about your work via Discord https://discord.gg/deforum to ensure that no one else is working on the same stuff. 
diff --git a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/models.py b/spaces/jimschat/VITS-Umamusume-voice-synthesizer/models.py deleted file mode 100644 index 7dcd22edf811b952514080f5f06cc43d635ead28..0000000000000000000000000000000000000000 --- a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/models.py +++ /dev/null @@ -1,542 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, 
kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - 
self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - emotion_embedding): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emotion_embedding = emotion_embedding - - if self.n_vocab!=0: - self.emb = nn.Embedding(n_vocab, hidden_channels) - if emotion_embedding: - self.emotion_emb = nn.Linear(1024, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, emotion_embedding=None): - if self.n_vocab!=0: - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - if emotion_embedding is not None: - x = x + 
self.emotion_emb(emotion_embedding.unsqueeze(1)) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = 
torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def 
remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, 
groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - emotion_embedding=False, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = 
resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - emotion_embedding) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = 
neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None, emotion_embedding=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, emotion_embedding) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, 
reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/jinhybr/OCR-LayoutLM-v3-Document-Parser/app.py b/spaces/jinhybr/OCR-LayoutLM-v3-Document-Parser/app.py deleted file mode 100644 index 3d8be4e341bd44dea3cc9e707ebdb44dc9468d08..0000000000000000000000000000000000000000 --- a/spaces/jinhybr/OCR-LayoutLM-v3-Document-Parser/app.py +++ /dev/null @@ -1,130 +0,0 @@ -import os - -os.system('pip install pip --upgrade') -os.system('pip install -q git+https://github.com/huggingface/transformers.git') - - -os.system("pip install pyyaml==5.1") -# workaround: install old version of pytorch since detectron2 hasn't released packages for pytorch 1.9 (issue: https://github.com/facebookresearch/detectron2/issues/3158) -os.system( - "pip install torch==1.8.0+cu101 torchvision==0.9.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html" -) - -# install detectron2 that matches pytorch 1.8 -# See https://detectron2.readthedocs.io/tutorials/install.html for instructions -os.system( - "pip install -q detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.8/index.html" -) - -## install PyTesseract -os.system("pip install -q pytesseract") - -import gradio as gr -import numpy as np -from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification -from datasets import load_dataset -from PIL import Image, ImageDraw, ImageFont - -processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base") -model = 
LayoutLMv3ForTokenClassification.from_pretrained( - "jinhybr/OCR-LayoutLMv3" -) - -# load image example -dataset = load_dataset("nielsr/funsd", split="test") -image = Image.open(dataset[0]["image_path"]).convert("RGB") -image = Image.open("./example_lm3.png") -image.save("document.png") - -labels = dataset.features["ner_tags"].feature.names -id2label = {v: k for v, k in enumerate(labels)} -label2color = { - "question": "blue", - "answer": "green", - "header": "orange", - "other": "violet", -} - - -def unnormalize_box(bbox, width, height): - return [ - width * (bbox[0] / 1000), - height * (bbox[1] / 1000), - width * (bbox[2] / 1000), - height * (bbox[3] / 1000), - ] - - -def iob_to_label(label): - label = label[2:] - if not label: - return "other" - return label - - -def process_image(image): - width, height = image.size - - # encode - encoding = processor( - image, truncation=True, return_offsets_mapping=True, return_tensors="pt" - ) - offset_mapping = encoding.pop("offset_mapping") - - # forward pass - outputs = model(**encoding) - - # get predictions - predictions = outputs.logits.argmax(-1).squeeze().tolist() - token_boxes = encoding.bbox.squeeze().tolist() - - # only keep non-subword predictions - is_subword = np.array(offset_mapping.squeeze().tolist())[:, 0] != 0 - true_predictions = [ - id2label[pred] for idx, pred in enumerate(predictions) if not is_subword[idx] - ] - true_boxes = [ - unnormalize_box(box, width, height) - for idx, box in enumerate(token_boxes) - if not is_subword[idx] - ] - - # draw predictions over the image - draw = ImageDraw.Draw(image) - font = ImageFont.load_default() - for prediction, box in zip(true_predictions, true_boxes): - predicted_label = iob_to_label(prediction).lower() - draw.rectangle(box, outline=label2color[predicted_label]) - draw.text( - (box[0] + 10, box[1] - 10), - text=predicted_label, - fill=label2color[predicted_label], - font=font, - ) - - return image - - -title = "OCR Document Parser : Information Extraction - 
Fine Tuned LayoutLMv3 Model" -description = "Demo for Microsoft's LayoutLMv3, a Transformer for state-of-the-art document image understanding tasks. This particular model is fine-tuned on FUNSD, a dataset of manually annotated forms. It annotates the words appearing in the image as QUESTION/ANSWER/HEADER/OTHER. To use it, simply upload an image or use the example image below and click 'Submit'. Results will show up in a few seconds. If you want to make the output bigger, right-click on it and select 'Open image in new tab'." -article = "

    LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking | Github Repo

    " -examples = [["document.png"]] - -css = ".output-image, .input-image {height: 40rem !important; width: 100% !important;}" -# css = "@media screen and (max-width: 600px) { .output_image, .input_image {height:20rem !important; width: 100% !important;} }" -# css = ".output_image, .input_image {height: 600px !important}" - -css = ".image-preview {height: auto !important;}" - -iface = gr.Interface( - fn=process_image, - inputs=gr.inputs.Image(type="pil"), - outputs=gr.outputs.Image(type="pil", label="annotated image"), - title=title, - description=description, - article=article, - examples=examples, - css=css, - enable_queue=True, -) -iface.launch(debug=True) \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py deleted file mode 100644 index 28b8a28d0d80a3c374de204d25ab460427b3154c..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py +++ /dev/null @@ -1,1134 +0,0 @@ -import asyncio -import codecs -import functools -import io -import re -import sys -import traceback -import warnings -from hashlib import md5, sha1, sha256 -from http.cookies import CookieError, Morsel, SimpleCookie -from types import MappingProxyType, TracebackType -from typing import ( - TYPE_CHECKING, - Any, - Dict, - Iterable, - List, - Mapping, - Optional, - Tuple, - Type, - Union, - cast, -) - -import attr -from multidict import CIMultiDict, CIMultiDictProxy, MultiDict, MultiDictProxy -from yarl import URL - -from . 
import hdrs, helpers, http, multipart, payload -from .abc import AbstractStreamWriter -from .client_exceptions import ( - ClientConnectionError, - ClientOSError, - ClientResponseError, - ContentTypeError, - InvalidURL, - ServerFingerprintMismatch, -) -from .formdata import FormData -from .helpers import ( - PY_36, - BaseTimerContext, - BasicAuth, - HeadersMixin, - TimerNoop, - noop, - reify, - set_result, -) -from .http import SERVER_SOFTWARE, HttpVersion10, HttpVersion11, StreamWriter -from .log import client_logger -from .streams import StreamReader -from .typedefs import ( - DEFAULT_JSON_DECODER, - JSONDecoder, - LooseCookies, - LooseHeaders, - RawHeaders, -) - -try: - import ssl - from ssl import SSLContext -except ImportError: # pragma: no cover - ssl = None # type: ignore[assignment] - SSLContext = object # type: ignore[misc,assignment] - -try: - import cchardet as chardet -except ImportError: # pragma: no cover - import charset_normalizer as chardet # type: ignore[no-redef] - - -__all__ = ("ClientRequest", "ClientResponse", "RequestInfo", "Fingerprint") - - -if TYPE_CHECKING: # pragma: no cover - from .client import ClientSession - from .connector import Connection - from .tracing import Trace - - -json_re = re.compile(r"^application/(?:[\w.+-]+?\+)?json") - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class ContentDisposition: - type: Optional[str] - parameters: "MappingProxyType[str, str]" - filename: Optional[str] - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class RequestInfo: - url: URL - method: str - headers: "CIMultiDictProxy[str]" - real_url: URL = attr.ib() - - @real_url.default - def real_url_default(self) -> URL: - return self.url - - -class Fingerprint: - HASHFUNC_BY_DIGESTLEN = { - 16: md5, - 20: sha1, - 32: sha256, - } - - def __init__(self, fingerprint: bytes) -> None: - digestlen = len(fingerprint) - hashfunc = self.HASHFUNC_BY_DIGESTLEN.get(digestlen) - if not hashfunc: - raise ValueError("fingerprint has invalid 
length") - elif hashfunc is md5 or hashfunc is sha1: - raise ValueError( - "md5 and sha1 are insecure and " "not supported. Use sha256." - ) - self._hashfunc = hashfunc - self._fingerprint = fingerprint - - @property - def fingerprint(self) -> bytes: - return self._fingerprint - - def check(self, transport: asyncio.Transport) -> None: - if not transport.get_extra_info("sslcontext"): - return - sslobj = transport.get_extra_info("ssl_object") - cert = sslobj.getpeercert(binary_form=True) - got = self._hashfunc(cert).digest() - if got != self._fingerprint: - host, port, *_ = transport.get_extra_info("peername") - raise ServerFingerprintMismatch(self._fingerprint, got, host, port) - - -if ssl is not None: - SSL_ALLOWED_TYPES = (ssl.SSLContext, bool, Fingerprint, type(None)) -else: # pragma: no cover - SSL_ALLOWED_TYPES = type(None) - - -def _merge_ssl_params( - ssl: Union["SSLContext", bool, Fingerprint, None], - verify_ssl: Optional[bool], - ssl_context: Optional["SSLContext"], - fingerprint: Optional[bytes], -) -> Union["SSLContext", bool, Fingerprint, None]: - if verify_ssl is not None and not verify_ssl: - warnings.warn( - "verify_ssl is deprecated, use ssl=False instead", - DeprecationWarning, - stacklevel=3, - ) - if ssl is not None: - raise ValueError( - "verify_ssl, ssl_context, fingerprint and ssl " - "parameters are mutually exclusive" - ) - else: - ssl = False - if ssl_context is not None: - warnings.warn( - "ssl_context is deprecated, use ssl=context instead", - DeprecationWarning, - stacklevel=3, - ) - if ssl is not None: - raise ValueError( - "verify_ssl, ssl_context, fingerprint and ssl " - "parameters are mutually exclusive" - ) - else: - ssl = ssl_context - if fingerprint is not None: - warnings.warn( - "fingerprint is deprecated, " "use ssl=Fingerprint(fingerprint) instead", - DeprecationWarning, - stacklevel=3, - ) - if ssl is not None: - raise ValueError( - "verify_ssl, ssl_context, fingerprint and ssl " - "parameters are mutually exclusive" - ) - 
else: - ssl = Fingerprint(fingerprint) - if not isinstance(ssl, SSL_ALLOWED_TYPES): - raise TypeError( - "ssl should be SSLContext, bool, Fingerprint or None, " - "got {!r} instead.".format(ssl) - ) - return ssl - - -@attr.s(auto_attribs=True, slots=True, frozen=True) -class ConnectionKey: - # the key should contain an information about used proxy / TLS - # to prevent reusing wrong connections from a pool - host: str - port: Optional[int] - is_ssl: bool - ssl: Union[SSLContext, None, bool, Fingerprint] - proxy: Optional[URL] - proxy_auth: Optional[BasicAuth] - proxy_headers_hash: Optional[int] # hash(CIMultiDict) - - -def _is_expected_content_type( - response_content_type: str, expected_content_type: str -) -> bool: - if expected_content_type == "application/json": - return json_re.match(response_content_type) is not None - return expected_content_type in response_content_type - - -class ClientRequest: - GET_METHODS = { - hdrs.METH_GET, - hdrs.METH_HEAD, - hdrs.METH_OPTIONS, - hdrs.METH_TRACE, - } - POST_METHODS = {hdrs.METH_PATCH, hdrs.METH_POST, hdrs.METH_PUT} - ALL_METHODS = GET_METHODS.union(POST_METHODS).union({hdrs.METH_DELETE}) - - DEFAULT_HEADERS = { - hdrs.ACCEPT: "*/*", - hdrs.ACCEPT_ENCODING: "gzip, deflate", - } - - body = b"" - auth = None - response = None - - _writer = None # async task for streaming data - _continue = None # waiter future for '100 Continue' response - - # N.B. - # Adding __del__ method with self._writer closing doesn't make sense - # because _writer is instance method, thus it keeps a reference to self. - # Until writer has finished finalizer will not be called. 
- - def __init__( - self, - method: str, - url: URL, - *, - params: Optional[Mapping[str, str]] = None, - headers: Optional[LooseHeaders] = None, - skip_auto_headers: Iterable[str] = frozenset(), - data: Any = None, - cookies: Optional[LooseCookies] = None, - auth: Optional[BasicAuth] = None, - version: http.HttpVersion = http.HttpVersion11, - compress: Optional[str] = None, - chunked: Optional[bool] = None, - expect100: bool = False, - loop: Optional[asyncio.AbstractEventLoop] = None, - response_class: Optional[Type["ClientResponse"]] = None, - proxy: Optional[URL] = None, - proxy_auth: Optional[BasicAuth] = None, - timer: Optional[BaseTimerContext] = None, - session: Optional["ClientSession"] = None, - ssl: Union[SSLContext, bool, Fingerprint, None] = None, - proxy_headers: Optional[LooseHeaders] = None, - traces: Optional[List["Trace"]] = None, - ): - - if loop is None: - loop = asyncio.get_event_loop() - - assert isinstance(url, URL), url - assert isinstance(proxy, (URL, type(None))), proxy - # FIXME: session is None in tests only, need to fix tests - # assert session is not None - self._session = cast("ClientSession", session) - if params: - q = MultiDict(url.query) - url2 = url.with_query(params) - q.extend(url2.query) - url = url.with_query(q) - self.original_url = url - self.url = url.with_fragment(None) - self.method = method.upper() - self.chunked = chunked - self.compress = compress - self.loop = loop - self.length = None - if response_class is None: - real_response_class = ClientResponse - else: - real_response_class = response_class - self.response_class: Type[ClientResponse] = real_response_class - self._timer = timer if timer is not None else TimerNoop() - self._ssl = ssl - - if loop.get_debug(): - self._source_traceback = traceback.extract_stack(sys._getframe(1)) - - self.update_version(version) - self.update_host(url) - self.update_headers(headers) - self.update_auto_headers(skip_auto_headers) - self.update_cookies(cookies) - 
self.update_content_encoding(data) - self.update_auth(auth) - self.update_proxy(proxy, proxy_auth, proxy_headers) - - self.update_body_from_data(data) - if data is not None or self.method not in self.GET_METHODS: - self.update_transfer_encoding() - self.update_expect_continue(expect100) - if traces is None: - traces = [] - self._traces = traces - - def is_ssl(self) -> bool: - return self.url.scheme in ("https", "wss") - - @property - def ssl(self) -> Union["SSLContext", None, bool, Fingerprint]: - return self._ssl - - @property - def connection_key(self) -> ConnectionKey: - proxy_headers = self.proxy_headers - if proxy_headers: - h: Optional[int] = hash(tuple((k, v) for k, v in proxy_headers.items())) - else: - h = None - return ConnectionKey( - self.host, - self.port, - self.is_ssl(), - self.ssl, - self.proxy, - self.proxy_auth, - h, - ) - - @property - def host(self) -> str: - ret = self.url.raw_host - assert ret is not None - return ret - - @property - def port(self) -> Optional[int]: - return self.url.port - - @property - def request_info(self) -> RequestInfo: - headers: CIMultiDictProxy[str] = CIMultiDictProxy(self.headers) - return RequestInfo(self.url, self.method, headers, self.original_url) - - def update_host(self, url: URL) -> None: - """Update destination host, port and connection type (ssl).""" - # get host/port - if not url.raw_host: - raise InvalidURL(url) - - # basic auth info - username, password = url.user, url.password - if username: - self.auth = helpers.BasicAuth(username, password or "") - - def update_version(self, version: Union[http.HttpVersion, str]) -> None: - """Convert request version to two elements tuple. 
- - parser HTTP version '1.1' => (1, 1) - """ - if isinstance(version, str): - v = [part.strip() for part in version.split(".", 1)] - try: - version = http.HttpVersion(int(v[0]), int(v[1])) - except ValueError: - raise ValueError( - f"Can not parse http version number: {version}" - ) from None - self.version = version - - def update_headers(self, headers: Optional[LooseHeaders]) -> None: - """Update request headers.""" - self.headers: CIMultiDict[str] = CIMultiDict() - - # add host - netloc = cast(str, self.url.raw_host) - if helpers.is_ipv6_address(netloc): - netloc = f"[{netloc}]" - if self.url.port is not None and not self.url.is_default_port(): - netloc += ":" + str(self.url.port) - self.headers[hdrs.HOST] = netloc - - if headers: - if isinstance(headers, (dict, MultiDictProxy, MultiDict)): - headers = headers.items() # type: ignore[assignment] - - for key, value in headers: # type: ignore[misc] - # A special case for Host header - if key.lower() == "host": - self.headers[key] = value - else: - self.headers.add(key, value) - - def update_auto_headers(self, skip_auto_headers: Iterable[str]) -> None: - self.skip_auto_headers = CIMultiDict( - (hdr, None) for hdr in sorted(skip_auto_headers) - ) - used_headers = self.headers.copy() - used_headers.extend(self.skip_auto_headers) # type: ignore[arg-type] - - for hdr, val in self.DEFAULT_HEADERS.items(): - if hdr not in used_headers: - self.headers.add(hdr, val) - - if hdrs.USER_AGENT not in used_headers: - self.headers[hdrs.USER_AGENT] = SERVER_SOFTWARE - - def update_cookies(self, cookies: Optional[LooseCookies]) -> None: - """Update request cookies header.""" - if not cookies: - return - - c: SimpleCookie[str] = SimpleCookie() - if hdrs.COOKIE in self.headers: - c.load(self.headers.get(hdrs.COOKIE, "")) - del self.headers[hdrs.COOKIE] - - if isinstance(cookies, Mapping): - iter_cookies = cookies.items() - else: - iter_cookies = cookies # type: ignore[assignment] - for name, value in iter_cookies: - if 
isinstance(value, Morsel): - # Preserve coded_value - mrsl_val = value.get(value.key, Morsel()) - mrsl_val.set(value.key, value.value, value.coded_value) - c[name] = mrsl_val - else: - c[name] = value # type: ignore[assignment] - - self.headers[hdrs.COOKIE] = c.output(header="", sep=";").strip() - - def update_content_encoding(self, data: Any) -> None: - """Set request content encoding.""" - if data is None: - return - - enc = self.headers.get(hdrs.CONTENT_ENCODING, "").lower() - if enc: - if self.compress: - raise ValueError( - "compress can not be set " "if Content-Encoding header is set" - ) - elif self.compress: - if not isinstance(self.compress, str): - self.compress = "deflate" - self.headers[hdrs.CONTENT_ENCODING] = self.compress - self.chunked = True # enable chunked, no need to deal with length - - def update_transfer_encoding(self) -> None: - """Analyze transfer-encoding header.""" - te = self.headers.get(hdrs.TRANSFER_ENCODING, "").lower() - - if "chunked" in te: - if self.chunked: - raise ValueError( - "chunked can not be set " - 'if "Transfer-Encoding: chunked" header is set' - ) - - elif self.chunked: - if hdrs.CONTENT_LENGTH in self.headers: - raise ValueError( - "chunked can not be set " "if Content-Length header is set" - ) - - self.headers[hdrs.TRANSFER_ENCODING] = "chunked" - else: - if hdrs.CONTENT_LENGTH not in self.headers: - self.headers[hdrs.CONTENT_LENGTH] = str(len(self.body)) - - def update_auth(self, auth: Optional[BasicAuth]) -> None: - """Set basic auth.""" - if auth is None: - auth = self.auth - if auth is None: - return - - if not isinstance(auth, helpers.BasicAuth): - raise TypeError("BasicAuth() tuple is required instead") - - self.headers[hdrs.AUTHORIZATION] = auth.encode() - - def update_body_from_data(self, body: Any) -> None: - if body is None: - return - - # FormData - if isinstance(body, FormData): - body = body() - - try: - body = payload.PAYLOAD_REGISTRY.get(body, disposition=None) - except payload.LookupError: - body = 
FormData(body)() - - self.body = body - - # enable chunked encoding if needed - if not self.chunked: - if hdrs.CONTENT_LENGTH not in self.headers: - size = body.size - if size is None: - self.chunked = True - else: - if hdrs.CONTENT_LENGTH not in self.headers: - self.headers[hdrs.CONTENT_LENGTH] = str(size) - - # copy payload headers - assert body.headers - for (key, value) in body.headers.items(): - if key in self.headers: - continue - if key in self.skip_auto_headers: - continue - self.headers[key] = value - - def update_expect_continue(self, expect: bool = False) -> None: - if expect: - self.headers[hdrs.EXPECT] = "100-continue" - elif self.headers.get(hdrs.EXPECT, "").lower() == "100-continue": - expect = True - - if expect: - self._continue = self.loop.create_future() - - def update_proxy( - self, - proxy: Optional[URL], - proxy_auth: Optional[BasicAuth], - proxy_headers: Optional[LooseHeaders], - ) -> None: - if proxy_auth and not isinstance(proxy_auth, helpers.BasicAuth): - raise ValueError("proxy_auth must be None or BasicAuth() tuple") - self.proxy = proxy - self.proxy_auth = proxy_auth - self.proxy_headers = proxy_headers - - def keep_alive(self) -> bool: - if self.version < HttpVersion10: - # keep alive not supported at all - return False - if self.version == HttpVersion10: - if self.headers.get(hdrs.CONNECTION) == "keep-alive": - return True - else: # no headers means we close for Http 1.0 - return False - elif self.headers.get(hdrs.CONNECTION) == "close": - return False - - return True - - async def write_bytes( - self, writer: AbstractStreamWriter, conn: "Connection" - ) -> None: - """Support coroutines that yields bytes objects.""" - # 100 response - if self._continue is not None: - await writer.drain() - await self._continue - - protocol = conn.protocol - assert protocol is not None - try: - if isinstance(self.body, payload.Payload): - await self.body.write(writer) - else: - if isinstance(self.body, (bytes, bytearray)): - self.body = (self.body,) # 
type: ignore[assignment] - - for chunk in self.body: - await writer.write(chunk) # type: ignore[arg-type] - - await writer.write_eof() - except OSError as exc: - if exc.errno is None and isinstance(exc, asyncio.TimeoutError): - protocol.set_exception(exc) - else: - new_exc = ClientOSError( - exc.errno, "Can not write request body for %s" % self.url - ) - new_exc.__context__ = exc - new_exc.__cause__ = exc - protocol.set_exception(new_exc) - except asyncio.CancelledError as exc: - if not conn.closed: - protocol.set_exception(exc) - except Exception as exc: - protocol.set_exception(exc) - finally: - self._writer = None - - async def send(self, conn: "Connection") -> "ClientResponse": - # Specify request target: - # - CONNECT request must send authority form URI - # - not CONNECT proxy must send absolute form URI - # - most common is origin form URI - if self.method == hdrs.METH_CONNECT: - connect_host = self.url.raw_host - assert connect_host is not None - if helpers.is_ipv6_address(connect_host): - connect_host = f"[{connect_host}]" - path = f"{connect_host}:{self.url.port}" - elif self.proxy and not self.is_ssl(): - path = str(self.url) - else: - path = self.url.raw_path - if self.url.raw_query_string: - path += "?" 
+ self.url.raw_query_string - - protocol = conn.protocol - assert protocol is not None - writer = StreamWriter( - protocol, - self.loop, - on_chunk_sent=functools.partial( - self._on_chunk_request_sent, self.method, self.url - ), - on_headers_sent=functools.partial( - self._on_headers_request_sent, self.method, self.url - ), - ) - - if self.compress: - writer.enable_compression(self.compress) - - if self.chunked is not None: - writer.enable_chunking() - - # set default content-type - if ( - self.method in self.POST_METHODS - and hdrs.CONTENT_TYPE not in self.skip_auto_headers - and hdrs.CONTENT_TYPE not in self.headers - ): - self.headers[hdrs.CONTENT_TYPE] = "application/octet-stream" - - # set the connection header - connection = self.headers.get(hdrs.CONNECTION) - if not connection: - if self.keep_alive(): - if self.version == HttpVersion10: - connection = "keep-alive" - else: - if self.version == HttpVersion11: - connection = "close" - - if connection is not None: - self.headers[hdrs.CONNECTION] = connection - - # status + headers - status_line = "{0} {1} HTTP/{2[0]}.{2[1]}".format( - self.method, path, self.version - ) - await writer.write_headers(status_line, self.headers) - - self._writer = self.loop.create_task(self.write_bytes(writer, conn)) - - response_class = self.response_class - assert response_class is not None - self.response = response_class( - self.method, - self.original_url, - writer=self._writer, - continue100=self._continue, - timer=self._timer, - request_info=self.request_info, - traces=self._traces, - loop=self.loop, - session=self._session, - ) - return self.response - - async def close(self) -> None: - if self._writer is not None: - try: - await self._writer - finally: - self._writer = None - - def terminate(self) -> None: - if self._writer is not None: - if not self.loop.is_closed(): - self._writer.cancel() - self._writer = None - - async def _on_chunk_request_sent(self, method: str, url: URL, chunk: bytes) -> None: - for trace in 
self._traces: - await trace.send_request_chunk_sent(method, url, chunk) - - async def _on_headers_request_sent( - self, method: str, url: URL, headers: "CIMultiDict[str]" - ) -> None: - for trace in self._traces: - await trace.send_request_headers(method, url, headers) - - -class ClientResponse(HeadersMixin): - - # from the Status-Line of the response - version = None # HTTP-Version - status: int = None # type: ignore[assignment] # Status-Code - reason = None # Reason-Phrase - - content: StreamReader = None # type: ignore[assignment] # Payload stream - _headers: "CIMultiDictProxy[str]" = None # type: ignore[assignment] - _raw_headers: RawHeaders = None # type: ignore[assignment] # Response raw headers - - _connection = None # current connection - _source_traceback = None - # setted up by ClientRequest after ClientResponse object creation - # post-init stage allows to not change ctor signature - _closed = True # to allow __del__ for non-initialized properly response - _released = False - - def __init__( - self, - method: str, - url: URL, - *, - writer: "asyncio.Task[None]", - continue100: Optional["asyncio.Future[bool]"], - timer: BaseTimerContext, - request_info: RequestInfo, - traces: List["Trace"], - loop: asyncio.AbstractEventLoop, - session: "ClientSession", - ) -> None: - assert isinstance(url, URL) - - self.method = method - self.cookies: SimpleCookie[str] = SimpleCookie() - - self._real_url = url - self._url = url.with_fragment(None) - self._body: Any = None - self._writer: Optional[asyncio.Task[None]] = writer - self._continue = continue100 # None by default - self._closed = True - self._history: Tuple[ClientResponse, ...] 
= () - self._request_info = request_info - self._timer = timer if timer is not None else TimerNoop() - self._cache: Dict[str, Any] = {} - self._traces = traces - self._loop = loop - # store a reference to session #1985 - self._session: Optional[ClientSession] = session - if loop.get_debug(): - self._source_traceback = traceback.extract_stack(sys._getframe(1)) - - @reify - def url(self) -> URL: - return self._url - - @reify - def url_obj(self) -> URL: - warnings.warn("Deprecated, use .url #1654", DeprecationWarning, stacklevel=2) - return self._url - - @reify - def real_url(self) -> URL: - return self._real_url - - @reify - def host(self) -> str: - assert self._url.host is not None - return self._url.host - - @reify - def headers(self) -> "CIMultiDictProxy[str]": - return self._headers - - @reify - def raw_headers(self) -> RawHeaders: - return self._raw_headers - - @reify - def request_info(self) -> RequestInfo: - return self._request_info - - @reify - def content_disposition(self) -> Optional[ContentDisposition]: - raw = self._headers.get(hdrs.CONTENT_DISPOSITION) - if raw is None: - return None - disposition_type, params_dct = multipart.parse_content_disposition(raw) - params = MappingProxyType(params_dct) - filename = multipart.content_disposition_filename(params) - return ContentDisposition(disposition_type, params, filename) - - def __del__(self, _warnings: Any = warnings) -> None: - if self._closed: - return - - if self._connection is not None: - self._connection.release() - self._cleanup_writer() - - if self._loop.get_debug(): - if PY_36: - kwargs = {"source": self} - else: - kwargs = {} - _warnings.warn(f"Unclosed response {self!r}", ResourceWarning, **kwargs) - context = {"client_response": self, "message": "Unclosed response"} - if self._source_traceback: - context["source_traceback"] = self._source_traceback - self._loop.call_exception_handler(context) - - def __repr__(self) -> str: - out = io.StringIO() - ascii_encodable_url = str(self.url) - if 
self.reason: - ascii_encodable_reason = self.reason.encode( - "ascii", "backslashreplace" - ).decode("ascii") - else: - ascii_encodable_reason = self.reason - print( - "<ClientResponse({}) [{} {}]>".format( - ascii_encodable_url, self.status, ascii_encodable_reason - ), - file=out, - ) - print(self.headers, file=out) - return out.getvalue() - - @property - def connection(self) -> Optional["Connection"]: - return self._connection - - @reify - def history(self) -> Tuple["ClientResponse", ...]: - """A sequence of responses, if redirects occurred.""" - return self._history - - @reify - def links(self) -> "MultiDictProxy[MultiDictProxy[Union[str, URL]]]": - links_str = ", ".join(self.headers.getall("link", [])) - - if not links_str: - return MultiDictProxy(MultiDict()) - - links: MultiDict[MultiDictProxy[Union[str, URL]]] = MultiDict() - - for val in re.split(r",(?=\s*<)", links_str): - match = re.match(r"\s*<(.*)>(.*)", val) - if match is None: # pragma: no cover - # the check exists to suppress mypy error - continue - url, params_str = match.groups() - params = params_str.split(";")[1:] - - link: MultiDict[Union[str, URL]] = MultiDict() - - for param in params: - match = re.match(r"^\s*(\S*)\s*=\s*(['\"]?)(.*?)(\2)\s*$", param, re.M) - if match is None: # pragma: no cover - # the check exists to suppress mypy error - continue - key, _, value, _ = match.groups() - - link.add(key, value) - - key = link.get("rel", url) # type: ignore[assignment] - - link.add("url", self.url.join(URL(url))) - - links.add(key, MultiDictProxy(link)) - - return MultiDictProxy(links) - - async def start(self, connection: "Connection") -> "ClientResponse": - """Start response processing.""" - self._closed = False - self._protocol = connection.protocol - self._connection = connection - - with self._timer: - while True: - # read response - try: - protocol = self._protocol - message, payload = await protocol.read() # type: ignore[union-attr] - except http.HttpProcessingError as exc: - raise ClientResponseError( -
self.request_info, - self.history, - status=exc.code, - message=exc.message, - headers=exc.headers, - ) from exc - - if message.code < 100 or message.code > 199 or message.code == 101: - break - - if self._continue is not None: - set_result(self._continue, True) - self._continue = None - - # payload eof handler - payload.on_eof(self._response_eof) - - # response status - self.version = message.version - self.status = message.code - self.reason = message.reason - - # headers - self._headers = message.headers # type is CIMultiDictProxy - self._raw_headers = message.raw_headers # type is Tuple[bytes, bytes] - - # payload - self.content = payload - - # cookies - for hdr in self.headers.getall(hdrs.SET_COOKIE, ()): - try: - self.cookies.load(hdr) - except CookieError as exc: - client_logger.warning("Can not load response cookies: %s", exc) - return self - - def _response_eof(self) -> None: - if self._closed: - return - - if self._connection is not None: - # websocket, protocol could be None because - # connection could be detached - if ( - self._connection.protocol is not None - and self._connection.protocol.upgraded - ): - return - - self._connection.release() - self._connection = None - - self._closed = True - self._cleanup_writer() - - @property - def closed(self) -> bool: - return self._closed - - def close(self) -> None: - if not self._released: - self._notify_content() - if self._closed: - return - - self._closed = True - if self._loop is None or self._loop.is_closed(): - return - - if self._connection is not None: - self._connection.close() - self._connection = None - self._cleanup_writer() - - def release(self) -> Any: - if not self._released: - self._notify_content() - if self._closed: - return noop() - - self._closed = True - if self._connection is not None: - self._connection.release() - self._connection = None - - self._cleanup_writer() - return noop() - - @property - def ok(self) -> bool: - """Returns ``True`` if ``status`` is less than ``400``, ``False`` 
if not. - - This is **not** a check for ``200 OK`` but a check that the response - status is under 400. - """ - return 400 > self.status - - def raise_for_status(self) -> None: - if not self.ok: - # reason should always be not None for a started response - assert self.reason is not None - self.release() - raise ClientResponseError( - self.request_info, - self.history, - status=self.status, - message=self.reason, - headers=self.headers, - ) - - def _cleanup_writer(self) -> None: - if self._writer is not None: - self._writer.cancel() - self._writer = None - self._session = None - - def _notify_content(self) -> None: - content = self.content - if content and content.exception() is None: - content.set_exception(ClientConnectionError("Connection closed")) - self._released = True - - async def wait_for_close(self) -> None: - if self._writer is not None: - try: - await self._writer - finally: - self._writer = None - self.release() - - async def read(self) -> bytes: - """Read response payload.""" - if self._body is None: - try: - self._body = await self.content.read() - for trace in self._traces: - await trace.send_response_chunk_received( - self.method, self.url, self._body - ) - except BaseException: - self.close() - raise - elif self._released: - raise ClientConnectionError("Connection closed") - - return self._body # type: ignore[no-any-return] - - def get_encoding(self) -> str: - ctype = self.headers.get(hdrs.CONTENT_TYPE, "").lower() - mimetype = helpers.parse_mimetype(ctype) - - encoding = mimetype.parameters.get("charset") - if encoding: - try: - codecs.lookup(encoding) - except LookupError: - encoding = None - if not encoding: - if mimetype.type == "application" and ( - mimetype.subtype == "json" or mimetype.subtype == "rdap" - ): - # RFC 7159 states that the default encoding is UTF-8. 
- # RFC 7483 defines application/rdap+json - encoding = "utf-8" - elif self._body is None: - raise RuntimeError( - "Cannot guess the encoding of " "a not yet read body" - ) - else: - encoding = chardet.detect(self._body)["encoding"] - if not encoding: - encoding = "utf-8" - - return encoding - - async def text(self, encoding: Optional[str] = None, errors: str = "strict") -> str: - """Read response payload and decode.""" - if self._body is None: - await self.read() - - if encoding is None: - encoding = self.get_encoding() - - return self._body.decode( # type: ignore[no-any-return,union-attr] - encoding, errors=errors - ) - - async def json( - self, - *, - encoding: Optional[str] = None, - loads: JSONDecoder = DEFAULT_JSON_DECODER, - content_type: Optional[str] = "application/json", - ) -> Any: - """Read and decodes JSON response.""" - if self._body is None: - await self.read() - - if content_type: - ctype = self.headers.get(hdrs.CONTENT_TYPE, "").lower() - if not _is_expected_content_type(ctype, content_type): - raise ContentTypeError( - self.request_info, - self.history, - message=( - "Attempt to decode JSON with " "unexpected mimetype: %s" % ctype - ), - headers=self.headers, - ) - - stripped = self._body.strip() # type: ignore[union-attr] - if not stripped: - return None - - if encoding is None: - encoding = self.get_encoding() - - return loads(stripped.decode(encoding)) - - async def __aenter__(self) -> "ClientResponse": - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - # similar to _RequestContextManager, we do not need to check - # for exceptions, response object can close connection - # if state is broken - self.release() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dataclasses_json/utils.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dataclasses_json/utils.py deleted file 
mode 100644 index 0927cd0160c02a03b3b17875a995ac0458bb4fcb..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dataclasses_json/utils.py +++ /dev/null @@ -1,209 +0,0 @@ -import inspect -import sys -from datetime import datetime, timezone -from typing import (Collection, Mapping, Optional, TypeVar, Any, Type, Tuple, - Union, cast) - - -def _get_type_cons(type_): - """More spaghetti logic for 3.6 vs. 3.7""" - if sys.version_info.minor == 6: - try: - cons = type_.__extra__ - except AttributeError: - try: - cons = type_.__origin__ - except AttributeError: - cons = type_ - else: - cons = type_ if cons is None else cons - else: - try: - cons = type_.__origin__ if cons is None else cons - except AttributeError: - cons = type_ - else: - cons = type_.__origin__ - return cons - - -_NO_TYPE_ORIGIN = object() - - -def _get_type_origin(type_): - """Some spaghetti logic to accommodate differences between 3.6 and 3.7 in - the typing api""" - try: - origin = type_.__origin__ - except AttributeError: - # Issue #341 and PR #346: - # For some cases, the type_.__origin__ exists but is set to None - origin = _NO_TYPE_ORIGIN - - if sys.version_info.minor == 6: - try: - origin = type_.__extra__ - except AttributeError: - origin = type_ - else: - origin = type_ if origin in (None, _NO_TYPE_ORIGIN) else origin - elif origin is _NO_TYPE_ORIGIN: - origin = type_ - return origin - - -def _hasargs(type_, *args): - try: - res = all(arg in type_.__args__ for arg in args) - except AttributeError: - return False - except TypeError: - if (type_.__args__ is None): - return False - else: - raise - else: - return res - - -class _NoArgs(object): - def __bool__(self): - return False - - def __len__(self): - return 0 - - def __iter__(self): - return self - - def __next__(self): - raise StopIteration - - -_NO_ARGS = _NoArgs() - - -def _get_type_args(tp: Type, default: Union[Tuple[Type, ...], _NoArgs] = _NO_ARGS) -> \ - Union[Tuple[Type, ...], 
_NoArgs]: - if hasattr(tp, '__args__'): - if tp.__args__ is not None: - return tp.__args__ - return default - - -def _get_type_arg_param(tp: Type, index: int) -> Union[Type, _NoArgs]: - _args = _get_type_args(tp) - if _args is not _NO_ARGS: - try: - return cast(Tuple[Type, ...], _args)[index] - except (TypeError, IndexError, NotImplementedError): - pass - - return _NO_ARGS - - -def _isinstance_safe(o, t): - try: - result = isinstance(o, t) - except Exception: - return False - else: - return result - - -def _issubclass_safe(cls, classinfo): - try: - return issubclass(cls, classinfo) - except Exception: - return (_is_new_type_subclass_safe(cls, classinfo) - if _is_new_type(cls) - else False) - - -def _is_new_type_subclass_safe(cls, classinfo): - super_type = getattr(cls, "__supertype__", None) - - if super_type: - return _is_new_type_subclass_safe(super_type, classinfo) - - try: - return issubclass(cls, classinfo) - except Exception: - return False - - -def _is_new_type(type_): - return inspect.isfunction(type_) and hasattr(type_, "__supertype__") - - -def _is_optional(type_): - return (_issubclass_safe(type_, Optional) or - _hasargs(type_, type(None)) or - type_ is Any) - - -def _is_mapping(type_): - return _issubclass_safe(_get_type_origin(type_), Mapping) - - -def _is_collection(type_): - return _issubclass_safe(_get_type_origin(type_), Collection) - - -def _is_tuple(type_): - return _issubclass_safe(_get_type_origin(type_), Tuple) - - -def _is_nonstr_collection(type_): - return (_issubclass_safe(_get_type_origin(type_), Collection) - and not _issubclass_safe(type_, str)) - - -def _timestamp_to_dt_aware(timestamp: float): - tz = datetime.now(timezone.utc).astimezone().tzinfo - dt = datetime.fromtimestamp(timestamp, tz=tz) - return dt - - -def _undefined_parameter_action_safe(cls): - try: - if cls.dataclass_json_config is None: - return - action_enum = cls.dataclass_json_config['undefined'] - except (AttributeError, KeyError): - return - - if action_enum is None or 
action_enum.value is None: - return - - return action_enum - - -def _handle_undefined_parameters_safe(cls, kvs, usage: str): - """ - Checks if an undefined parameters action is defined and performs the - according action. - """ - undefined_parameter_action = _undefined_parameter_action_safe(cls) - usage = usage.lower() - if undefined_parameter_action is None: - return kvs if usage != "init" else cls.__init__ - if usage == "from": - return undefined_parameter_action.value.handle_from_dict(cls=cls, - kvs=kvs) - elif usage == "to": - return undefined_parameter_action.value.handle_to_dict(obj=cls, - kvs=kvs) - elif usage == "dump": - return undefined_parameter_action.value.handle_dump(obj=cls) - elif usage == "init": - return undefined_parameter_action.value.create_init(obj=cls) - else: - raise ValueError( - f"usage must be one of ['to', 'from', 'dump', 'init'], " - f"but is '{usage}'") - - -# Define a type for the CatchAll field -# https://stackoverflow.com/questions/59360567/define-a-custom-type-that-behaves-like-typing-any -CatchAllVar = TypeVar("CatchAllVar", bound=Mapping) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/reversename.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/reversename.py deleted file mode 100644 index 8236c711f16f1e3b514f182a8254cb0e0ce45a68..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/reversename.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2006-2017 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. 
-# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -"""DNS Reverse Map Names.""" - -import binascii - -import dns.ipv4 -import dns.ipv6 -import dns.name - -ipv4_reverse_domain = dns.name.from_text("in-addr.arpa.") -ipv6_reverse_domain = dns.name.from_text("ip6.arpa.") - - -def from_address( - text: str, - v4_origin: dns.name.Name = ipv4_reverse_domain, - v6_origin: dns.name.Name = ipv6_reverse_domain, -) -> dns.name.Name: - """Convert an IPv4 or IPv6 address in textual form into a Name object whose - value is the reverse-map domain name of the address. - - *text*, a ``str``, is an IPv4 or IPv6 address in textual form - (e.g. '127.0.0.1', '::1') - - *v4_origin*, a ``dns.name.Name`` to append to the labels corresponding to - the address if the address is an IPv4 address, instead of the default - (in-addr.arpa.) - - *v6_origin*, a ``dns.name.Name`` to append to the labels corresponding to - the address if the address is an IPv6 address, instead of the default - (ip6.arpa.) - - Raises ``dns.exception.SyntaxError`` if the address is badly formed. - - Returns a ``dns.name.Name``. 
- """ - - try: - v6 = dns.ipv6.inet_aton(text) - if dns.ipv6.is_mapped(v6): - parts = ["%d" % byte for byte in v6[12:]] - origin = v4_origin - else: - parts = [x for x in str(binascii.hexlify(v6).decode())] - origin = v6_origin - except Exception: - parts = ["%d" % byte for byte in dns.ipv4.inet_aton(text)] - origin = v4_origin - return dns.name.from_text(".".join(reversed(parts)), origin=origin) - - -def to_address( - name: dns.name.Name, - v4_origin: dns.name.Name = ipv4_reverse_domain, - v6_origin: dns.name.Name = ipv6_reverse_domain, -) -> str: - """Convert a reverse map domain name into textual address form. - - *name*, a ``dns.name.Name``, an IPv4 or IPv6 address in reverse-map name - form. - - *v4_origin*, a ``dns.name.Name`` representing the top-level domain for - IPv4 addresses, instead of the default (in-addr.arpa.) - - *v6_origin*, a ``dns.name.Name`` representing the top-level domain for - IPv6 addresses, instead of the default (ip6.arpa.) - - Raises ``dns.exception.SyntaxError`` if the name does not have a - reverse-map form. - - Returns a ``str``. - """ - - if name.is_subdomain(v4_origin): - name = name.relativize(v4_origin) - text = b".".join(reversed(name.labels)) - # run through inet_ntoa() to check syntax and make pretty. - return dns.ipv4.inet_ntoa(dns.ipv4.inet_aton(text)) - elif name.is_subdomain(v6_origin): - name = name.relativize(v6_origin) - labels = list(reversed(name.labels)) - parts = [] - for i in range(0, len(labels), 4): - parts.append(b"".join(labels[i : i + 4])) - text = b":".join(parts) - # run through inet_ntoa() to check syntax and make pretty. 
- return dns.ipv6.inet_ntoa(dns.ipv6.inet_aton(text)) - else: - raise dns.exception.SyntaxError("unknown reverse-map address family") diff --git a/spaces/jpfearnworks/ai_agents/docs/similarity_search.md b/spaces/jpfearnworks/ai_agents/docs/similarity_search.md deleted file mode 100644 index 09eccd37f187ce5fe2b9a587c4a76781d779fa79..0000000000000000000000000000000000000000 --- a/spaces/jpfearnworks/ai_agents/docs/similarity_search.md +++ /dev/null @@ -1,82 +0,0 @@ -# Notes on Similarity Search -Similarity search, also known as similarity measurement, is a key concept in many domains such as data mining, information retrieval, and machine learning. It quantifies the likeness or sameness between two data entities. Here, we explore three widely used methods for similarity search: Jaccard Similarity, W-Shingling, and Levenshtein Distance. - -## Jaccard Similarity -Jaccard similarity is a measure of how similar two sets are. It is defined as the size of the intersection divided by the size of the union of the two sets. It is a useful metric for comparing sets, because it is independent of the size of the sets, and it is symmetric, meaning that the Jaccard similarity of A and B is the same as the Jaccard similarity of B and A. - -Jaccard similarity is commonly used in information retrieval applications like document clustering and collaborative filtering. It is also used in machine learning applications like k-means clustering and k-nearest neighbors. -### Implementation : -```python -def jaccard(x: str, y: str): - """Jaccard similarity of two strings""" - x = set(x.split()) - y = set(y.split()) - shared = x.intersection(y) - union = x.union(y) - return len(shared) / len(union) -``` -### Pros: -- It's simple to understand and implement. -- It's good for comparing sets of data, such as lists or documents. -- It's binary, meaning it only cares if items exist, not how many times they exist. -### Cons: -- It can be sensitive to the size of the data. 
If the data sets are large but the intersection is small, the similarity can be perceived as low. -- It does not take into account the frequency of the items. -### Example: -You have two sets of data, A = {1, 2, 3, 4} and B = {3, 4, 5, 6}. The intersection of A and B is {3, 4}, and the union of A and B is {1, 2, 3, 4, 5, 6}. So, the Jaccard similarity is 2 (size of intersection) divided by 6 (size of union), which is approximately 0.33. - -## W-Shingling -A preprocessing method for strings or documents. It breaks the data into overlapping groups of W items. For example, if W = 2, then the string "I love to play football" would be broken into the following set: {"I love", "love to", "to play", "play football"}. The W-shingling method is useful for comparing documents or strings, because it can detect similarities even if the documents are not exactly the same. For example, if you have two documents that are identical except for one word, the W-shingling method will still be able to detect the similarities between the two documents. - -### Implementation: -```python -def w_shingling(a: str): - a = a.split() - # use tuples rather than lists so the shingles are hashable - return set((a[i], a[i+1]) for i in range(len(a)-1)) -``` - -### Pros: -- It's useful for comparing documents or strings. -- It's able to detect similarities in different parts of the data, not just exact matches. -- It's robust to small changes or errors in the data. - -### Cons: -- The choice of the length of the shingles (W) can greatly affect the result. Too small, and it might not capture meaningful similarities. Too large, and it might miss important differences. -- It can be computationally intensive, especially for large documents or strings. - -### Example: -You have two sentences, "I love to play football" and "I like to play football". If we take 2-shingles (two-word groups), we get the following sets: {"I love", "love to", "to play", "play football"} and {"I like", "like to", "to play", "play football"}.
The intersection is {"to play", "play football"}, and the union contains all six unique shingles, so the Jaccard similarity of the 2-shingles is 2/6, or approximately 0.33. - -## Levenshtein Distance -Let's consider you have two words, say 'cat' and 'bat'. You want to find out how similar these two words are. One way to do this is to see how many letters you need to change in 'cat' to make it 'bat'. In this case, you only need to change the 'c' in 'cat' to a 'b' to make it 'bat'. So, the Levenshtein distance between 'cat' and 'bat' is 1. This method is used to find out how similar two pieces of data are by measuring the minimum number of changes needed to turn one piece of data into the other. -### Implementation: -```python -import numpy as np - -def levenshtein_distance(a: str, b: str): - # (len(a)+1) x (len(b)+1) table; row/column 0 represent the empty prefix - lev = np.zeros((len(a)+1, len(b)+1), dtype=int) - for i in range(len(a)+1): - for j in range(len(b)+1): - if min(i, j) == 0: - # transforming to/from an empty string costs max(i, j) edits - lev[i, j] = max(i, j) - else: - # calculate three possible operations - x = lev[i-1, j] + 1 # deletion - y = lev[i, j-1] + 1 # insertion - z = lev[i-1, j-1] + (a[i-1] != b[j-1]) # substitution - # take the minimum of the three - lev[i, j] = min(x, y, z) - return lev, lev[-1, -1] -``` - -### Pros: -- It's useful for comparing strings or sequences. -- It's able to quantify the difference between two pieces of data. -- It's useful in applications like spell checking, where you want to find the smallest number of edits to turn one word into another. -### Cons: -- It can be computationally expensive for long strings. -- It does not handle transpositions (two characters being swapped) well; a transposition is counted as two operations instead of one. -### Example: -The words "kitten" and "sitting" have a Levenshtein distance of 3 because three operations are needed to turn "kitten" into "sitting": replace 'k' with 's', replace 'e' with 'i', and append 'g'. 
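Taken together, the three measures can be compared on the same inputs. Below is a minimal, self-contained sketch in plain Python; the tuple-based shingles and the (len(a)+1) × (len(b)+1) edit-distance table are implementation choices made here so the helpers run as written, not a verbatim copy of the snippets in the note:

```python
def jaccard(x: str, y: str) -> float:
    """Jaccard similarity of the word sets of two strings."""
    xs, ys = set(x.split()), set(y.split())
    return len(xs & ys) / len(xs | ys)

def w_shingling(a: str, w: int = 2) -> set:
    """Overlapping w-word shingles, stored as tuples so they are hashable."""
    words = a.split()
    return {tuple(words[i:i + w]) for i in range(len(words) - w + 1)}

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    # (len(a)+1) x (len(b)+1) table; row/column 0 stand for the empty prefix
    lev = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        for j in range(len(b) + 1):
            if min(i, j) == 0:
                lev[i][j] = max(i, j)  # edits against the empty string
            else:
                lev[i][j] = min(
                    lev[i - 1][j] + 1,                           # deletion
                    lev[i][j - 1] + 1,                           # insertion
                    lev[i - 1][j - 1] + (a[i - 1] != b[j - 1]),  # substitution
                )
    return lev[-1][-1]

s1, s2 = "I love to play football", "I like to play football"
print(round(jaccard(s1, s2), 2))               # 4 shared words / 6 total -> 0.67
print(len(w_shingling(s1) & w_shingling(s2)))  # 2 shared 2-shingles
print(levenshtein("kitten", "sitting"))        # 3
```

On the example sentences, word-level Jaccard rates the pair higher (4 of 6 words shared) than 2-shingling does (2 of 6 shingles shared) — exactly the kind of difference the choice of measure is meant to expose.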
- diff --git a/spaces/jracca/04-learning-space/app.py b/spaces/jracca/04-learning-space/app.py deleted file mode 100644 index e0f03cf2557eba112bf95ebf5eb582da8d8a0fe3..0000000000000000000000000000000000000000 --- a/spaces/jracca/04-learning-space/app.py +++ /dev/null @@ -1,119 +0,0 @@ -from collections import deque -import streamlit as st -import torch -from streamlit_player import st_player -from transformers import AutoModelForCTC, Wav2Vec2Processor -from streaming import ffmpeg_stream - -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -player_options = { - "events": ["onProgress"], - "progress_interval": 200, - "volume": 1.0, - "playing": True, - "loop": False, - "controls": False, - "muted": False, - "config": {"youtube": {"playerVars": {"start": 1}}}, -} - -# disable rapid fading in and out on `st.code` updates -st.markdown("", unsafe_allow_html=True) - -@st.cache(hash_funcs={torch.nn.parameter.Parameter: lambda _: None}) -def load_model(model_path="facebook/wav2vec2-large-robust-ft-swbd-300h"): - processor = Wav2Vec2Processor.from_pretrained(model_path) - model = AutoModelForCTC.from_pretrained(model_path).to(device) - return processor, model - -processor, model = load_model() - -def stream_text(url, chunk_duration_ms, pad_duration_ms): - sampling_rate = processor.feature_extractor.sampling_rate - - # calculate the length of logits to cut from the sides of the output to account for input padding - output_pad_len = model._get_feat_extract_output_lengths(int(sampling_rate * pad_duration_ms / 1000)) - - # define the audio chunk generator - stream = ffmpeg_stream(url, sampling_rate, chunk_duration_ms=chunk_duration_ms, pad_duration_ms=pad_duration_ms) - - leftover_text = "" - for i, chunk in enumerate(stream): - input_values = processor(chunk, sampling_rate=sampling_rate, return_tensors="pt").input_values - - with torch.no_grad(): - logits = model(input_values.to(device)).logits[0] - if i > 0: - logits = logits[output_pad_len : len(logits) - 
output_pad_len] - else: # don't count padding at the start of the clip - logits = logits[: len(logits) - output_pad_len] - - predicted_ids = torch.argmax(logits, dim=-1).cpu().tolist() - if processor.decode(predicted_ids).strip(): - leftover_ids = processor.tokenizer.encode(leftover_text) - # concat the last word (or its part) from the last frame with the current text - text = processor.decode(leftover_ids + predicted_ids) - # don't return the last word in case it's just partially recognized - text, leftover_text = text.rsplit(" ", 1) - yield text - else: - yield leftover_text - leftover_text = "" - yield leftover_text - -def main(): - state = st.session_state - st.header("Video ASR Streamlit from Youtube Link") - - with st.form(key="inputs_form"): - - # Our worlds best teachers on subjects of AI, Cognitive, Neuroscience for our Behavioral and Medical Health - ytJoschaBach="https://youtu.be/cC1HszE5Hcw?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=8984" - ytSamHarris="https://www.youtube.com/watch?v=4dC_nRYIDZU&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=2" - ytJohnAbramson="https://www.youtube.com/watch?v=arrokG3wCdE&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=3" - ytElonMusk="https://www.youtube.com/watch?v=DxREm3s1scA&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=4" - ytJeffreyShainline="https://www.youtube.com/watch?v=EwueqdgIvq4&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=5" - ytJeffHawkins="https://www.youtube.com/watch?v=Z1KwkpTUbkg&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=6" - ytSamHarris="https://youtu.be/Ui38ZzTymDY?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L" - ytSamHarris="https://youtu.be/4dC_nRYIDZU?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=7809" - ytSamHarris="https://youtu.be/4dC_nRYIDZU?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=7809" - ytSamHarris="https://youtu.be/4dC_nRYIDZU?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=7809" - ytTimelapseAI="https://www.youtube.com/watch?v=63yr9dlI0cU&list=PLHgX2IExbFovQybyfltywXnqZi5YvaSS-" - state.youtube_url = 
st.text_input("YouTube URL", ytTimelapseAI) - - - state.chunk_duration_ms = st.slider("Audio chunk duration (ms)", 2000, 10000, 3000, 100) - state.pad_duration_ms = st.slider("Padding duration (ms)", 100, 5000, 1000, 100) - submit_button = st.form_submit_button(label="Submit") - - if submit_button or "asr_stream" not in state: - # a hack to update the video player on value changes - state.youtube_url = ( - state.youtube_url.split("&hash=")[0] - + f"&hash={state.chunk_duration_ms}-{state.pad_duration_ms}" - ) - state.asr_stream = stream_text( - state.youtube_url, state.chunk_duration_ms, state.pad_duration_ms - ) - state.chunks_taken = 0 - - - state.lines = deque([], maxlen=100) # limit to the last n lines of subs - - - player = st_player(state.youtube_url, **player_options, key="youtube_player") - - if "asr_stream" in state and player.data and player.data["played"] < 1.0: - # check how many seconds were played, and if more than processed - write the next text chunk - processed_seconds = state.chunks_taken * (state.chunk_duration_ms / 1000) - if processed_seconds < player.data["playedSeconds"]: - text = next(state.asr_stream) - state.lines.append(text) - state.chunks_taken += 1 - if "lines" in state: - # print the lines of subs - st.code("\n".join(state.lines)) - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/juancopi81/youtube-music-transcribe/mt3/metrics_utils_test.py b/spaces/juancopi81/youtube-music-transcribe/mt3/metrics_utils_test.py deleted file mode 100644 index be4f977dddce00c0a546eb31a657c62980d05a70..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/youtube-music-transcribe/mt3/metrics_utils_test.py +++ /dev/null @@ -1,259 +0,0 @@ -# Copyright 2022 The MT3 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tests for metrics_utils.""" - -from mt3 import event_codec -from mt3 import metrics_utils -from mt3 import note_sequences - -import note_seq -import numpy as np -import tensorflow as tf - - -class MetricsUtilsTest(tf.test.TestCase): - - def test_event_predictions_to_ns(self): - predictions = [ - { - 'raw_inputs': [0, 0], - 'start_time': 0.0, - 'est_tokens': [20, 160], - }, - { - 'raw_inputs': [1, 1], - 'start_time': 0.4, - # These last 2 events should be dropped. - 'est_tokens': [20, 161, 50, 162], - }, - { - 'raw_inputs': [2, 2], - 'start_time': 0.8, - 'est_tokens': [163, 20, 164] - }, - ] - expected_ns = note_seq.NoteSequence(ticks_per_quarter=220) - expected_ns.notes.add( - pitch=59, - velocity=100, - start_time=0.20, - end_time=0.21) - expected_ns.notes.add( - pitch=60, - velocity=100, - start_time=0.60, - end_time=0.61) - expected_ns.notes.add( - pitch=62, - velocity=100, - start_time=0.80, - end_time=0.81) - expected_ns.notes.add( - pitch=63, - velocity=100, - start_time=1.00, - end_time=1.01) - expected_ns.total_time = 1.01 - - codec = event_codec.Codec( - max_shift_steps=100, - steps_per_second=100, - event_ranges=[ - event_codec.EventRange('pitch', note_seq.MIN_MIDI_PITCH, - note_seq.MAX_MIDI_PITCH)]) - res = metrics_utils.event_predictions_to_ns( - predictions, codec=codec, - encoding_spec=note_sequences.NoteOnsetEncodingSpec) - self.assertProtoEquals(expected_ns, res['est_ns']) - self.assertEqual(0, res['est_invalid_events']) - self.assertEqual(2, res['est_dropped_events']) - np.testing.assert_array_equal([0, 0, 1, 1, 2, 2], 
res['raw_inputs']) - - def test_event_predictions_to_ns_with_offsets(self): - predictions = [ - { - 'raw_inputs': [0, 0], - 'start_time': 0.0, - 'est_tokens': [20, 356, 160], - }, - { - 'raw_inputs': [1, 1], - 'start_time': 0.4, - 'est_tokens': [20, 292, 161], - }, - { - 'raw_inputs': [2, 2], - 'start_time': 0.8, - 'est_tokens': [20, 229, 160, 161] - }, - ] - expected_ns = note_seq.NoteSequence(ticks_per_quarter=220) - expected_ns.notes.add( - pitch=59, - velocity=127, - start_time=0.20, - end_time=1.00) - expected_ns.notes.add( - pitch=60, - velocity=63, - start_time=0.60, - end_time=1.00) - expected_ns.total_time = 1.00 - - codec = event_codec.Codec( - max_shift_steps=100, - steps_per_second=100, - event_ranges=[ - event_codec.EventRange('pitch', note_seq.MIN_MIDI_PITCH, - note_seq.MAX_MIDI_PITCH), - event_codec.EventRange('velocity', 0, 127) - ]) - res = metrics_utils.event_predictions_to_ns( - predictions, codec=codec, encoding_spec=note_sequences.NoteEncodingSpec) - self.assertProtoEquals(expected_ns, res['est_ns']) - self.assertEqual(0, res['est_invalid_events']) - self.assertEqual(0, res['est_dropped_events']) - np.testing.assert_array_equal([0, 0, 1, 1, 2, 2], res['raw_inputs']) - - def test_event_predictions_to_ns_multitrack(self): - predictions = [ - { - 'raw_inputs': [0, 0], - 'start_time': 0.0, - 'est_tokens': [20, 517, 356, 160], - }, - { - 'raw_inputs': [1, 1], - 'start_time': 0.4, - 'est_tokens': [20, 356, 399], - }, - { - 'raw_inputs': [2, 2], - 'start_time': 0.8, - 'est_tokens': [20, 517, 229, 160] - }, - ] - expected_ns = note_seq.NoteSequence(ticks_per_quarter=220) - expected_ns.notes.add( - pitch=42, - velocity=127, - start_time=0.60, - end_time=0.61, - is_drum=True, - instrument=9) - expected_ns.notes.add( - pitch=59, - velocity=127, - start_time=0.20, - end_time=1.00, - program=32) - expected_ns.total_time = 1.00 - - codec = event_codec.Codec( - max_shift_steps=100, - steps_per_second=100, - event_ranges=[ - event_codec.EventRange('pitch', 
note_seq.MIN_MIDI_PITCH, - note_seq.MAX_MIDI_PITCH), - event_codec.EventRange('velocity', 0, 127), - event_codec.EventRange('drum', note_seq.MIN_MIDI_PITCH, - note_seq.MAX_MIDI_PITCH), - event_codec.EventRange('program', note_seq.MIN_MIDI_PROGRAM, - note_seq.MAX_MIDI_PROGRAM) - ]) - res = metrics_utils.event_predictions_to_ns( - predictions, codec=codec, encoding_spec=note_sequences.NoteEncodingSpec) - self.assertProtoEquals(expected_ns, res['est_ns']) - self.assertEqual(0, res['est_invalid_events']) - self.assertEqual(0, res['est_dropped_events']) - np.testing.assert_array_equal([0, 0, 1, 1, 2, 2], res['raw_inputs']) - - def test_event_predictions_to_ns_multitrack_ties(self): - predictions = [ - { - 'raw_inputs': [0, 0], - 'start_time': 0.0, - 'est_tokens': [613, # no tied notes - 20, 517, 356, 160], - }, - { - 'raw_inputs': [1, 1], - 'start_time': 0.4, - 'est_tokens': [517, 160, 613, # tied note - 20, 356, 399], - }, - { - 'raw_inputs': [2, 2], - 'start_time': 0.8, - 'est_tokens': [613] # no tied notes, causing active note to end - }, - ] - expected_ns = note_seq.NoteSequence(ticks_per_quarter=220) - expected_ns.notes.add( - pitch=42, - velocity=127, - start_time=0.60, - end_time=0.61, - is_drum=True, - instrument=9) - expected_ns.notes.add( - pitch=59, - velocity=127, - start_time=0.20, - end_time=0.80, - program=32) - expected_ns.total_time = 0.80 - - codec = event_codec.Codec( - max_shift_steps=100, - steps_per_second=100, - event_ranges=[ - event_codec.EventRange('pitch', note_seq.MIN_MIDI_PITCH, - note_seq.MAX_MIDI_PITCH), - event_codec.EventRange('velocity', 0, 127), - event_codec.EventRange('drum', note_seq.MIN_MIDI_PITCH, - note_seq.MAX_MIDI_PITCH), - event_codec.EventRange('program', note_seq.MIN_MIDI_PROGRAM, - note_seq.MAX_MIDI_PROGRAM), - event_codec.EventRange('tie', 0, 0) - ]) - res = metrics_utils.event_predictions_to_ns( - predictions, codec=codec, - encoding_spec=note_sequences.NoteEncodingWithTiesSpec) - self.assertProtoEquals(expected_ns, 
res['est_ns']) - self.assertEqual(0, res['est_invalid_events']) - self.assertEqual(0, res['est_dropped_events']) - np.testing.assert_array_equal([0, 0, 1, 1, 2, 2], res['raw_inputs']) - - def test_frame_metrics(self): - ref = np.zeros(shape=(128, 5)) - est = np.zeros(shape=(128, 5)) - - # one overlapping note, two false positives, two false negatives - ref[10, 0] = 127 - ref[10, 1] = 127 - ref[10, 2] = 127 - - est[10, 2] = 127 - est[10, 3] = 127 - est[10, 4] = 127 - - prec, rec, _ = metrics_utils.frame_metrics(ref, est, velocity_threshold=1) - np.testing.assert_approx_equal(prec, 1/3) - np.testing.assert_approx_equal(rec, 1/3) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/juuxn/SimpleRVC/infer_web.py b/spaces/juuxn/SimpleRVC/infer_web.py deleted file mode 100644 index 78eed485c11719f5c6de17768f630c965fe5420d..0000000000000000000000000000000000000000 --- a/spaces/juuxn/SimpleRVC/infer_web.py +++ /dev/null @@ -1,201 +0,0 @@ -from vc_infer_pipeline import VC -from myutils import Audio -from infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from fairseq import checkpoint_utils -from config import Config -import torch -import numpy as np -import traceback -import os -import sys -import warnings - -now_dir = os.getcwd() -sys.path.append(now_dir) -os.makedirs(os.path.join(now_dir, "audios"), exist_ok=True) -os.makedirs(os.path.join(now_dir, "audio-outputs"), exist_ok=True) -os.makedirs(os.path.join(now_dir, "weights"), exist_ok=True) -warnings.filterwarnings("ignore") -torch.manual_seed(114514) - -config = Config() - -hubert_model = None -weight_root = "weights" - -def load_hubert(): - # Determinar si existe una tarjeta N que pueda usarse para entrenar y acelerar la inferencia. 
- global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def vc_single( - sid, - input_audio_path0, - input_audio_path1, - f0_up_key, - f0_file, - f0_method, - file_index, - file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - crepe_hop_length, -): - global tgt_sr, net_g, vc, hubert_model, version - if input_audio_path0 is None or input_audio_path0 is None: - return "You need to upload an audio", None - f0_up_key = int(f0_up_key) - try: - if input_audio_path0 == "": - audio = Audio.load_audio(input_audio_path1, 16000) - else: - audio = Audio.load_audio(input_audio_path0, 16000) - - audio_max = np.abs(audio).max() / 0.95 - if audio_max > 1: - audio /= audio_max - times = [0, 0, 0] - if not hubert_model: - load_hubert() - if_f0 = cpt.get("f0", 1) - file_index = ( - ( - file_index.strip(" ") - .strip('"') - .strip("\n") - .strip('"') - .strip(" ") - .replace("trained", "added") - ) - if file_index != "" - else file_index2 - ) - - audio_opt = vc.pipeline( - hubert_model, - net_g, - sid, - audio, - input_audio_path1, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - crepe_hop_length, - f0_file=f0_file, - ) - if tgt_sr != resample_sr >= 16000: - tgt_sr = resample_sr - index_info = ( - "Using index:%s." % file_index - if os.path.exists(file_index) - else "Index not used." 
- ) - print(index_info) - return "Success.\n %s\nTime:\n npy:%ss, f0:%ss, infer:%ss" % ( - index_info, - times[0], - times[1], - times[2], - ), (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - -def get_vc(model_name): - global tgt_sr, net_g, vc, cpt, version - - # Comprobar si se pasó uno o varios modelos - if model_name == "" or model_name == []: - global hubert_model - if hubert_model is not None: # 考虑到轮询, 需要加个判断看是否 sid 是由有模型切换到无模型的 - print("Limpiar caché") - del net_g, vc, hubert_model, tgt_sr # ,cpt - hubert_model = net_g = vc = hubert_model = tgt_sr = None - - # Si hay una GPU disponible, libera la memoria de la GPU - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Bloque de abajo no limpia completamente - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - - del net_g, cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - cpt = None - return {"success": False, "message": "No se proporcionó un sid"} - - person = "%s/%s" % (weight_root, model_name) - print("Cargando %s" % person) - cpt = torch.load(person, map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=config.is_half) - else: - net_g = 
SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g.enc_q - - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) \ No newline at end of file diff --git a/spaces/keras-io/MelGAN-spectrogram-inversion/app.py b/spaces/keras-io/MelGAN-spectrogram-inversion/app.py deleted file mode 100644 index 2074b1b21efca68f4737c9b0b4904dfa86905a18..0000000000000000000000000000000000000000 --- a/spaces/keras-io/MelGAN-spectrogram-inversion/app.py +++ /dev/null @@ -1,115 +0,0 @@ -from huggingface_hub import from_pretrained_keras -import numpy as np -import tensorflow as tf -from tensorflow.keras import layers -import tensorflow_io as tfio - -import gradio as gr -import librosa -import librosa.display -import matplotlib.pyplot as plt - -class MelSpec(layers.Layer): - def __init__( - self, - frame_length=1024, - frame_step=256, - fft_length=None, - sampling_rate=22050, - num_mel_channels=80, - freq_min=125, - freq_max=7600, - **kwargs, - ): - super().__init__(**kwargs) - self.frame_length = frame_length - self.frame_step = frame_step - self.fft_length = fft_length - self.sampling_rate = sampling_rate - self.num_mel_channels = num_mel_channels - self.freq_min = freq_min - self.freq_max = freq_max - self.mel_filterbank = tf.signal.linear_to_mel_weight_matrix( - num_mel_bins=self.num_mel_channels, - num_spectrogram_bins=self.frame_length // 2 + 1, - sample_rate=self.sampling_rate, - lower_edge_hertz=self.freq_min, - upper_edge_hertz=self.freq_max, - ) - - def call(self, audio): - stft = tf.signal.stft( - tf.squeeze(audio, -1), - self.frame_length, - self.frame_step, - self.fft_length, - pad_end=True, - ) - - # Taking the magnitude of the STFT output - magnitude = tf.abs(stft) - - # Multiplying the Mel-filterbank with the magnitude and scaling it using the db scale - mel = tf.matmul(tf.square(magnitude), self.mel_filterbank) - log_mel_spec = 
tfio.audio.dbscale(mel, top_db=80) - return log_mel_spec - - - def get_config(self): - config = super(MelSpec, self).get_config() - config.update( - { - "frame_length": self.frame_length, - "frame_step": self.frame_step, - "fft_length": self.fft_length, - "sampling_rate": self.sampling_rate, - "num_mel_channels": self.num_mel_channels, - "freq_min": self.freq_min, - "freq_max": self.freq_max, - } - ) - return config - -model = from_pretrained_keras("keras-io/MelGAN-spectrogram-inversion") - -def inference(audio, model): - input, sr = librosa.load(audio) - # input, sr = audio - x = tf.expand_dims(input, axis=-1) - mel = MelSpec()(x) - audio_sample = tf.expand_dims(mel, axis=0) - pred = model.predict(audio_sample, batch_size=1, verbose=0) - return input, pred.squeeze(), sr - -def predict(audio): - x, x_pred, sr = inference(audio, model) - fig, ax = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(10, 8), dpi=120) - D = librosa.amplitude_to_db(np.abs(librosa.stft(x)), ref=np.max) - img = librosa.display.specshow(D, y_axis='linear', x_axis='time', - sr=sr, ax=ax[0]) - ax[0].set(title='Spectrogram of Original sample audio') - ax[0].label_outer() - - D = librosa.amplitude_to_db(np.abs(librosa.stft(x_pred)), ref=np.max) - img = librosa.display.specshow(D, y_axis='linear', x_axis='time', - sr=sr, ax=ax[1]) - ax[1].set(title='Spectrogram of synthesis sample audio ') - ax[1].label_outer() - return plt.gcf() - -inputs = [ - gr.Audio(source = "upload", label='Upload audio file', type="filepath"), -] - -examples = ["sample_1.wav", "sample_2.wav"] - -gr.Interface( - fn=predict, - title="MelGAN-based spectrogram inversion", - description = "Inversion of audio from mel-spectrograms using the MelGAN architecture and feature matching", - inputs=inputs, - examples=examples, - outputs=gr.Plot(), - cache_examples=False, - article = "Author: Vu Minh Chien. 
Based on the keras example from Darshan Deshpande", -).launch(debug=False, enable_queue=True) \ No newline at end of file diff --git a/spaces/kevinwang676/FreeVC-en/app.py b/spaces/kevinwang676/FreeVC-en/app.py deleted file mode 100644 index 54468d9838ded68fe3f682b58061b3d064419002..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/FreeVC-en/app.py +++ /dev/null @@ -1,484 +0,0 @@ -import os -import torch -import librosa -import gradio as gr -from scipy.io.wavfile import write -from transformers import WavLMModel - -import utils -from models import SynthesizerTrn -from mel_processing import mel_spectrogram_torch -from speaker_encoder.voice_encoder import SpeakerEncoder - -import time -from textwrap import dedent - -import mdtex2html -from loguru import logger -from transformers import AutoModel, AutoTokenizer - -from tts_voice import tts_order_voice -import edge_tts -import tempfile -import anyio -import asyncio - - -''' -def get_wavlm(): - os.system('gdown https://drive.google.com/uc?id=12-cB34qCTvByWT-QtOcZaqwwO21FLSqU') - shutil.move('WavLM-Large.pt', 'wavlm') -''' - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -smodel = SpeakerEncoder('speaker_encoder/ckpt/pretrained_bak_5805000.pt') - -print("Loading FreeVC(24k)...") -hps = utils.get_hparams_from_file("configs/freevc-24.json") -freevc_24 = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).to(device) -_ = freevc_24.eval() -_ = utils.load_checkpoint("checkpoints/freevc-24.pth", freevc_24, None) - -print("Loading WavLM for content...") -cmodel = WavLMModel.from_pretrained("microsoft/wavlm-large").to(device) - -def convert(model, src, tgt): - with torch.no_grad(): - # tgt - wav_tgt, _ = librosa.load(tgt, sr=hps.data.sampling_rate) - wav_tgt, _ = librosa.effects.trim(wav_tgt, top_db=20) - if model == "FreeVC" or model == "FreeVC (24kHz)": - g_tgt = smodel.embed_utterance(wav_tgt) - g_tgt = 
torch.from_numpy(g_tgt).unsqueeze(0).to(device) - else: - wav_tgt = torch.from_numpy(wav_tgt).unsqueeze(0).to(device) - mel_tgt = mel_spectrogram_torch( - wav_tgt, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - # src - wav_src, _ = librosa.load(src, sr=hps.data.sampling_rate) - wav_src = torch.from_numpy(wav_src).unsqueeze(0).to(device) - c = cmodel(wav_src).last_hidden_state.transpose(1, 2).to(device) - # infer - if model == "FreeVC": - audio = freevc.infer(c, g=g_tgt) - elif model == "FreeVC-s": - audio = freevc_s.infer(c, mel=mel_tgt) - else: - audio = freevc_24.infer(c, g=g_tgt) - audio = audio[0][0].data.cpu().float().numpy() - if model == "FreeVC" or model == "FreeVC-s": - write("out.wav", hps.data.sampling_rate, audio) - else: - write("out.wav", 24000, audio) - out = "out.wav" - return out - -# GLM2 - -#language_dict = tts_order_voice - -tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) -voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - -# fix timezone in Linux -os.environ["TZ"] = "Asia/Shanghai" -try: - time.tzset() # type: ignore # pylint: disable=no-member -except Exception: - # Windows - logger.warning("Windows, cant run time.tzset()") - -# model_name = "THUDM/chatglm2-6b" -model_name = "THUDM/chatglm2-6b-int4" - -RETRY_FLAG = False - -tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) - -# model = AutoModel.from_pretrained(model_name, trust_remote_code=True).cuda() - -# 4/8 bit -# model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).quantize(4).cuda() - -has_cuda = torch.cuda.is_available() - -# has_cuda = False # force cpu - -if has_cuda: - model_glm = ( - AutoModel.from_pretrained(model_name, trust_remote_code=True).cuda().half() - ) # 3.92G -else: - model_glm = AutoModel.from_pretrained( - model_name, 
trust_remote_code=True - ).float() # .float() .half().float() - -model_glm = model_glm.eval() - -_ = """Override Chatbot.postprocess""" - - -def postprocess(self, y): - if y is None: - return [] - for i, (message, response) in enumerate(y): - y[i] = ( - None if message is None else mdtex2html.convert((message)), - None if response is None else mdtex2html.convert(response), - ) - return y - - -gr.Chatbot.postprocess = postprocess - - -def parse_text(text): - """copy from https://github.com/GaiZhenbiao/ChuanhuChatGPT/""" - lines = text.split("\n") - lines = [line for line in lines if line != ""] - count = 0 - for i, line in enumerate(lines): - if "```" in line: - count += 1 - items = line.split("`") - if count % 2 == 1: - lines[i] = f'
<pre><code class="language-{items[-1]}">'
-            else:
-                lines[i] = "<br></code></pre>
" - else: - if i > 0: - if count % 2 == 1: - line = line.replace("`", r"\`") - line = line.replace("<", "&lt;") - line = line.replace(">", "&gt;") - line = line.replace(" ", "&nbsp;") - line = line.replace("*", "&ast;") - line = line.replace("_", "&lowbar;") - line = line.replace("-", "&#45;") - line = line.replace(".", "&#46;") - line = line.replace("!", "&#33;") - line = line.replace("(", "&#40;") - line = line.replace(")", "&#41;") - line = line.replace("$", "&#36;") - lines[i] = "<br>
    " + line - text = "".join(lines) - return text - - -def predict( - RETRY_FLAG, input, chatbot, max_length, top_p, temperature, history, past_key_values -): - try: - chatbot.append((parse_text(input), "")) - except Exception as exc: - logger.error(exc) - logger.debug(f"{chatbot=}") - _ = """ - if chatbot: - chatbot[-1] = (parse_text(input), str(exc)) - yield chatbot, history, past_key_values - # """ - yield chatbot, history, past_key_values - - for response, history, past_key_values in model_glm.stream_chat( - tokenizer, - input, - history, - past_key_values=past_key_values, - return_past_key_values=True, - max_length=max_length, - top_p=top_p, - temperature=temperature, - ): - chatbot[-1] = (parse_text(input), parse_text(response)) - # chatbot[-1][-1] = parse_text(response) - - yield chatbot, history, past_key_values, parse_text(response) - - -def trans_api(input, max_length=4096, top_p=0.8, temperature=0.2): - if max_length < 10: - max_length = 4096 - if top_p < 0.1 or top_p > 1: - top_p = 0.85 - if temperature <= 0 or temperature > 1: - temperature = 0.01 - try: - res, _ = model_glm.chat( - tokenizer, - input, - history=[], - past_key_values=None, - max_length=max_length, - top_p=top_p, - temperature=temperature, - ) - # logger.debug(f"{res=} \n{_=}") - except Exception as exc: - logger.error(f"{exc=}") - res = str(exc) - - return res - - -def reset_user_input(): - return gr.update(value="") - - -def reset_state(): - return [], [], None, "" - - -# Delete last turn -def delete_last_turn(chat, history): - if chat and history: - chat.pop(-1) - history.pop(-1) - return chat, history - - -# Regenerate response -def retry_last_answer( - user_input, chatbot, max_length, top_p, temperature, history, past_key_values -): - if chatbot and history: - # Removing the previous conversation from chat - chatbot.pop(-1) - # Setting up a flag to capture a retry - RETRY_FLAG = True - # Getting last message from user - user_input = history[-1][0] - # Removing bot response from 
the history - history.pop(-1) - - yield from predict( - RETRY_FLAG, # type: ignore - user_input, - chatbot, - max_length, - top_p, - temperature, - history, - past_key_values, - ) - -# print - -def print(text): - return text - -# TTS - -async def text_to_speech_edge(text, voice): - - communicate = edge_tts.Communicate(text, "-".join(voice.split('-')[:-1])) - with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as tmp_file: - tmp_path = tmp_file.name - - await communicate.save(tmp_path) - - return tmp_path - - -with gr.Blocks(title="ChatGLM2-6B-int4", theme=gr.themes.Soft(text_size="sm")) as demo: - gr.HTML("
    " - "

    🥳💕🎶 - ChatGLM2 + Voice Cloning

    " - "
    ") - gr.Markdown("##
    🥳 - Chat with any character you like through ChatGLM2-6B and voice cloning in real time
    ") - gr.Markdown("##
    🌊 - Powered by [TalktalkAI](http://www.talktalkai.com)
    ") - gr.Markdown("##
    ⭐ - If you like the this app, please star my [Github repo](https://github.com/KevinWang676/ChatGLM2-Voice-Cloning)
    ") - - with gr.Accordion("📒 Info", open=False): - _ = f""" Some parameters of ChatGLM2: - * Low temperature: responses will be more deterministic and focused; High temperature: responses more creative. - * Suggested temperatures -- translation: up to 0.3; chatting: > 0.4 - * Top P controls dynamic vocabulary selection based on context. - """ - gr.Markdown(dedent(_)) - chatbot = gr.Chatbot(height=300) - with gr.Row(): - with gr.Column(scale=4): - with gr.Column(scale=12): - user_input = gr.Textbox( - label="Chat with ChatGLM2 here", - placeholder="Enter something here...", - ) - RETRY_FLAG = gr.Checkbox(value=False, visible=False) - with gr.Column(min_width=32, scale=1): - with gr.Row(): - submitBtn = gr.Button("Chat now", variant="primary") - deleteBtn = gr.Button("Delete last turn", variant="secondary") - retryBtn = gr.Button("Regenerate", variant="secondary") - - with gr.Accordion("🔧 Settings", open=False): - with gr.Row(): - emptyBtn = gr.Button("Clear History") - max_length = gr.Slider( - 0, - 32768, - value=8192, - step=1.0, - label="Maximum length", - interactive=True, - ) - top_p = gr.Slider( - 0, 1, value=0.85, step=0.01, label="Top P", interactive=True - ) - temperature = gr.Slider( - 0.01, 1, value=0.95, step=0.01, label="Temperature", interactive=True - ) - - - with gr.Row(): - test1 = gr.Textbox(label="Response from ChatGLM2 (you can edit the content)", lines = 3) - with gr.Column(): - language = gr.Dropdown(choices=voices, value="en-US-AnaNeural-Female", label="Please select a voice") - tts_btn = gr.Button("Generate using Edge-TTS", variant="primary") - output_audio = gr.Audio(type="filepath", label="Audio generated by Edge-TTS", interactive=False) - - tts_btn.click(text_to_speech_edge, inputs=[test1, language], outputs=[output_audio]) - - with gr.Row(): - model_choice = gr.Dropdown(choices=["FreeVC", "FreeVC-s", "FreeVC (24kHz)"], value="FreeVC (24kHz)", label="Model", visible=False) - audio1 = output_audio - audio2 = gr.Audio(label="Upload 
reference audio for voice cloning (~5s)", type='filepath') - clone_btn = gr.Button("Generate using FreeVC", variant="primary") - audio_cloned = gr.Audio(label="Generated audio in a custom voice", type='filepath') - - clone_btn.click(convert, inputs=[model_choice, audio1, audio2], outputs=[audio_cloned]) - - history = gr.State([]) - past_key_values = gr.State(None) - - user_input.submit( - predict, - [ - RETRY_FLAG, - user_input, - chatbot, - max_length, - top_p, - temperature, - history, - past_key_values, - ], - [chatbot, history, past_key_values, test1], - show_progress="full", - ) - submitBtn.click( - predict, - [ - RETRY_FLAG, - user_input, - chatbot, - max_length, - top_p, - temperature, - history, - past_key_values, - ], - [chatbot, history, past_key_values, test1], - show_progress="full", - api_name="predict", - ) - submitBtn.click(reset_user_input, [], [user_input]) - - emptyBtn.click( - reset_state, outputs=[chatbot, history, past_key_values, test1], show_progress="full" - ) - - retryBtn.click( - retry_last_answer, - inputs=[ - user_input, - chatbot, - max_length, - top_p, - temperature, - history, - past_key_values, - ], - # outputs = [chatbot, history, last_user_message, user_message] - outputs=[chatbot, history, past_key_values, test1], - ) - deleteBtn.click(delete_last_turn, [chatbot, history], [chatbot, history]) - - with gr.Accordion("📔 Prompts", open=False): - etext = """In America, where cars are an important part of the national psyche, a decade ago people had suddenly started to drive less, which had not happened since the oil shocks of the 1970s. """ - examples = gr.Examples( - examples=[ - ["Explain the plot of Cinderella in a sentence."], - [ - "How long does it take to become proficient in French, and what are the best methods for retaining information?" 
- ], - ["What are some common mistakes to avoid when writing code?"], - ["Build a prompt to generate a beautiful portrait of a horse"], - ["Suggest four metaphors to describe the benefits of AI"], - ["Write a pop song about leaving home for the sandy beaches."], - ["Write a summary demonstrating my ability to tame lions"], - ["鲁迅和周树人什么关系"], - ["从前有一头牛,这头牛后面有什么?"], - ["正无穷大加一大于正无穷大吗?"], - ["正无穷大加正无穷大大于正无穷大吗?"], - ["-2的平方根等于什么"], - ["树上有5只鸟,猎人开枪打死了一只。树上还有几只鸟?"], - ["树上有11只鸟,猎人开枪打死了一只。树上还有几只鸟?提示:需考虑鸟可能受惊吓飞走。"], - ["鲁迅和周树人什么关系 用英文回答"], - ["以红楼梦的行文风格写一张委婉的请假条。不少于320字。"], - [f"{etext} 翻成中文,列出3个版本"], - [f"{etext} \n 翻成中文,保留原意,但使用文学性的语言。不要写解释。列出3个版本"], - ["js 判断一个数是不是质数"], - ["js 实现python 的 range(10)"], - ["js 实现python 的 [*(range(10)]"], - ["假定 1 + 2 = 4, 试求 7 + 8"], - ["Erkläre die Handlung von Cinderella in einem Satz."], - ["Erkläre die Handlung von Cinderella in einem Satz. Auf Deutsch"], - ], - inputs=[user_input], - examples_per_page=30, - ) - - with gr.Accordion("For Chat/Translation API", open=False, visible=False): - input_text = gr.Text() - tr_btn = gr.Button("Go", variant="primary") - out_text = gr.Text() - tr_btn.click( - trans_api, - [input_text, max_length, top_p, temperature], - out_text, - # show_progress="full", - api_name="tr", - ) - _ = """ - input_text.submit( - trans_api, - [input_text, max_length, top_p, temperature], - out_text, - show_progress="full", - api_name="tr1", - ) - # """ - - gr.Markdown("###
<center>❗ Please do not generate content that could infringe upon the rights of, or cause harm to, individuals or organizations.</center>") - gr.Markdown("### <center>💡 - How to use this app: after sending your questions to ChatGLM2, click “Chat now”, “Generate using Edge-TTS”, and “Generate using FreeVC” in turn.</center>
    ") - gr.HTML(''' - - ''') - - -demo.queue().launch(show_error=True, debug=True) diff --git a/spaces/kevinwang676/OpenAI-TTS-Voice-Conversion/app.py b/spaces/kevinwang676/OpenAI-TTS-Voice-Conversion/app.py deleted file mode 100644 index a2230ba731f3f884bedaf5db339b7ac71dd62e04..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/OpenAI-TTS-Voice-Conversion/app.py +++ /dev/null @@ -1,143 +0,0 @@ -import gradio as gr -import os -import tempfile -from openai import OpenAI -from tts_voice import tts_order_voice -import edge_tts -import tempfile -import anyio - - -# Set an environment variable for key -#os.environ['OPENAI_API_KEY'] = os.environ.get('OPENAI_API_KEY') - -#client = OpenAI() # add api_key - -import torch -import torchaudio -import gradio as gr -from scipy.io import wavfile -from scipy.io.wavfile import write - -from speechbrain.pretrained import SpectralMaskEnhancement - -enhance_model = SpectralMaskEnhancement.from_hparams( - source="speechbrain/metricgan-plus-voicebank", - savedir="pretrained_models/metricgan-plus-voicebank", -) - -knn_vc = torch.hub.load('bshall/knn-vc', 'knn_vc', prematched=True, trust_repo=True, pretrained=True, device='cpu') - -language_dict = tts_order_voice - -async def text_to_speech_edge(text, language_code): - voice = language_dict[language_code] - communicate = edge_tts.Communicate(text, voice) - with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as tmp_file: - tmp_path = tmp_file.name - - await communicate.save(tmp_path) - - return "语音合成完成:{}".format(text), tmp_path - - -def voice_change(audio_in, audio_ref): - samplerate1, data1 = wavfile.read(audio_in) - samplerate2, data2 = wavfile.read(audio_ref) - write("./audio_in.wav", samplerate1, data1) - write("./audio_ref.wav", samplerate2, data2) - - query_seq = knn_vc.get_features("./audio_in.wav") - matching_set = knn_vc.get_matching_set(["./audio_ref.wav"]) - out_wav = knn_vc.match(query_seq, matching_set, topk=4) - torchaudio.save('output.wav', 
out_wav[None], 16000)
-    noisy = enhance_model.load_audio(
-        'output.wav'
-    ).unsqueeze(0)
-    enhanced = enhance_model.enhance_batch(noisy, lengths=torch.tensor([1.]))
-    torchaudio.save('enhanced.wav', enhanced.cpu(), 16000)
-    return 'enhanced.wav'
-
-
-def tts(text, model, voice, api_key):
-    if len(text)>300:
-        raise gr.Error('您输入的文本字符多于300个,请缩短您的文本')
-    if api_key == '':
-        raise gr.Error('Please enter your OpenAI API Key')
-    else:
-        try:
-            client = OpenAI(api_key=api_key)
-
-            response = client.audio.speech.create(
-                model=model, # "tts-1","tts-1-hd"
-                voice=voice, # 'alloy', 'echo', 'fable', 'onyx', 'nova', 'shimmer'
-                input=text,
-            )
-
-        except Exception as error:
-            # Log the underlying exception before surfacing a user-facing error
-            print(str(error))
-            raise gr.Error("An error occurred while generating speech. Please check your API key and try again.")
-
-    # Create a temp file to save the audio
-    with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as temp_file:
-        temp_file.write(response.content)
-
-    # Get the file path of the temp file
-    temp_file_path = temp_file.name
-
-    return temp_file_path
-
-
-app = gr.Blocks()
-
-with app:
-    gr.Markdown("#
    🌟 - OpenAI TTS + AI变声
    ") - gr.Markdown("###
    🎶 地表最强文本转语音模型 + 3秒实时AI变声,支持中文!Powered by [OpenAI TTS](https://platform.openai.com/docs/guides/text-to-speech) and [KNN-VC](https://github.com/bshall/knn-vc)
    ") - gr.Markdown("###
    🌊 更多精彩应用,敬请关注[滔滔AI](http://www.talktalkai.com);滔滔AI,为爱滔滔!💕
    ")
-    with gr.Tab("🤗 OpenAI TTS"):
-        with gr.Row(variant='panel'):
-            api_key = gr.Textbox(type='password', label='OpenAI API Key', value="", placeholder='请在此填写您的OpenAI API Key')
-            model = gr.Dropdown(choices=['tts-1','tts-1-hd'], label='请选择模型(tts-1推理更快,tts-1-hd音质更好)', value='tts-1')
-            voice = gr.Dropdown(choices=['alloy', 'echo', 'fable', 'onyx', 'nova', 'shimmer'], label='请选择一个说话人', value='alloy')
-        with gr.Row():
-            with gr.Column():
-                inp_text = gr.Textbox(label="请填写您想生成的文本(中英文皆可)", placeholder="想说却还没说的 还很多 攒着是因为想写成歌", lines=5)
-                btn_text = gr.Button("一键开启真实拟声吧", variant="primary")
-
-            with gr.Column():
-                inp1 = gr.Audio(type="filepath", label="OpenAI TTS真实拟声", interactive=False)
-                inp2 = gr.Audio(type="filepath", label="请上传AI变声的参照音频(决定变声后的语音音色)")
-                btn1 = gr.Button("一键开启AI变声吧", variant="primary")
-            with gr.Column():
-                out1 = gr.Audio(type="filepath", label="AI变声后的专属音频")
-        btn_text.click(tts, [inp_text, model, voice, api_key], inp1)
-        btn1.click(voice_change, [inp1, inp2], out1)
-    with gr.Tab("⚡ Edge TTS"):
-        with gr.Row():
-            input_text = gr.Textbox(lines=5, placeholder="想说却还没说的 还很多 攒着是因为想写成歌", label="请填写您想生成的文本(中英文皆可)")
-            default_language = list(language_dict.keys())[15]
-            language = gr.Dropdown(choices=list(language_dict.keys()), value=default_language, label="请选择文本对应的语言")
-            btn_edge = gr.Button("一键开启真实拟声吧", variant="primary")
-            output_text = gr.Textbox(label="输出文本", visible=False)
-            output_audio = gr.Audio(type="filepath", label="Edge TTS真实拟声")
-
-        with gr.Row():
-            inp_vc = gr.Audio(type="filepath", label="请上传AI变声的参照音频(决定变声后的语音音色)")
-            btn_vc = gr.Button("一键开启AI变声吧", variant="primary")
-            out_vc = gr.Audio(type="filepath", label="AI变声后的专属音频")
-
-        btn_edge.click(text_to_speech_edge, [input_text, language], [output_text, output_audio])
-        btn_vc.click(voice_change, [output_audio, inp_vc], out_vc)
-
-
-    gr.Markdown("###
    注意❗:请不要生成会对个人以及组织造成侵害的内容,此程序仅供科研、学习及个人娱乐使用。Get your OpenAI API Key [here](https://platform.openai.com/api-keys).
    ") - gr.HTML(''' - - ''') - -app.launch(show_error=True) - diff --git a/spaces/kevinwang676/VALLE/utils/generation.py b/spaces/kevinwang676/VALLE/utils/generation.py deleted file mode 100644 index 30ed3164c69b7fc864edc77b30228d5ae279ca54..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VALLE/utils/generation.py +++ /dev/null @@ -1,256 +0,0 @@ -import os -import torch -import gdown -import logging -import langid -langid.set_languages(['en', 'zh', 'ja']) - -import pathlib -import platform -if platform.system().lower() == 'windows': - temp = pathlib.PosixPath - pathlib.PosixPath = pathlib.WindowsPath -elif platform.system().lower() == 'linux': - temp = pathlib.WindowsPath - pathlib.WindowsPath = pathlib.PosixPath - -import numpy as np -from data.tokenizer import ( - AudioTokenizer, - tokenize_audio, -) -from data.collation import get_text_token_collater -from models.vallex import VALLE -from utils.g2p import PhonemeBpeTokenizer -from utils.sentence_cutter import split_text_into_sentences - -from macros import * - -device = torch.device("cpu") -if torch.cuda.is_available(): - device = torch.device("cuda", 0) - -url = 'https://drive.google.com/file/d/10gdQWvP-K_e1undkvv0p2b7SU6I4Egyl/view?usp=sharing' - -checkpoints_dir = "./checkpoints/" - -model_checkpoint_name = "vallex-checkpoint.pt" - -model = None - -codec = None - -text_tokenizer = PhonemeBpeTokenizer(tokenizer_path="./utils/g2p/bpe_69.json") -text_collater = get_text_token_collater() - -def preload_models(): - global model, codec - if not os.path.exists(checkpoints_dir): os.mkdir(checkpoints_dir) - if not os.path.exists(os.path.join(checkpoints_dir, model_checkpoint_name)): - gdown.download(id="10gdQWvP-K_e1undkvv0p2b7SU6I4Egyl", output=os.path.join(checkpoints_dir, model_checkpoint_name), quiet=False) - # VALL-E - model = VALLE( - N_DIM, - NUM_HEAD, - NUM_LAYERS, - norm_first=True, - add_prenet=False, - prefix_mode=PREFIX_MODE, - share_embedding=True, - nar_scale_factor=1.0, - 
prepend_bos=True, - num_quantizers=NUM_QUANTIZERS, - ).to(device) - checkpoint = torch.load(os.path.join(checkpoints_dir, model_checkpoint_name), map_location='cpu') - missing_keys, unexpected_keys = model.load_state_dict( - checkpoint["model"], strict=True - ) - assert not missing_keys - model.eval() - - # Encodec - codec = AudioTokenizer(device) - -@torch.no_grad() -def generate_audio(text, prompt=None, language='auto', accent='no-accent'): - global model, codec, text_tokenizer, text_collater - text = text.replace("\n", "").strip(" ") - # detect language - if language == "auto": - language = langid.classify(text)[0] - lang_token = lang2token[language] - lang = token2lang[lang_token] - text = lang_token + text + lang_token - - # load prompt - if prompt is not None: - prompt_path = prompt - if not os.path.exists(prompt_path): - prompt_path = "./presets/" + prompt + ".npz" - if not os.path.exists(prompt_path): - prompt_path = "./customs/" + prompt + ".npz" - if not os.path.exists(prompt_path): - raise ValueError(f"Cannot find prompt {prompt}") - prompt_data = np.load(prompt_path) - audio_prompts = prompt_data['audio_tokens'] - text_prompts = prompt_data['text_tokens'] - lang_pr = prompt_data['lang_code'] - lang_pr = code2lang[int(lang_pr)] - - # numpy to tensor - audio_prompts = torch.tensor(audio_prompts).type(torch.int32).to(device) - text_prompts = torch.tensor(text_prompts).type(torch.int32) - else: - audio_prompts = torch.zeros([1, 0, NUM_QUANTIZERS]).type(torch.int32).to(device) - text_prompts = torch.zeros([1, 0]).type(torch.int32) - lang_pr = lang if lang != 'mix' else 'en' - - enroll_x_lens = text_prompts.shape[-1] - logging.info(f"synthesize text: {text}") - phone_tokens, langs = text_tokenizer.tokenize(text=f"_{text}".strip()) - text_tokens, text_tokens_lens = text_collater( - [ - phone_tokens - ] - ) - text_tokens = torch.cat([text_prompts, text_tokens], dim=-1) - text_tokens_lens += enroll_x_lens - # accent control - lang = lang if accent == "no-accent" 
else token2lang[langdropdown2token[accent]]
-    encoded_frames = model.inference(
-        text_tokens.to(device),
-        text_tokens_lens.to(device),
-        audio_prompts,
-        enroll_x_lens=enroll_x_lens,
-        top_k=-100,
-        temperature=1,
-        prompt_language=lang_pr,
-        text_language=langs if accent == "no-accent" else lang,
-    )
-    samples = codec.decode(
-        [(encoded_frames.transpose(2, 1), None)]
-    )
-
-    return samples[0][0].cpu().numpy()
-
-@torch.no_grad()
-def generate_audio_from_long_text(text, prompt=None, language='auto', accent='no-accent', mode='sliding-window'):
-    """
-    For long audio generation, two modes are available.
-    fixed-prompt: keeps using the same user-provided prompt and generates audio sentence by sentence.
-    sliding-window: uses the last generated sentence as the prompt for the next one, but may drift from the original speaker's voice.
-    """
-    global model, codec, text_tokenizer, text_collater
-    if prompt is None or prompt == "":
-        mode = 'sliding-window'  # If no prompt is given, use sliding-window mode
-    sentences = split_text_into_sentences(text)
-    # detect language
-    if language == "auto":
-        language = langid.classify(text)[0]
-
-    # if initial prompt is given, encode it
-    if prompt is not None and prompt != "":
-        prompt_path = prompt
-        if not os.path.exists(prompt_path):
-            prompt_path = "./presets/" + prompt + ".npz"
-        if not os.path.exists(prompt_path):
-            prompt_path = "./customs/" + prompt + ".npz"
-        if not os.path.exists(prompt_path):
-            raise ValueError(f"Cannot find prompt {prompt}")
-        prompt_data = np.load(prompt_path)
-        audio_prompts = prompt_data['audio_tokens']
-        text_prompts = prompt_data['text_tokens']
-        lang_pr = prompt_data['lang_code']
-        lang_pr = code2lang[int(lang_pr)]
-
-        # numpy to tensor
-        audio_prompts = torch.tensor(audio_prompts).type(torch.int32).to(device)
-        text_prompts = torch.tensor(text_prompts).type(torch.int32)
-    else:
-        audio_prompts = torch.zeros([1, 0, NUM_QUANTIZERS]).type(torch.int32).to(device)
-
text_prompts = torch.zeros([1, 0]).type(torch.int32) - lang_pr = language if language != 'mix' else 'en' - if mode == 'fixed-prompt': - complete_tokens = torch.zeros([1, NUM_QUANTIZERS, 0]).type(torch.LongTensor).to(device) - for text in sentences: - text = text.replace("\n", "").strip(" ") - if text == "": - continue - lang_token = lang2token[language] - lang = token2lang[lang_token] - text = lang_token + text + lang_token - - enroll_x_lens = text_prompts.shape[-1] - logging.info(f"synthesize text: {text}") - phone_tokens, langs = text_tokenizer.tokenize(text=f"_{text}".strip()) - text_tokens, text_tokens_lens = text_collater( - [ - phone_tokens - ] - ) - text_tokens = torch.cat([text_prompts, text_tokens], dim=-1) - text_tokens_lens += enroll_x_lens - # accent control - lang = lang if accent == "no-accent" else token2lang[langdropdown2token[accent]] - encoded_frames = model.inference( - text_tokens.to(device), - text_tokens_lens.to(device), - audio_prompts, - enroll_x_lens=enroll_x_lens, - top_k=-100, - temperature=1, - prompt_language=lang_pr, - text_language=langs if accent == "no-accent" else lang, - ) - complete_tokens = torch.cat([complete_tokens, encoded_frames.transpose(2, 1)], dim=-1) - samples = codec.decode( - [(complete_tokens, None)] - ) - return samples[0][0].cpu().numpy() - elif mode == "sliding-window": - complete_tokens = torch.zeros([1, NUM_QUANTIZERS, 0]).type(torch.LongTensor).to(device) - original_audio_prompts = audio_prompts - original_text_prompts = text_prompts - for text in sentences: - text = text.replace("\n", "").strip(" ") - if text == "": - continue - lang_token = lang2token[language] - lang = token2lang[lang_token] - text = lang_token + text + lang_token - - enroll_x_lens = text_prompts.shape[-1] - logging.info(f"synthesize text: {text}") - phone_tokens, langs = text_tokenizer.tokenize(text=f"_{text}".strip()) - text_tokens, text_tokens_lens = text_collater( - [ - phone_tokens - ] - ) - text_tokens = torch.cat([text_prompts, 
text_tokens], dim=-1) - text_tokens_lens += enroll_x_lens - # accent control - lang = lang if accent == "no-accent" else token2lang[langdropdown2token[accent]] - encoded_frames = model.inference( - text_tokens.to(device), - text_tokens_lens.to(device), - audio_prompts, - enroll_x_lens=enroll_x_lens, - top_k=-100, - temperature=1, - prompt_language=lang_pr, - text_language=langs if accent == "no-accent" else lang, - ) - complete_tokens = torch.cat([complete_tokens, encoded_frames.transpose(2, 1)], dim=-1) - if torch.rand(1) < 0.5: - audio_prompts = encoded_frames[:, :, -NUM_QUANTIZERS:] - text_prompts = text_tokens[:, enroll_x_lens:] - else: - audio_prompts = original_audio_prompts - text_prompts = original_text_prompts - samples = codec.decode( - [(complete_tokens, None)] - ) - return samples[0][0].cpu().numpy() - else: - raise ValueError(f"No such mode {mode}") \ No newline at end of file diff --git a/spaces/kidcoconut/spcstm_omdenasaudi_liverhccxai/lib/__init__.py b/spaces/kidcoconut/spcstm_omdenasaudi_liverhccxai/lib/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/datasets/ade20k.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/datasets/ade20k.py deleted file mode 100644 index efc8b4bb20c981f3db6df7eb52b3dc0744c94cc0..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/datasets/ade20k.py +++ /dev/null @@ -1,54 +0,0 @@ -# dataset settings -dataset_type = 'ADE20KDataset' -data_root = 'data/ade/ADEChallengeData2016' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -crop_size = (512, 512) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', reduce_zero_label=True), - dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)), - 
dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2048, 512), - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/training', - ann_dir='annotations/training', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline)) diff --git a/spaces/kukuhtw/AutoGPT/autogpt/memory/weaviate.py b/spaces/kukuhtw/AutoGPT/autogpt/memory/weaviate.py deleted file mode 100644 index 5408e9a97aa3594ad443448cfc31f2546a01eb09..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/autogpt/memory/weaviate.py +++ /dev/null @@ -1,127 +0,0 @@ -import uuid - -import weaviate -from weaviate import Client -from weaviate.embedded import EmbeddedOptions -from weaviate.util import generate_uuid5 - -from autogpt.config import Config -from autogpt.memory.base import MemoryProviderSingleton, get_ada_embedding - - -def default_schema(weaviate_index): - return { - "class": weaviate_index, - "properties": [ - { - "name": "raw_text", - "dataType": ["text"], - "description": "original text 
for the embedding", - } - ], - } - - -class WeaviateMemory(MemoryProviderSingleton): - def __init__(self, cfg): - auth_credentials = self._build_auth_credentials(cfg) - - url = f"{cfg.weaviate_protocol}://{cfg.weaviate_host}:{cfg.weaviate_port}" - - if cfg.use_weaviate_embedded: - self.client = Client( - embedded_options=EmbeddedOptions( - hostname=cfg.weaviate_host, - port=int(cfg.weaviate_port), - persistence_data_path=cfg.weaviate_embedded_path, - ) - ) - - print( - f"Weaviate Embedded running on: {url} with persistence path: {cfg.weaviate_embedded_path}" - ) - else: - self.client = Client(url, auth_client_secret=auth_credentials) - - self.index = WeaviateMemory.format_classname(cfg.memory_index) - self._create_schema() - - @staticmethod - def format_classname(index): - # weaviate uses capitalised index names - # The python client uses the following code to format - # index names before the corresponding class is created - if len(index) == 1: - return index.capitalize() - return index[0].capitalize() + index[1:] - - def _create_schema(self): - schema = default_schema(self.index) - if not self.client.schema.contains(schema): - self.client.schema.create_class(schema) - - def _build_auth_credentials(self, cfg): - if cfg.weaviate_username and cfg.weaviate_password: - return weaviate.AuthClientPassword( - cfg.weaviate_username, cfg.weaviate_password - ) - if cfg.weaviate_api_key: - return weaviate.AuthApiKey(api_key=cfg.weaviate_api_key) - else: - return None - - def add(self, data): - vector = get_ada_embedding(data) - - doc_uuid = generate_uuid5(data, self.index) - data_object = {"raw_text": data} - - with self.client.batch as batch: - batch.add_data_object( - uuid=doc_uuid, - data_object=data_object, - class_name=self.index, - vector=vector, - ) - - return f"Inserting data into memory at uuid: {doc_uuid}:\n data: {data}" - - def get(self, data): - return self.get_relevant(data, 1) - - def clear(self): - self.client.schema.delete_all() - - # weaviate does not yet 
have a neat way to just remove the items in an index - # without removing the entire schema, therefore we need to re-create it - # after a call to delete_all - self._create_schema() - - return "Obliterated" - - def get_relevant(self, data, num_relevant=5): - query_embedding = get_ada_embedding(data) - try: - results = ( - self.client.query.get(self.index, ["raw_text"]) - .with_near_vector({"vector": query_embedding, "certainty": 0.7}) - .with_limit(num_relevant) - .do() - ) - - if len(results["data"]["Get"][self.index]) > 0: - return [ - str(item["raw_text"]) for item in results["data"]["Get"][self.index] - ] - else: - return [] - - except Exception as err: - print(f"Unexpected error {err=}, {type(err)=}") - return [] - - def get_stats(self): - result = self.client.query.aggregate(self.index).with_meta_count().do() - class_data = result["data"]["Aggregate"][self.index] - - return class_data[0]["meta"] if class_data else {} diff --git a/spaces/kukuhtw/VToonify/vtoonify/model/stylegan/model.py b/spaces/kukuhtw/VToonify/vtoonify/model/stylegan/model.py deleted file mode 100644 index 7a4b00e52902d850b78dea3736324198eb32e075..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/VToonify/vtoonify/model/stylegan/model.py +++ /dev/null @@ -1,719 +0,0 @@ -import math -import random -import functools -import operator - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - -from model.stylegan.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d, conv2d_gradfix - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - 
kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer("kernel", kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True, dilation=1 ## modified - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - self.dilation = dilation ## modified - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = conv2d_gradfix.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - dilation=self.dilation, ## modified - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]}," - f" {self.weight.shape[2]}, 
stride={self.stride}, padding={self.padding}, dilation={self.dilation})" ## modified - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})" - ) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - fused=True, - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, 
in_channel, bias_init=1) - - self.demodulate = demodulate - self.fused = fused - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, " - f"upsample={self.upsample}, downsample={self.downsample})" - ) - - def forward(self, input, style, externalweight=None): - batch, in_channel, height, width = input.shape - - if not self.fused: - weight = self.scale * self.weight.squeeze(0) - style = self.modulation(style) - - if self.demodulate: - w = weight.unsqueeze(0) * style.view(batch, 1, in_channel, 1, 1) - dcoefs = (w.square().sum((2, 3, 4)) + 1e-8).rsqrt() - - input = input * style.reshape(batch, in_channel, 1, 1) - - if self.upsample: - weight = weight.transpose(0, 1) - out = conv2d_gradfix.conv_transpose2d( - input, weight, padding=0, stride=2 - ) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - out = conv2d_gradfix.conv2d(input, weight, padding=0, stride=2) - - else: - out = conv2d_gradfix.conv2d(input, weight, padding=self.padding) - - if self.demodulate: - out = out * dcoefs.view(batch, -1, 1, 1) - - return out - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - if externalweight is None: - weight = self.scale * self.weight * style - else: - weight = self.scale * (self.weight + externalweight) * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = conv2d_gradfix.conv_transpose2d( - input, weight, padding=0, stride=2, groups=batch - ) - _, _, 
height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = conv2d_gradfix.conv2d( - input, weight, padding=0, stride=2, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = conv2d_gradfix.conv2d( - input, weight, padding=self.padding, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None, externalweight=None): - out = self.conv(input, style, externalweight) - out = self.noise(out, 
noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None, externalweight=None): - out = self.conv(input, style, externalweight) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation="fused_lrelu" - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f"noise_{layer_idx}", torch.randn(*shape)) - - for i in range(3, self.log_size + 1): 
- out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - z_plus_latent=False, - return_feature_ind=999, - ): - if not input_is_latent: - if not z_plus_latent: - styles = [self.style(s) for s in styles] - else: - styles_ = [] - for s in styles: - style_ = [] - for i in range(s.shape[1]): - style_.append(self.style(s[:,i]).unsqueeze(1)) - styles_.append(torch.cat(style_,dim=1)) - styles = styles_ - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f"noise_{i}") for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - else: - 
if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - else: - latent = torch.cat([styles[0][:,0:inject_index], styles[1][:,inject_index:]], 1) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - if i > return_feature_ind: - return out, skip - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - dilation=1, ## modified - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 + dilation-1 ## modified - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - dilation=dilation, ## modified - ) - ) - - if activate: - layers.append(FusedLeakyReLU(out_channel, bias=bias)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, 
downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation="fused_lrelu"), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out \ No newline at end of file diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/treeTools.py 
b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/treeTools.py deleted file mode 100644 index 24e10ba5b19ef41d56a552527680a4c73503cc3c..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/treeTools.py +++ /dev/null @@ -1,45 +0,0 @@ -"""Generic tools for working with trees.""" - -from math import ceil, log - - -def build_n_ary_tree(leaves, n): - """Build N-ary tree from sequence of leaf nodes. - - Return a list of lists where each non-leaf node is a list containing - max n nodes. - """ - if not leaves: - return [] - - assert n > 1 - - depth = ceil(log(len(leaves), n)) - - if depth <= 1: - return list(leaves) - - # Fully populate complete subtrees of root until we have enough leaves left - root = [] - unassigned = None - full_step = n ** (depth - 1) - for i in range(0, len(leaves), full_step): - subtree = leaves[i : i + full_step] - if len(subtree) < full_step: - unassigned = subtree - break - while len(subtree) > n: - subtree = [subtree[k : k + n] for k in range(0, len(subtree), n)] - root.append(subtree) - - if unassigned: - # Recurse to fill the last subtree, which is the only partially populated one - subtree = build_n_ary_tree(unassigned, n) - if len(subtree) <= n - len(root): - # replace last subtree with its children if they can still fit - root.extend(subtree) - else: - root.append(subtree) - assert len(root) <= n - - return root diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/dircache.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/dircache.py deleted file mode 100644 index eca19566b135e5a7a4f6e7407d56411ec58bfe44..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/dircache.py +++ /dev/null @@ -1,98 +0,0 @@ -import time -from collections.abc import MutableMapping -from functools import lru_cache - 
- -class DirCache(MutableMapping): - """ - Caching of directory listings, in a structure like:: - - {"path0": [ - {"name": "path0/file0", - "size": 123, - "type": "file", - ... - }, - {"name": "path0/file1", - }, - ... - ], - "path1": [...] - } - - Parameters to this class control listing expiry or indeed turn - caching off - """ - - def __init__( - self, - use_listings_cache=True, - listings_expiry_time=None, - max_paths=None, - **kwargs, - ): - """ - - Parameters - ---------- - use_listings_cache: bool - If False, this cache never returns items, but always reports KeyError, - and setting items has no effect - listings_expiry_time: int or float (optional) - Time in seconds that a listing is considered valid. If None, - listings do not expire. - max_paths: int (optional) - The number of most recent listings that are considered valid; 'recent' - refers to when the entry was set. - """ - self._cache = {} - self._times = {} - if max_paths: - self._q = lru_cache(max_paths + 1)(lambda key: self._cache.pop(key, None)) - self.use_listings_cache = use_listings_cache - self.listings_expiry_time = listings_expiry_time - self.max_paths = max_paths - - def __getitem__(self, item): - if self.listings_expiry_time is not None: - if self._times.get(item, 0) - time.time() < -self.listings_expiry_time: - del self._cache[item] - if self.max_paths: - self._q(item) - return self._cache[item] # maybe raises KeyError - - def clear(self): - self._cache.clear() - - def __len__(self): - return len(self._cache) - - def __contains__(self, item): - try: - self[item] - return True - except KeyError: - return False - - def __setitem__(self, key, value): - if not self.use_listings_cache: - return - if self.max_paths: - self._q(key) - self._cache[key] = value - if self.listings_expiry_time is not None: - self._times[key] = time.time() - - def __delitem__(self, key): - del self._cache[key] - - def __iter__(self): - entries = list(self._cache) - - return (k for k in entries if k in self) - - def 
__reduce__(self): - return ( - DirCache, - (self.use_listings_cache, self.listings_expiry_time, self.max_paths), - ) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/_src/vmap/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/_src/vmap/__init__.py deleted file mode 100644 index 792a2fde38bb3563ed5b336132d7af008bf3e11a..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/_src/vmap/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# This file has moved to under torch/_functorch. It is not public API. -# If you are not a PyTorch developer and you are relying on the following -# imports, please file an issue. -from torch._functorch.vmap import ( - _add_batch_dim, - _broadcast_to_and_flatten, - _get_name, - _remove_batch_dim, - _validate_and_get_batch_size, - Tensor, - tree_flatten, - tree_unflatten, - _process_batched_inputs, - _create_batched_inputs, - _unwrap_batched, -) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/compiler.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/compiler.py deleted file mode 100644 index 3458095f54ede1322eb2ab9e34288da87db54ca1..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/compiler.py +++ /dev/null @@ -1,1957 +0,0 @@ -"""Compiles nodes from the parser into Python code.""" -import typing as t -from contextlib import contextmanager -from functools import update_wrapper -from io import StringIO -from itertools import chain -from keyword import iskeyword as is_python_keyword - -from markupsafe import escape -from markupsafe import Markup - -from . 
import nodes -from .exceptions import TemplateAssertionError -from .idtracking import Symbols -from .idtracking import VAR_LOAD_ALIAS -from .idtracking import VAR_LOAD_PARAMETER -from .idtracking import VAR_LOAD_RESOLVE -from .idtracking import VAR_LOAD_UNDEFINED -from .nodes import EvalContext -from .optimizer import Optimizer -from .utils import _PassArg -from .utils import concat -from .visitor import NodeVisitor - -if t.TYPE_CHECKING: - import typing_extensions as te - from .environment import Environment - -F = t.TypeVar("F", bound=t.Callable[..., t.Any]) - -operators = { - "eq": "==", - "ne": "!=", - "gt": ">", - "gteq": ">=", - "lt": "<", - "lteq": "<=", - "in": "in", - "notin": "not in", -} - - -def optimizeconst(f: F) -> F: - def new_func( - self: "CodeGenerator", node: nodes.Expr, frame: "Frame", **kwargs: t.Any - ) -> t.Any: - # Only optimize if the frame is not volatile - if self.optimizer is not None and not frame.eval_ctx.volatile: - new_node = self.optimizer.visit(node, frame.eval_ctx) - - if new_node != node: - return self.visit(new_node, frame) - - return f(self, node, frame, **kwargs) - - return update_wrapper(t.cast(F, new_func), f) - - -def _make_binop(op: str) -> t.Callable[["CodeGenerator", nodes.BinExpr, "Frame"], None]: - @optimizeconst - def visitor(self: "CodeGenerator", node: nodes.BinExpr, frame: Frame) -> None: - if ( - self.environment.sandboxed - and op in self.environment.intercepted_binops # type: ignore - ): - self.write(f"environment.call_binop(context, {op!r}, ") - self.visit(node.left, frame) - self.write(", ") - self.visit(node.right, frame) - else: - self.write("(") - self.visit(node.left, frame) - self.write(f" {op} ") - self.visit(node.right, frame) - - self.write(")") - - return visitor - - -def _make_unop( - op: str, -) -> t.Callable[["CodeGenerator", nodes.UnaryExpr, "Frame"], None]: - @optimizeconst - def visitor(self: "CodeGenerator", node: nodes.UnaryExpr, frame: Frame) -> None: - if ( - self.environment.sandboxed - 
and op in self.environment.intercepted_unops # type: ignore - ): - self.write(f"environment.call_unop(context, {op!r}, ") - self.visit(node.node, frame) - else: - self.write("(" + op) - self.visit(node.node, frame) - - self.write(")") - - return visitor - - -def generate( - node: nodes.Template, - environment: "Environment", - name: t.Optional[str], - filename: t.Optional[str], - stream: t.Optional[t.TextIO] = None, - defer_init: bool = False, - optimized: bool = True, -) -> t.Optional[str]: - """Generate the python source for a node tree.""" - if not isinstance(node, nodes.Template): - raise TypeError("Can't compile non template nodes") - - generator = environment.code_generator_class( - environment, name, filename, stream, defer_init, optimized - ) - generator.visit(node) - - if stream is None: - return generator.stream.getvalue() # type: ignore - - return None - - -def has_safe_repr(value: t.Any) -> bool: - """Does the node have a safe representation?""" - if value is None or value is NotImplemented or value is Ellipsis: - return True - - if type(value) in {bool, int, float, complex, range, str, Markup}: - return True - - if type(value) in {tuple, list, set, frozenset}: - return all(has_safe_repr(v) for v in value) - - if type(value) is dict: - return all(has_safe_repr(k) and has_safe_repr(v) for k, v in value.items()) - - return False - - -def find_undeclared( - nodes: t.Iterable[nodes.Node], names: t.Iterable[str] -) -> t.Set[str]: - """Check if the names passed are accessed undeclared. The return value - is a set of all the undeclared names from the sequence of names found. 
- """ - visitor = UndeclaredNameVisitor(names) - try: - for node in nodes: - visitor.visit(node) - except VisitorExit: - pass - return visitor.undeclared - - -class MacroRef: - def __init__(self, node: t.Union[nodes.Macro, nodes.CallBlock]) -> None: - self.node = node - self.accesses_caller = False - self.accesses_kwargs = False - self.accesses_varargs = False - - -class Frame: - """Holds compile time information for us.""" - - def __init__( - self, - eval_ctx: EvalContext, - parent: t.Optional["Frame"] = None, - level: t.Optional[int] = None, - ) -> None: - self.eval_ctx = eval_ctx - - # the parent of this frame - self.parent = parent - - if parent is None: - self.symbols = Symbols(level=level) - - # in some dynamic inheritance situations the compiler needs to add - # write tests around output statements. - self.require_output_check = False - - # inside some tags we are using a buffer rather than yield statements. - # this for example affects {% filter %} or {% macro %}. If a frame - # is buffered this variable points to the name of the list used as - # buffer. - self.buffer: t.Optional[str] = None - - # the name of the block we're in, otherwise None. - self.block: t.Optional[str] = None - - else: - self.symbols = Symbols(parent.symbols, level=level) - self.require_output_check = parent.require_output_check - self.buffer = parent.buffer - self.block = parent.block - - # a toplevel frame is the root + soft frames such as if conditions. - self.toplevel = False - - # the root frame is basically just the outermost frame, so no if - # conditions. This information is used to optimize inheritance - # situations. - self.rootlevel = False - - # variables set inside of loops and blocks should not affect outer frames, - # but they still need to be kept track of as part of the active context.
- self.loop_frame = False - self.block_frame = False - - # track whether the frame is being used in an if-statement or conditional - # expression as it determines which errors should be raised during runtime - # or compile time. - self.soft_frame = False - - def copy(self) -> "Frame": - """Create a copy of the current one.""" - rv = object.__new__(self.__class__) - rv.__dict__.update(self.__dict__) - rv.symbols = self.symbols.copy() - return rv - - def inner(self, isolated: bool = False) -> "Frame": - """Return an inner frame.""" - if isolated: - return Frame(self.eval_ctx, level=self.symbols.level + 1) - return Frame(self.eval_ctx, self) - - def soft(self) -> "Frame": - """Return a soft frame. A soft frame may not be modified as a - standalone thing as it shares the resources with the frame it - was created from, but it's not a rootlevel frame any longer. - - This is only used to implement if-statements and conditional - expressions. - """ - rv = self.copy() - rv.rootlevel = False - rv.soft_frame = True - return rv - - __copy__ = copy - - -class VisitorExit(RuntimeError): - """Exception used by the `UndeclaredNameVisitor` to signal a stop.""" - - -class DependencyFinderVisitor(NodeVisitor): - """A visitor that collects filter and test calls.""" - - def __init__(self) -> None: - self.filters: t.Set[str] = set() - self.tests: t.Set[str] = set() - - def visit_Filter(self, node: nodes.Filter) -> None: - self.generic_visit(node) - self.filters.add(node.name) - - def visit_Test(self, node: nodes.Test) -> None: - self.generic_visit(node) - self.tests.add(node.name) - - def visit_Block(self, node: nodes.Block) -> None: - """Stop visiting at blocks.""" - - -class UndeclaredNameVisitor(NodeVisitor): - """A visitor that checks if a name is accessed without being - declared. This is different from the frame visitor as it will - not stop at closure frames.
- """ - - def __init__(self, names: t.Iterable[str]) -> None: - self.names = set(names) - self.undeclared: t.Set[str] = set() - - def visit_Name(self, node: nodes.Name) -> None: - if node.ctx == "load" and node.name in self.names: - self.undeclared.add(node.name) - if self.undeclared == self.names: - raise VisitorExit() - else: - self.names.discard(node.name) - - def visit_Block(self, node: nodes.Block) -> None: - """Stop visiting at blocks.""" - - -class CompilerExit(Exception): - """Raised if the compiler encountered a situation where it just - doesn't make sense to further process the code. Any block that - raises such an exception is not further processed. - """ - - -class CodeGenerator(NodeVisitor): - def __init__( - self, - environment: "Environment", - name: t.Optional[str], - filename: t.Optional[str], - stream: t.Optional[t.TextIO] = None, - defer_init: bool = False, - optimized: bool = True, - ) -> None: - if stream is None: - stream = StringIO() - self.environment = environment - self.name = name - self.filename = filename - self.stream = stream - self.created_block_context = False - self.defer_init = defer_init - self.optimizer: t.Optional[Optimizer] = None - - if optimized: - self.optimizer = Optimizer(environment) - - # aliases for imports - self.import_aliases: t.Dict[str, str] = {} - - # a registry for all blocks. Because blocks are moved out - # into the global python scope they are registered here - self.blocks: t.Dict[str, nodes.Block] = {} - - # the number of extends statements so far - self.extends_so_far = 0 - - # some templates have a rootlevel extends. In this case we - # can safely assume that we're a child template and do some - # more optimizations.
- self.has_known_extends = False - - # the current line number - self.code_lineno = 1 - - # registry of all filters and tests (global, not block local) - self.tests: t.Dict[str, str] = {} - self.filters: t.Dict[str, str] = {} - - # the debug information - self.debug_info: t.List[t.Tuple[int, int]] = [] - self._write_debug_info: t.Optional[int] = None - - # the number of new lines before the next write() - self._new_lines = 0 - - # the line number of the last written statement - self._last_line = 0 - - # true if nothing was written so far. - self._first_write = True - - # used by the `temporary_identifier` method to get new - # unique, temporary identifier - self._last_identifier = 0 - - # the current indentation - self._indentation = 0 - - # Tracks toplevel assignments - self._assign_stack: t.List[t.Set[str]] = [] - - # Tracks parameter definition blocks - self._param_def_block: t.List[t.Set[str]] = [] - - # Tracks the current context. - self._context_reference_stack = ["context"] - - @property - def optimized(self) -> bool: - return self.optimizer is not None - - # -- Various compilation helpers - - def fail(self, msg: str, lineno: int) -> "te.NoReturn": - """Fail with a :exc:`TemplateAssertionError`.""" - raise TemplateAssertionError(msg, lineno, self.name, self.filename) - - def temporary_identifier(self) -> str: - """Get a new unique identifier.""" - self._last_identifier += 1 - return f"t_{self._last_identifier}" - - def buffer(self, frame: Frame) -> None: - """Enable buffering for the frame from that point onwards.""" - frame.buffer = self.temporary_identifier() - self.writeline(f"{frame.buffer} = []") - - def return_buffer_contents( - self, frame: Frame, force_unescaped: bool = False - ) -> None: - """Return the buffer contents of the frame.""" - if not force_unescaped: - if frame.eval_ctx.volatile: - self.writeline("if context.eval_ctx.autoescape:") - self.indent() - self.writeline(f"return Markup(concat({frame.buffer}))") - self.outdent() - 
self.writeline("else:") - self.indent() - self.writeline(f"return concat({frame.buffer})") - self.outdent() - return - elif frame.eval_ctx.autoescape: - self.writeline(f"return Markup(concat({frame.buffer}))") - return - self.writeline(f"return concat({frame.buffer})") - - def indent(self) -> None: - """Indent by one.""" - self._indentation += 1 - - def outdent(self, step: int = 1) -> None: - """Outdent by step.""" - self._indentation -= step - - def start_write(self, frame: Frame, node: t.Optional[nodes.Node] = None) -> None: - """Yield or write into the frame buffer.""" - if frame.buffer is None: - self.writeline("yield ", node) - else: - self.writeline(f"{frame.buffer}.append(", node) - - def end_write(self, frame: Frame) -> None: - """End the writing process started by `start_write`.""" - if frame.buffer is not None: - self.write(")") - - def simple_write( - self, s: str, frame: Frame, node: t.Optional[nodes.Node] = None - ) -> None: - """Simple shortcut for start_write + write + end_write.""" - self.start_write(frame, node) - self.write(s) - self.end_write(frame) - - def blockvisit(self, nodes: t.Iterable[nodes.Node], frame: Frame) -> None: - """Visit a list of nodes as block in a frame. If the current frame - has no buffer a dummy ``if 0: yield None`` is written automatically.
- """ - try: - self.writeline("pass") - for node in nodes: - self.visit(node, frame) - except CompilerExit: - pass - - def write(self, x: str) -> None: - """Write a string into the output stream.""" - if self._new_lines: - if not self._first_write: - self.stream.write("\n" * self._new_lines) - self.code_lineno += self._new_lines - if self._write_debug_info is not None: - self.debug_info.append((self._write_debug_info, self.code_lineno)) - self._write_debug_info = None - self._first_write = False - self.stream.write(" " * self._indentation) - self._new_lines = 0 - self.stream.write(x) - - def writeline( - self, x: str, node: t.Optional[nodes.Node] = None, extra: int = 0 - ) -> None: - """Combination of newline and write.""" - self.newline(node, extra) - self.write(x) - - def newline(self, node: t.Optional[nodes.Node] = None, extra: int = 0) -> None: - """Add one or more newlines before the next write.""" - self._new_lines = max(self._new_lines, 1 + extra) - if node is not None and node.lineno != self._last_line: - self._write_debug_info = node.lineno - self._last_line = node.lineno - - def signature( - self, - node: t.Union[nodes.Call, nodes.Filter, nodes.Test], - frame: Frame, - extra_kwargs: t.Optional[t.Mapping[str, t.Any]] = None, - ) -> None: - """Writes a function call to the stream for the current node. - A leading comma is added automatically. The extra keyword - arguments may not include python keywords otherwise a syntax - error could occur. The extra keyword arguments should be given - as python dict. - """ - # if any of the given keyword arguments is a python keyword - # we have to make sure that no invalid call is created. 
- kwarg_workaround = any( - is_python_keyword(t.cast(str, k)) - for k in chain((x.key for x in node.kwargs), extra_kwargs or ()) - ) - - for arg in node.args: - self.write(", ") - self.visit(arg, frame) - - if not kwarg_workaround: - for kwarg in node.kwargs: - self.write(", ") - self.visit(kwarg, frame) - if extra_kwargs is not None: - for key, value in extra_kwargs.items(): - self.write(f", {key}={value}") - if node.dyn_args: - self.write(", *") - self.visit(node.dyn_args, frame) - - if kwarg_workaround: - if node.dyn_kwargs is not None: - self.write(", **dict({") - else: - self.write(", **{") - for kwarg in node.kwargs: - self.write(f"{kwarg.key!r}: ") - self.visit(kwarg.value, frame) - self.write(", ") - if extra_kwargs is not None: - for key, value in extra_kwargs.items(): - self.write(f"{key!r}: {value}, ") - if node.dyn_kwargs is not None: - self.write("}, **") - self.visit(node.dyn_kwargs, frame) - self.write(")") - else: - self.write("}") - - elif node.dyn_kwargs is not None: - self.write(", **") - self.visit(node.dyn_kwargs, frame) - - def pull_dependencies(self, nodes: t.Iterable[nodes.Node]) -> None: - """Find all filter and test names used in the template and - assign them to variables in the compiled namespace. Checking - that the names are registered with the environment is done when - compiling the Filter and Test nodes. If the node is in an If or - CondExpr node, the check is done at runtime instead. - - .. versionchanged:: 3.0 - Filters and tests in If and CondExpr nodes are checked at - runtime instead of compile time. 
- """ - visitor = DependencyFinderVisitor() - - for node in nodes: - visitor.visit(node) - - for id_map, names, dependency in (self.filters, visitor.filters, "filters"), ( - self.tests, - visitor.tests, - "tests", - ): - for name in sorted(names): - if name not in id_map: - id_map[name] = self.temporary_identifier() - - # add check during runtime that dependencies used inside of executed - # blocks are defined, as this step may be skipped during compile time - self.writeline("try:") - self.indent() - self.writeline(f"{id_map[name]} = environment.{dependency}[{name!r}]") - self.outdent() - self.writeline("except KeyError:") - self.indent() - self.writeline("@internalcode") - self.writeline(f"def {id_map[name]}(*unused):") - self.indent() - self.writeline( - f'raise TemplateRuntimeError("No {dependency[:-1]}' - f' named {name!r} found.")' - ) - self.outdent() - self.outdent() - - def enter_frame(self, frame: Frame) -> None: - undefs = [] - for target, (action, param) in frame.symbols.loads.items(): - if action == VAR_LOAD_PARAMETER: - pass - elif action == VAR_LOAD_RESOLVE: - self.writeline(f"{target} = {self.get_resolve_func()}({param!r})") - elif action == VAR_LOAD_ALIAS: - self.writeline(f"{target} = {param}") - elif action == VAR_LOAD_UNDEFINED: - undefs.append(target) - else: - raise NotImplementedError("unknown load instruction") - if undefs: - self.writeline(f"{' = '.join(undefs)} = missing") - - def leave_frame(self, frame: Frame, with_python_scope: bool = False) -> None: - if not with_python_scope: - undefs = [] - for target in frame.symbols.loads: - undefs.append(target) - if undefs: - self.writeline(f"{' = '.join(undefs)} = missing") - - def choose_async(self, async_value: str = "async ", sync_value: str = "") -> str: - return async_value if self.environment.is_async else sync_value - - def func(self, name: str) -> str: - return f"{self.choose_async()}def {name}" - - def macro_body( - self, node: t.Union[nodes.Macro, nodes.CallBlock], frame: Frame - ) -> 
t.Tuple[Frame, MacroRef]: - """Dump the function def of a macro or call block.""" - frame = frame.inner() - frame.symbols.analyze_node(node) - macro_ref = MacroRef(node) - - explicit_caller = None - skip_special_params = set() - args = [] - - for idx, arg in enumerate(node.args): - if arg.name == "caller": - explicit_caller = idx - if arg.name in ("kwargs", "varargs"): - skip_special_params.add(arg.name) - args.append(frame.symbols.ref(arg.name)) - - undeclared = find_undeclared(node.body, ("caller", "kwargs", "varargs")) - - if "caller" in undeclared: - # In older Jinja versions there was a bug that allowed caller - # to retain the special behavior even if it was mentioned in - # the argument list. However thankfully this was only really - # working if it was the last argument. So we are explicitly - # checking this now and error out if it is anywhere else in - # the argument list. - if explicit_caller is not None: - try: - node.defaults[explicit_caller - len(node.args)] - except IndexError: - self.fail( - "When defining macros or call blocks the " - 'special "caller" argument must be omitted ' - "or be given a default.", - node.lineno, - ) - else: - args.append(frame.symbols.declare_parameter("caller")) - macro_ref.accesses_caller = True - if "kwargs" in undeclared and "kwargs" not in skip_special_params: - args.append(frame.symbols.declare_parameter("kwargs")) - macro_ref.accesses_kwargs = True - if "varargs" in undeclared and "varargs" not in skip_special_params: - args.append(frame.symbols.declare_parameter("varargs")) - macro_ref.accesses_varargs = True - - # macros are delayed, they never require output checks - frame.require_output_check = False - frame.symbols.analyze_node(node) - self.writeline(f"{self.func('macro')}({', '.join(args)}):", node) - self.indent() - - self.buffer(frame) - self.enter_frame(frame) - - self.push_parameter_definitions(frame) - for idx, arg in enumerate(node.args): - ref = frame.symbols.ref(arg.name) - self.writeline(f"if {ref} is 
missing:") - self.indent() - try: - default = node.defaults[idx - len(node.args)] - except IndexError: - self.writeline( - f'{ref} = undefined("parameter {arg.name!r} was not provided",' - f" name={arg.name!r})" - ) - else: - self.writeline(f"{ref} = ") - self.visit(default, frame) - self.mark_parameter_stored(ref) - self.outdent() - self.pop_parameter_definitions() - - self.blockvisit(node.body, frame) - self.return_buffer_contents(frame, force_unescaped=True) - self.leave_frame(frame, with_python_scope=True) - self.outdent() - - return frame, macro_ref - - def macro_def(self, macro_ref: MacroRef, frame: Frame) -> None: - """Dump the macro definition for the def created by macro_body.""" - arg_tuple = ", ".join(repr(x.name) for x in macro_ref.node.args) - name = getattr(macro_ref.node, "name", None) - if len(macro_ref.node.args) == 1: - arg_tuple += "," - self.write( - f"Macro(environment, macro, {name!r}, ({arg_tuple})," - f" {macro_ref.accesses_kwargs!r}, {macro_ref.accesses_varargs!r}," - f" {macro_ref.accesses_caller!r}, context.eval_ctx.autoescape)" - ) - - def position(self, node: nodes.Node) -> str: - """Return a human readable position for the node.""" - rv = f"line {node.lineno}" - if self.name is not None: - rv = f"{rv} in {self.name!r}" - return rv - - def dump_local_context(self, frame: Frame) -> str: - items_kv = ", ".join( - f"{name!r}: {target}" - for name, target in frame.symbols.dump_stores().items() - ) - return f"{{{items_kv}}}" - - def write_commons(self) -> None: - """Writes a common preamble that is used by root and block functions. - Primarily this sets up common local helpers and enforces a generator - through a dead branch. 
- """ - self.writeline("resolve = context.resolve_or_missing") - self.writeline("undefined = environment.undefined") - self.writeline("concat = environment.concat") - # always use the standard Undefined class for the implicit else of - # conditional expressions - self.writeline("cond_expr_undefined = Undefined") - self.writeline("if 0: yield None") - - def push_parameter_definitions(self, frame: Frame) -> None: - """Pushes all parameter targets from the given frame into a local - stack that permits tracking of yet to be assigned parameters. In - particular this enables the optimization from `visit_Name` to skip - undefined expressions for parameters in macros as macros can reference - otherwise unbound parameters. - """ - self._param_def_block.append(frame.symbols.dump_param_targets()) - - def pop_parameter_definitions(self) -> None: - """Pops the current parameter definitions set.""" - self._param_def_block.pop() - - def mark_parameter_stored(self, target: str) -> None: - """Marks a parameter in the current parameter definitions as stored. - This will skip the enforced undefined checks. 
- """ - if self._param_def_block: - self._param_def_block[-1].discard(target) - - def push_context_reference(self, target: str) -> None: - self._context_reference_stack.append(target) - - def pop_context_reference(self) -> None: - self._context_reference_stack.pop() - - def get_context_ref(self) -> str: - return self._context_reference_stack[-1] - - def get_resolve_func(self) -> str: - target = self._context_reference_stack[-1] - if target == "context": - return "resolve" - return f"{target}.resolve" - - def derive_context(self, frame: Frame) -> str: - return f"{self.get_context_ref()}.derived({self.dump_local_context(frame)})" - - def parameter_is_undeclared(self, target: str) -> bool: - """Checks if a given target is an undeclared parameter.""" - if not self._param_def_block: - return False - return target in self._param_def_block[-1] - - def push_assign_tracking(self) -> None: - """Pushes a new layer for assignment tracking.""" - self._assign_stack.append(set()) - - def pop_assign_tracking(self, frame: Frame) -> None: - """Pops the topmost level for assignment tracking and updates the - context variables if necessary. 
- """ - vars = self._assign_stack.pop() - if ( - not frame.block_frame - and not frame.loop_frame - and not frame.toplevel - or not vars - ): - return - public_names = [x for x in vars if x[:1] != "_"] - if len(vars) == 1: - name = next(iter(vars)) - ref = frame.symbols.ref(name) - if frame.loop_frame: - self.writeline(f"_loop_vars[{name!r}] = {ref}") - return - if frame.block_frame: - self.writeline(f"_block_vars[{name!r}] = {ref}") - return - self.writeline(f"context.vars[{name!r}] = {ref}") - else: - if frame.loop_frame: - self.writeline("_loop_vars.update({") - elif frame.block_frame: - self.writeline("_block_vars.update({") - else: - self.writeline("context.vars.update({") - for idx, name in enumerate(vars): - if idx: - self.write(", ") - ref = frame.symbols.ref(name) - self.write(f"{name!r}: {ref}") - self.write("})") - if not frame.block_frame and not frame.loop_frame and public_names: - if len(public_names) == 1: - self.writeline(f"context.exported_vars.add({public_names[0]!r})") - else: - names_str = ", ".join(map(repr, public_names)) - self.writeline(f"context.exported_vars.update(({names_str}))") - - # -- Statement Visitors - - def visit_Template( - self, node: nodes.Template, frame: t.Optional[Frame] = None - ) -> None: - assert frame is None, "no root frame allowed" - eval_ctx = EvalContext(self.environment, self.name) - - from .runtime import exported, async_exported - - if self.environment.is_async: - exported_names = sorted(exported + async_exported) - else: - exported_names = sorted(exported) - - self.writeline("from jinja2.runtime import " + ", ".join(exported_names)) - - # if we want a deferred initialization we cannot move the - # environment into a local name - envenv = "" if self.defer_init else ", environment=environment" - - # do we have an extends tag at all? If not, we can save some - # overhead by just not processing any inheritance code. 
- have_extends = node.find(nodes.Extends) is not None - - # find all blocks - for block in node.find_all(nodes.Block): - if block.name in self.blocks: - self.fail(f"block {block.name!r} defined twice", block.lineno) - self.blocks[block.name] = block - - # find all imports and import them - for import_ in node.find_all(nodes.ImportedName): - if import_.importname not in self.import_aliases: - imp = import_.importname - self.import_aliases[imp] = alias = self.temporary_identifier() - if "." in imp: - module, obj = imp.rsplit(".", 1) - self.writeline(f"from {module} import {obj} as {alias}") - else: - self.writeline(f"import {imp} as {alias}") - - # add the load name - self.writeline(f"name = {self.name!r}") - - # generate the root render function. - self.writeline( - f"{self.func('root')}(context, missing=missing{envenv}):", extra=1 - ) - self.indent() - self.write_commons() - - # process the root - frame = Frame(eval_ctx) - if "self" in find_undeclared(node.body, ("self",)): - ref = frame.symbols.declare_parameter("self") - self.writeline(f"{ref} = TemplateReference(context)") - frame.symbols.analyze_node(node) - frame.toplevel = frame.rootlevel = True - frame.require_output_check = have_extends and not self.has_known_extends - if have_extends: - self.writeline("parent_template = None") - self.enter_frame(frame) - self.pull_dependencies(node.body) - self.blockvisit(node.body, frame) - self.leave_frame(frame, with_python_scope=True) - self.outdent() - - # make sure that the parent root is called. 
- if have_extends: - if not self.has_known_extends: - self.indent() - self.writeline("if parent_template is not None:") - self.indent() - if not self.environment.is_async: - self.writeline("yield from parent_template.root_render_func(context)") - else: - self.writeline( - "async for event in parent_template.root_render_func(context):" - ) - self.indent() - self.writeline("yield event") - self.outdent() - self.outdent(1 + (not self.has_known_extends)) - - # at this point we now have the blocks collected and can visit them too. - for name, block in self.blocks.items(): - self.writeline( - f"{self.func('block_' + name)}(context, missing=missing{envenv}):", - block, - 1, - ) - self.indent() - self.write_commons() - # It's important that we do not make this frame a child of the - # toplevel template. This would cause a variety of - # interesting issues with identifier tracking. - block_frame = Frame(eval_ctx) - block_frame.block_frame = True - undeclared = find_undeclared(block.body, ("self", "super")) - if "self" in undeclared: - ref = block_frame.symbols.declare_parameter("self") - self.writeline(f"{ref} = TemplateReference(context)") - if "super" in undeclared: - ref = block_frame.symbols.declare_parameter("super") - self.writeline(f"{ref} = context.super({name!r}, block_{name})") - block_frame.symbols.analyze_node(block) - block_frame.block = name - self.writeline("_block_vars = {}") - self.enter_frame(block_frame) - self.pull_dependencies(block.body) - self.blockvisit(block.body, block_frame) - self.leave_frame(block_frame, with_python_scope=True) - self.outdent() - - blocks_kv_str = ", ".join(f"{x!r}: block_{x}" for x in self.blocks) - self.writeline(f"blocks = {{{blocks_kv_str}}}", extra=1) - debug_kv_str = "&".join(f"{k}={v}" for k, v in self.debug_info) - self.writeline(f"debug_info = {debug_kv_str!r}") - - def visit_Block(self, node: nodes.Block, frame: Frame) -> None: - """Call a block and register it for the template.""" - level = 0 - if frame.toplevel: - # 
if we know that we are a child template, there is no need to - # check if we are one - if self.has_known_extends: - return - if self.extends_so_far > 0: - self.writeline("if parent_template is None:") - self.indent() - level += 1 - - if node.scoped: - context = self.derive_context(frame) - else: - context = self.get_context_ref() - - if node.required: - self.writeline(f"if len(context.blocks[{node.name!r}]) <= 1:", node) - self.indent() - self.writeline( - f'raise TemplateRuntimeError("Required block {node.name!r} not found")', - node, - ) - self.outdent() - - if not self.environment.is_async and frame.buffer is None: - self.writeline( - f"yield from context.blocks[{node.name!r}][0]({context})", node - ) - else: - self.writeline( - f"{self.choose_async()}for event in" - f" context.blocks[{node.name!r}][0]({context}):", - node, - ) - self.indent() - self.simple_write("event", frame) - self.outdent() - - self.outdent(level) - - def visit_Extends(self, node: nodes.Extends, frame: Frame) -> None: - """Calls the extender.""" - if not frame.toplevel: - self.fail("cannot use extend from a non top-level scope", node.lineno) - - # if the number of extends statements in general is zero so - # far, we don't have to add a check if something extended - # the template before this one. - if self.extends_so_far > 0: - - # if we have a known extends we just add a template runtime - # error into the generated code. We could catch that at compile - # time too, but i welcome it not to confuse users by throwing the - # same error at different times just "because we can". - if not self.has_known_extends: - self.writeline("if parent_template is not None:") - self.indent() - self.writeline('raise TemplateRuntimeError("extended multiple times")') - - # if we have a known extends already we don't need that code here - # as we know that the template execution will end here. 
- if self.has_known_extends: - raise CompilerExit() - else: - self.outdent() - - self.writeline("parent_template = environment.get_template(", node) - self.visit(node.template, frame) - self.write(f", {self.name!r})") - self.writeline("for name, parent_block in parent_template.blocks.items():") - self.indent() - self.writeline("context.blocks.setdefault(name, []).append(parent_block)") - self.outdent() - - # if this extends statement was in the root level we can take - # advantage of that information and simplify the generated code - # in the top level from this point onwards - if frame.rootlevel: - self.has_known_extends = True - - # and now we have one more - self.extends_so_far += 1 - - def visit_Include(self, node: nodes.Include, frame: Frame) -> None: - """Handles includes.""" - if node.ignore_missing: - self.writeline("try:") - self.indent() - - func_name = "get_or_select_template" - if isinstance(node.template, nodes.Const): - if isinstance(node.template.value, str): - func_name = "get_template" - elif isinstance(node.template.value, (tuple, list)): - func_name = "select_template" - elif isinstance(node.template, (nodes.Tuple, nodes.List)): - func_name = "select_template" - - self.writeline(f"template = environment.{func_name}(", node) - self.visit(node.template, frame) - self.write(f", {self.name!r})") - if node.ignore_missing: - self.outdent() - self.writeline("except TemplateNotFound:") - self.indent() - self.writeline("pass") - self.outdent() - self.writeline("else:") - self.indent() - - skip_event_yield = False - if node.with_context: - self.writeline( - f"{self.choose_async()}for event in template.root_render_func(" - "template.new_context(context.get_all(), True," - f" {self.dump_local_context(frame)})):" - ) - elif self.environment.is_async: - self.writeline( - "for event in (await template._get_default_module_async())" - "._body_stream:" - ) - else: - self.writeline("yield from template._get_default_module()._body_stream") - skip_event_yield = True 
- - if not skip_event_yield: - self.indent() - self.simple_write("event", frame) - self.outdent() - - if node.ignore_missing: - self.outdent() - - def _import_common( - self, node: t.Union[nodes.Import, nodes.FromImport], frame: Frame - ) -> None: - self.write(f"{self.choose_async('await ')}environment.get_template(") - self.visit(node.template, frame) - self.write(f", {self.name!r}).") - - if node.with_context: - f_name = f"make_module{self.choose_async('_async')}" - self.write( - f"{f_name}(context.get_all(), True, {self.dump_local_context(frame)})" - ) - else: - self.write(f"_get_default_module{self.choose_async('_async')}(context)") - - def visit_Import(self, node: nodes.Import, frame: Frame) -> None: - """Visit regular imports.""" - self.writeline(f"{frame.symbols.ref(node.target)} = ", node) - if frame.toplevel: - self.write(f"context.vars[{node.target!r}] = ") - - self._import_common(node, frame) - - if frame.toplevel and not node.target.startswith("_"): - self.writeline(f"context.exported_vars.discard({node.target!r})") - - def visit_FromImport(self, node: nodes.FromImport, frame: Frame) -> None: - """Visit named imports.""" - self.newline(node) - self.write("included_template = ") - self._import_common(node, frame) - var_names = [] - discarded_names = [] - for name in node.names: - if isinstance(name, tuple): - name, alias = name - else: - alias = name - self.writeline( - f"{frame.symbols.ref(alias)} =" - f" getattr(included_template, {name!r}, missing)" - ) - self.writeline(f"if {frame.symbols.ref(alias)} is missing:") - self.indent() - message = ( - "the template {included_template.__name__!r}" - f" (imported on {self.position(node)})" - f" does not export the requested name {name!r}" - ) - self.writeline( - f"{frame.symbols.ref(alias)} = undefined(f{message!r}, name={name!r})" - ) - self.outdent() - if frame.toplevel: - var_names.append(alias) - if not alias.startswith("_"): - discarded_names.append(alias) - - if var_names: - if len(var_names) == 1: - 
name = var_names[0] - self.writeline(f"context.vars[{name!r}] = {frame.symbols.ref(name)}") - else: - names_kv = ", ".join( - f"{name!r}: {frame.symbols.ref(name)}" for name in var_names - ) - self.writeline(f"context.vars.update({{{names_kv}}})") - if discarded_names: - if len(discarded_names) == 1: - self.writeline(f"context.exported_vars.discard({discarded_names[0]!r})") - else: - names_str = ", ".join(map(repr, discarded_names)) - self.writeline( - f"context.exported_vars.difference_update(({names_str}))" - ) - - def visit_For(self, node: nodes.For, frame: Frame) -> None: - loop_frame = frame.inner() - loop_frame.loop_frame = True - test_frame = frame.inner() - else_frame = frame.inner() - - # try to figure out if we have an extended loop. An extended loop - # is necessary if the loop is in recursive mode if the special loop - # variable is accessed in the body if the body is a scoped block. - extended_loop = ( - node.recursive - or "loop" - in find_undeclared(node.iter_child_nodes(only=("body",)), ("loop",)) - or any(block.scoped for block in node.find_all(nodes.Block)) - ) - - loop_ref = None - if extended_loop: - loop_ref = loop_frame.symbols.declare_parameter("loop") - - loop_frame.symbols.analyze_node(node, for_branch="body") - if node.else_: - else_frame.symbols.analyze_node(node, for_branch="else") - - if node.test: - loop_filter_func = self.temporary_identifier() - test_frame.symbols.analyze_node(node, for_branch="test") - self.writeline(f"{self.func(loop_filter_func)}(fiter):", node.test) - self.indent() - self.enter_frame(test_frame) - self.writeline(self.choose_async("async for ", "for ")) - self.visit(node.target, loop_frame) - self.write(" in ") - self.write(self.choose_async("auto_aiter(fiter)", "fiter")) - self.write(":") - self.indent() - self.writeline("if ", node.test) - self.visit(node.test, test_frame) - self.write(":") - self.indent() - self.writeline("yield ") - self.visit(node.target, loop_frame) - self.outdent(3) - 
self.leave_frame(test_frame, with_python_scope=True) - - # if we don't have an recursive loop we have to find the shadowed - # variables at that point. Because loops can be nested but the loop - # variable is a special one we have to enforce aliasing for it. - if node.recursive: - self.writeline( - f"{self.func('loop')}(reciter, loop_render_func, depth=0):", node - ) - self.indent() - self.buffer(loop_frame) - - # Use the same buffer for the else frame - else_frame.buffer = loop_frame.buffer - - # make sure the loop variable is a special one and raise a template - # assertion error if a loop tries to write to loop - if extended_loop: - self.writeline(f"{loop_ref} = missing") - - for name in node.find_all(nodes.Name): - if name.ctx == "store" and name.name == "loop": - self.fail( - "Can't assign to special loop variable in for-loop target", - name.lineno, - ) - - if node.else_: - iteration_indicator = self.temporary_identifier() - self.writeline(f"{iteration_indicator} = 1") - - self.writeline(self.choose_async("async for ", "for "), node) - self.visit(node.target, loop_frame) - if extended_loop: - self.write(f", {loop_ref} in {self.choose_async('Async')}LoopContext(") - else: - self.write(" in ") - - if node.test: - self.write(f"{loop_filter_func}(") - if node.recursive: - self.write("reciter") - else: - if self.environment.is_async and not extended_loop: - self.write("auto_aiter(") - self.visit(node.iter, frame) - if self.environment.is_async and not extended_loop: - self.write(")") - if node.test: - self.write(")") - - if node.recursive: - self.write(", undefined, loop_render_func, depth):") - else: - self.write(", undefined):" if extended_loop else ":") - - self.indent() - self.enter_frame(loop_frame) - - self.writeline("_loop_vars = {}") - self.blockvisit(node.body, loop_frame) - if node.else_: - self.writeline(f"{iteration_indicator} = 0") - self.outdent() - self.leave_frame( - loop_frame, with_python_scope=node.recursive and not node.else_ - ) - - if 
node.else_: - self.writeline(f"if {iteration_indicator}:") - self.indent() - self.enter_frame(else_frame) - self.blockvisit(node.else_, else_frame) - self.leave_frame(else_frame) - self.outdent() - - # if the node was recursive we have to return the buffer contents - # and start the iteration code - if node.recursive: - self.return_buffer_contents(loop_frame) - self.outdent() - self.start_write(frame, node) - self.write(f"{self.choose_async('await ')}loop(") - if self.environment.is_async: - self.write("auto_aiter(") - self.visit(node.iter, frame) - if self.environment.is_async: - self.write(")") - self.write(", loop)") - self.end_write(frame) - - # at the end of the iteration, clear any assignments made in the - # loop from the top level - if self._assign_stack: - self._assign_stack[-1].difference_update(loop_frame.symbols.stores) - - def visit_If(self, node: nodes.If, frame: Frame) -> None: - if_frame = frame.soft() - self.writeline("if ", node) - self.visit(node.test, if_frame) - self.write(":") - self.indent() - self.blockvisit(node.body, if_frame) - self.outdent() - for elif_ in node.elif_: - self.writeline("elif ", elif_) - self.visit(elif_.test, if_frame) - self.write(":") - self.indent() - self.blockvisit(elif_.body, if_frame) - self.outdent() - if node.else_: - self.writeline("else:") - self.indent() - self.blockvisit(node.else_, if_frame) - self.outdent() - - def visit_Macro(self, node: nodes.Macro, frame: Frame) -> None: - macro_frame, macro_ref = self.macro_body(node, frame) - self.newline() - if frame.toplevel: - if not node.name.startswith("_"): - self.write(f"context.exported_vars.add({node.name!r})") - self.writeline(f"context.vars[{node.name!r}] = ") - self.write(f"{frame.symbols.ref(node.name)} = ") - self.macro_def(macro_ref, macro_frame) - - def visit_CallBlock(self, node: nodes.CallBlock, frame: Frame) -> None: - call_frame, macro_ref = self.macro_body(node, frame) - self.writeline("caller = ") - self.macro_def(macro_ref, call_frame) - 
self.start_write(frame, node) - self.visit_Call(node.call, frame, forward_caller=True) - self.end_write(frame) - - def visit_FilterBlock(self, node: nodes.FilterBlock, frame: Frame) -> None: - filter_frame = frame.inner() - filter_frame.symbols.analyze_node(node) - self.enter_frame(filter_frame) - self.buffer(filter_frame) - self.blockvisit(node.body, filter_frame) - self.start_write(frame, node) - self.visit_Filter(node.filter, filter_frame) - self.end_write(frame) - self.leave_frame(filter_frame) - - def visit_With(self, node: nodes.With, frame: Frame) -> None: - with_frame = frame.inner() - with_frame.symbols.analyze_node(node) - self.enter_frame(with_frame) - for target, expr in zip(node.targets, node.values): - self.newline() - self.visit(target, with_frame) - self.write(" = ") - self.visit(expr, frame) - self.blockvisit(node.body, with_frame) - self.leave_frame(with_frame) - - def visit_ExprStmt(self, node: nodes.ExprStmt, frame: Frame) -> None: - self.newline(node) - self.visit(node.node, frame) - - class _FinalizeInfo(t.NamedTuple): - const: t.Optional[t.Callable[..., str]] - src: t.Optional[str] - - @staticmethod - def _default_finalize(value: t.Any) -> t.Any: - """The default finalize function if the environment isn't - configured with one. Or, if the environment has one, this is - called on that function's output for constants. - """ - return str(value) - - _finalize: t.Optional[_FinalizeInfo] = None - - def _make_finalize(self) -> _FinalizeInfo: - """Build the finalize function to be used on constants and at - runtime. Cached so it's only created once for all output nodes. - - Returns a ``namedtuple`` with the following attributes: - - ``const`` - A function to finalize constant data at compile time. - - ``src`` - Source code to output around nodes to be evaluated at - runtime. 
- """ - if self._finalize is not None: - return self._finalize - - finalize: t.Optional[t.Callable[..., t.Any]] - finalize = default = self._default_finalize - src = None - - if self.environment.finalize: - src = "environment.finalize(" - env_finalize = self.environment.finalize - pass_arg = { - _PassArg.context: "context", - _PassArg.eval_context: "context.eval_ctx", - _PassArg.environment: "environment", - }.get( - _PassArg.from_obj(env_finalize) # type: ignore - ) - finalize = None - - if pass_arg is None: - - def finalize(value: t.Any) -> t.Any: - return default(env_finalize(value)) - - else: - src = f"{src}{pass_arg}, " - - if pass_arg == "environment": - - def finalize(value: t.Any) -> t.Any: - return default(env_finalize(self.environment, value)) - - self._finalize = self._FinalizeInfo(finalize, src) - return self._finalize - - def _output_const_repr(self, group: t.Iterable[t.Any]) -> str: - """Given a group of constant values converted from ``Output`` - child nodes, produce a string to write to the template module - source. - """ - return repr(concat(group)) - - def _output_child_to_const( - self, node: nodes.Expr, frame: Frame, finalize: _FinalizeInfo - ) -> str: - """Try to optimize a child of an ``Output`` node by trying to - convert it to constant, finalized data at compile time. - - If :exc:`Impossible` is raised, the node is not constant and - will be evaluated at runtime. Any other exception will also be - evaluated at runtime for easier debugging. - """ - const = node.as_const(frame.eval_ctx) - - if frame.eval_ctx.autoescape: - const = escape(const) - - # Template data doesn't go through finalize. - if isinstance(node, nodes.TemplateData): - return str(const) - - return finalize.const(const) # type: ignore - - def _output_child_pre( - self, node: nodes.Expr, frame: Frame, finalize: _FinalizeInfo - ) -> None: - """Output extra source code before visiting a child of an - ``Output`` node. 
- """ - if frame.eval_ctx.volatile: - self.write("(escape if context.eval_ctx.autoescape else str)(") - elif frame.eval_ctx.autoescape: - self.write("escape(") - else: - self.write("str(") - - if finalize.src is not None: - self.write(finalize.src) - - def _output_child_post( - self, node: nodes.Expr, frame: Frame, finalize: _FinalizeInfo - ) -> None: - """Output extra source code after visiting a child of an - ``Output`` node. - """ - self.write(")") - - if finalize.src is not None: - self.write(")") - - def visit_Output(self, node: nodes.Output, frame: Frame) -> None: - # If an extends is active, don't render outside a block. - if frame.require_output_check: - # A top-level extends is known to exist at compile time. - if self.has_known_extends: - return - - self.writeline("if parent_template is None:") - self.indent() - - finalize = self._make_finalize() - body: t.List[t.Union[t.List[t.Any], nodes.Expr]] = [] - - # Evaluate constants at compile time if possible. Each item in - # body will be either a list of static data or a node to be - # evaluated at runtime. - for child in node.nodes: - try: - if not ( - # If the finalize function requires runtime context, - # constants can't be evaluated at compile time. - finalize.const - # Unless it's basic template data that won't be - # finalized anyway. - or isinstance(child, nodes.TemplateData) - ): - raise nodes.Impossible() - - const = self._output_child_to_const(child, frame, finalize) - except (nodes.Impossible, Exception): - # The node was not constant and needs to be evaluated at - # runtime. Or another error was raised, which is easier - # to debug at runtime. 
- body.append(child) - continue - - if body and isinstance(body[-1], list): - body[-1].append(const) - else: - body.append([const]) - - if frame.buffer is not None: - if len(body) == 1: - self.writeline(f"{frame.buffer}.append(") - else: - self.writeline(f"{frame.buffer}.extend((") - - self.indent() - - for item in body: - if isinstance(item, list): - # A group of constant data to join and output. - val = self._output_const_repr(item) - - if frame.buffer is None: - self.writeline("yield " + val) - else: - self.writeline(val + ",") - else: - if frame.buffer is None: - self.writeline("yield ", item) - else: - self.newline(item) - - # A node to be evaluated at runtime. - self._output_child_pre(item, frame, finalize) - self.visit(item, frame) - self._output_child_post(item, frame, finalize) - - if frame.buffer is not None: - self.write(",") - - if frame.buffer is not None: - self.outdent() - self.writeline(")" if len(body) == 1 else "))") - - if frame.require_output_check: - self.outdent() - - def visit_Assign(self, node: nodes.Assign, frame: Frame) -> None: - self.push_assign_tracking() - self.newline(node) - self.visit(node.target, frame) - self.write(" = ") - self.visit(node.node, frame) - self.pop_assign_tracking(frame) - - def visit_AssignBlock(self, node: nodes.AssignBlock, frame: Frame) -> None: - self.push_assign_tracking() - block_frame = frame.inner() - # This is a special case. Since a set block always captures we - # will disable output checks. This way one can use set blocks - # toplevel even in extended templates. 
- block_frame.require_output_check = False - block_frame.symbols.analyze_node(node) - self.enter_frame(block_frame) - self.buffer(block_frame) - self.blockvisit(node.body, block_frame) - self.newline(node) - self.visit(node.target, frame) - self.write(" = (Markup if context.eval_ctx.autoescape else identity)(") - if node.filter is not None: - self.visit_Filter(node.filter, block_frame) - else: - self.write(f"concat({block_frame.buffer})") - self.write(")") - self.pop_assign_tracking(frame) - self.leave_frame(block_frame) - - # -- Expression Visitors - - def visit_Name(self, node: nodes.Name, frame: Frame) -> None: - if node.ctx == "store" and ( - frame.toplevel or frame.loop_frame or frame.block_frame - ): - if self._assign_stack: - self._assign_stack[-1].add(node.name) - ref = frame.symbols.ref(node.name) - - # If we are looking up a variable we might have to deal with the - # case where it's undefined. We can skip that case if the load - # instruction indicates a parameter which are always defined. 
- if node.ctx == "load": - load = frame.symbols.find_load(ref) - if not ( - load is not None - and load[0] == VAR_LOAD_PARAMETER - and not self.parameter_is_undeclared(ref) - ): - self.write( - f"(undefined(name={node.name!r}) if {ref} is missing else {ref})" - ) - return - - self.write(ref) - - def visit_NSRef(self, node: nodes.NSRef, frame: Frame) -> None: - # NSRefs can only be used to store values; since they use the normal - # `foo.bar` notation they will be parsed as a normal attribute access - # when used anywhere but in a `set` context - ref = frame.symbols.ref(node.name) - self.writeline(f"if not isinstance({ref}, Namespace):") - self.indent() - self.writeline( - "raise TemplateRuntimeError" - '("cannot assign attribute on non-namespace object")' - ) - self.outdent() - self.writeline(f"{ref}[{node.attr!r}]") - - def visit_Const(self, node: nodes.Const, frame: Frame) -> None: - val = node.as_const(frame.eval_ctx) - if isinstance(val, float): - self.write(str(val)) - else: - self.write(repr(val)) - - def visit_TemplateData(self, node: nodes.TemplateData, frame: Frame) -> None: - try: - self.write(repr(node.as_const(frame.eval_ctx))) - except nodes.Impossible: - self.write( - f"(Markup if context.eval_ctx.autoescape else identity)({node.data!r})" - ) - - def visit_Tuple(self, node: nodes.Tuple, frame: Frame) -> None: - self.write("(") - idx = -1 - for idx, item in enumerate(node.items): - if idx: - self.write(", ") - self.visit(item, frame) - self.write(",)" if idx == 0 else ")") - - def visit_List(self, node: nodes.List, frame: Frame) -> None: - self.write("[") - for idx, item in enumerate(node.items): - if idx: - self.write(", ") - self.visit(item, frame) - self.write("]") - - def visit_Dict(self, node: nodes.Dict, frame: Frame) -> None: - self.write("{") - for idx, item in enumerate(node.items): - if idx: - self.write(", ") - self.visit(item.key, frame) - self.write(": ") - self.visit(item.value, frame) - self.write("}") - - visit_Add = _make_binop("+") - 
visit_Sub = _make_binop("-") - visit_Mul = _make_binop("*") - visit_Div = _make_binop("/") - visit_FloorDiv = _make_binop("//") - visit_Pow = _make_binop("**") - visit_Mod = _make_binop("%") - visit_And = _make_binop("and") - visit_Or = _make_binop("or") - visit_Pos = _make_unop("+") - visit_Neg = _make_unop("-") - visit_Not = _make_unop("not ") - - @optimizeconst - def visit_Concat(self, node: nodes.Concat, frame: Frame) -> None: - if frame.eval_ctx.volatile: - func_name = "(markup_join if context.eval_ctx.volatile else str_join)" - elif frame.eval_ctx.autoescape: - func_name = "markup_join" - else: - func_name = "str_join" - self.write(f"{func_name}((") - for arg in node.nodes: - self.visit(arg, frame) - self.write(", ") - self.write("))") - - @optimizeconst - def visit_Compare(self, node: nodes.Compare, frame: Frame) -> None: - self.write("(") - self.visit(node.expr, frame) - for op in node.ops: - self.visit(op, frame) - self.write(")") - - def visit_Operand(self, node: nodes.Operand, frame: Frame) -> None: - self.write(f" {operators[node.op]} ") - self.visit(node.expr, frame) - - @optimizeconst - def visit_Getattr(self, node: nodes.Getattr, frame: Frame) -> None: - if self.environment.is_async: - self.write("(await auto_await(") - - self.write("environment.getattr(") - self.visit(node.node, frame) - self.write(f", {node.attr!r})") - - if self.environment.is_async: - self.write("))") - - @optimizeconst - def visit_Getitem(self, node: nodes.Getitem, frame: Frame) -> None: - # slices bypass the environment getitem method. 
- if isinstance(node.arg, nodes.Slice): - self.visit(node.node, frame) - self.write("[") - self.visit(node.arg, frame) - self.write("]") - else: - if self.environment.is_async: - self.write("(await auto_await(") - - self.write("environment.getitem(") - self.visit(node.node, frame) - self.write(", ") - self.visit(node.arg, frame) - self.write(")") - - if self.environment.is_async: - self.write("))") - - def visit_Slice(self, node: nodes.Slice, frame: Frame) -> None: - if node.start is not None: - self.visit(node.start, frame) - self.write(":") - if node.stop is not None: - self.visit(node.stop, frame) - if node.step is not None: - self.write(":") - self.visit(node.step, frame) - - @contextmanager - def _filter_test_common( - self, node: t.Union[nodes.Filter, nodes.Test], frame: Frame, is_filter: bool - ) -> t.Iterator[None]: - if self.environment.is_async: - self.write("(await auto_await(") - - if is_filter: - self.write(f"{self.filters[node.name]}(") - func = self.environment.filters.get(node.name) - else: - self.write(f"{self.tests[node.name]}(") - func = self.environment.tests.get(node.name) - - # When inside an If or CondExpr frame, allow the filter to be - # undefined at compile time and only raise an error if it's - # actually called at runtime. See pull_dependencies. - if func is None and not frame.soft_frame: - type_name = "filter" if is_filter else "test" - self.fail(f"No {type_name} named {node.name!r}.", node.lineno) - - pass_arg = { - _PassArg.context: "context", - _PassArg.eval_context: "context.eval_ctx", - _PassArg.environment: "environment", - }.get( - _PassArg.from_obj(func) # type: ignore - ) - - if pass_arg is not None: - self.write(f"{pass_arg}, ") - - # Back to the visitor function to handle visiting the target of - # the filter or test. 
- yield - - self.signature(node, frame) - self.write(")") - - if self.environment.is_async: - self.write("))") - - @optimizeconst - def visit_Filter(self, node: nodes.Filter, frame: Frame) -> None: - with self._filter_test_common(node, frame, True): - # if the filter node is None we are inside a filter block - # and want to write to the current buffer - if node.node is not None: - self.visit(node.node, frame) - elif frame.eval_ctx.volatile: - self.write( - f"(Markup(concat({frame.buffer}))" - f" if context.eval_ctx.autoescape else concat({frame.buffer}))" - ) - elif frame.eval_ctx.autoescape: - self.write(f"Markup(concat({frame.buffer}))") - else: - self.write(f"concat({frame.buffer})") - - @optimizeconst - def visit_Test(self, node: nodes.Test, frame: Frame) -> None: - with self._filter_test_common(node, frame, False): - self.visit(node.node, frame) - - @optimizeconst - def visit_CondExpr(self, node: nodes.CondExpr, frame: Frame) -> None: - frame = frame.soft() - - def write_expr2() -> None: - if node.expr2 is not None: - self.visit(node.expr2, frame) - return - - self.write( - f'cond_expr_undefined("the inline if-expression on' - f" {self.position(node)} evaluated to false and no else" - f' section was defined.")' - ) - - self.write("(") - self.visit(node.expr1, frame) - self.write(" if ") - self.visit(node.test, frame) - self.write(" else ") - write_expr2() - self.write(")") - - @optimizeconst - def visit_Call( - self, node: nodes.Call, frame: Frame, forward_caller: bool = False - ) -> None: - if self.environment.is_async: - self.write("(await auto_await(") - if self.environment.sandboxed: - self.write("environment.call(context, ") - else: - self.write("context.call(") - self.visit(node.node, frame) - extra_kwargs = {"caller": "caller"} if forward_caller else None - loop_kwargs = {"_loop_vars": "_loop_vars"} if frame.loop_frame else {} - block_kwargs = {"_block_vars": "_block_vars"} if frame.block_frame else {} - if extra_kwargs: - 
extra_kwargs.update(loop_kwargs, **block_kwargs) - elif loop_kwargs or block_kwargs: - extra_kwargs = dict(loop_kwargs, **block_kwargs) - self.signature(node, frame, extra_kwargs) - self.write(")") - if self.environment.is_async: - self.write("))") - - def visit_Keyword(self, node: nodes.Keyword, frame: Frame) -> None: - self.write(node.key + "=") - self.visit(node.value, frame) - - # -- Unused nodes for extensions - - def visit_MarkSafe(self, node: nodes.MarkSafe, frame: Frame) -> None: - self.write("Markup(") - self.visit(node.expr, frame) - self.write(")") - - def visit_MarkSafeIfAutoescape( - self, node: nodes.MarkSafeIfAutoescape, frame: Frame - ) -> None: - self.write("(Markup if context.eval_ctx.autoescape else identity)(") - self.visit(node.expr, frame) - self.write(")") - - def visit_EnvironmentAttribute( - self, node: nodes.EnvironmentAttribute, frame: Frame - ) -> None: - self.write("environment." + node.name) - - def visit_ExtensionAttribute( - self, node: nodes.ExtensionAttribute, frame: Frame - ) -> None: - self.write(f"environment.extensions[{node.identifier!r}].{node.name}") - - def visit_ImportedName(self, node: nodes.ImportedName, frame: Frame) -> None: - self.write(self.import_aliases[node.importname]) - - def visit_InternalName(self, node: nodes.InternalName, frame: Frame) -> None: - self.write(node.name) - - def visit_ContextReference( - self, node: nodes.ContextReference, frame: Frame - ) -> None: - self.write("context") - - def visit_DerivedContextReference( - self, node: nodes.DerivedContextReference, frame: Frame - ) -> None: - self.write(self.derive_context(frame)) - - def visit_Continue(self, node: nodes.Continue, frame: Frame) -> None: - self.writeline("continue", node) - - def visit_Break(self, node: nodes.Break, frame: Frame) -> None: - self.writeline("break", node) - - def visit_Scope(self, node: nodes.Scope, frame: Frame) -> None: - scope_frame = frame.inner() - scope_frame.symbols.analyze_node(node) - self.enter_frame(scope_frame) - 
self.blockvisit(node.body, scope_frame) - self.leave_frame(scope_frame) - - def visit_OverlayScope(self, node: nodes.OverlayScope, frame: Frame) -> None: - ctx = self.temporary_identifier() - self.writeline(f"{ctx} = {self.derive_context(frame)}") - self.writeline(f"{ctx}.vars = ") - self.visit(node.context, frame) - self.push_context_reference(ctx) - - scope_frame = frame.inner(isolated=True) - scope_frame.symbols.analyze_node(node) - self.enter_frame(scope_frame) - self.blockvisit(node.body, scope_frame) - self.leave_frame(scope_frame) - self.pop_context_reference() - - def visit_EvalContextModifier( - self, node: nodes.EvalContextModifier, frame: Frame - ) -> None: - for keyword in node.options: - self.writeline(f"context.eval_ctx.{keyword.key} = ") - self.visit(keyword.value, frame) - try: - val = keyword.value.as_const(frame.eval_ctx) - except nodes.Impossible: - frame.eval_ctx.volatile = True - else: - setattr(frame.eval_ctx, keyword.key, val) - - def visit_ScopedEvalContextModifier( - self, node: nodes.ScopedEvalContextModifier, frame: Frame - ) -> None: - old_ctx_name = self.temporary_identifier() - saved_ctx = frame.eval_ctx.save() - self.writeline(f"{old_ctx_name} = context.eval_ctx.save()") - self.visit_EvalContextModifier(node, frame) - for child in node.body: - self.visit(child, frame) - frame.eval_ctx.revert(saved_ctx) - self.writeline(f"context.eval_ctx.revert({old_ctx_name})") diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backend_managers.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backend_managers.py deleted file mode 100644 index ac74ff97a4e81dab2663e0b6f12f785e493c0d1f..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backend_managers.py +++ /dev/null @@ -1,398 +0,0 @@ -from matplotlib import _api, backend_tools, cbook, widgets - - -class ToolEvent: - """Event for tool 
manipulation (add/remove).""" - def __init__(self, name, sender, tool, data=None): - self.name = name - self.sender = sender - self.tool = tool - self.data = data - - -class ToolTriggerEvent(ToolEvent): - """Event to inform that a tool has been triggered.""" - def __init__(self, name, sender, tool, canvasevent=None, data=None): - super().__init__(name, sender, tool, data) - self.canvasevent = canvasevent - - -class ToolManagerMessageEvent: - """ - Event carrying messages from toolmanager. - - Messages usually get displayed to the user by the toolbar. - """ - def __init__(self, name, sender, message): - self.name = name - self.sender = sender - self.message = message - - -class ToolManager: - """ - Manager for actions triggered by user interactions (key press, toolbar - clicks, ...) on a Figure. - - Attributes - ---------- - figure : `.Figure` - keypresslock : `~matplotlib.widgets.LockDraw` - `.LockDraw` object to know if the `canvas` key_press_event is locked. - messagelock : `~matplotlib.widgets.LockDraw` - `.LockDraw` object to know if the message is available to write. - """ - - def __init__(self, figure=None): - - self._key_press_handler_id = None - - self._tools = {} - self._keys = {} - self._toggled = {} - self._callbacks = cbook.CallbackRegistry() - - # to process keypress event - self.keypresslock = widgets.LockDraw() - self.messagelock = widgets.LockDraw() - - self._figure = None - self.set_figure(figure) - - @property - def canvas(self): - """Canvas managed by FigureManager.""" - if not self._figure: - return None - return self._figure.canvas - - @property - def figure(self): - """Figure that holds the canvas.""" - return self._figure - - @figure.setter - def figure(self, figure): - self.set_figure(figure) - - def set_figure(self, figure, update_tools=True): - """ - Bind the given figure to the tools. - - Parameters - ---------- - figure : `.Figure` - update_tools : bool, default: True - Force tools to update figure. 
- """ - if self._key_press_handler_id: - self.canvas.mpl_disconnect(self._key_press_handler_id) - self._figure = figure - if figure: - self._key_press_handler_id = self.canvas.mpl_connect( - 'key_press_event', self._key_press) - if update_tools: - for tool in self._tools.values(): - tool.figure = figure - - def toolmanager_connect(self, s, func): - """ - Connect event with string *s* to *func*. - - Parameters - ---------- - s : str - The name of the event. The following events are recognized: - - - 'tool_message_event' - - 'tool_removed_event' - - 'tool_added_event' - - For every tool added a new event is created - - - 'tool_trigger_TOOLNAME', where TOOLNAME is the id of the tool. - - func : callable - Callback function for the toolmanager event with signature:: - - def func(event: ToolEvent) -> Any - - Returns - ------- - cid - The callback id for the connection. This can be used in - `.toolmanager_disconnect`. - """ - return self._callbacks.connect(s, func) - - def toolmanager_disconnect(self, cid): - """ - Disconnect callback id *cid*. - - Example usage:: - - cid = toolmanager.toolmanager_connect('tool_trigger_zoom', onpress) - #...later - toolmanager.toolmanager_disconnect(cid) - """ - return self._callbacks.disconnect(cid) - - def message_event(self, message, sender=None): - """Emit a `ToolManagerMessageEvent`.""" - if sender is None: - sender = self - - s = 'tool_message_event' - event = ToolManagerMessageEvent(s, sender, message) - self._callbacks.process(s, event) - - @property - def active_toggle(self): - """Currently toggled tools.""" - return self._toggled - - def get_tool_keymap(self, name): - """ - Return the keymap associated with the specified tool. - - Parameters - ---------- - name : str - Name of the Tool. - - Returns - ------- - list of str - List of keys associated with the tool. 
- """ - - keys = [k for k, i in self._keys.items() if i == name] - return keys - - def _remove_keys(self, name): - for k in self.get_tool_keymap(name): - del self._keys[k] - - def update_keymap(self, name, key): - """ - Set the keymap to associate with the specified tool. - - Parameters - ---------- - name : str - Name of the Tool. - key : str or list of str - Keys to associate with the tool. - """ - if name not in self._tools: - raise KeyError(f'{name!r} not in Tools') - self._remove_keys(name) - if isinstance(key, str): - key = [key] - for k in key: - if k in self._keys: - _api.warn_external( - f'Key {k} changed from {self._keys[k]} to {name}') - self._keys[k] = name - - def remove_tool(self, name): - """ - Remove tool named *name*. - - Parameters - ---------- - name : str - Name of the tool. - """ - - tool = self.get_tool(name) - destroy = _api.deprecate_method_override( - backend_tools.ToolBase.destroy, tool, since="3.6", - alternative="tool_removed_event") - if destroy is not None: - destroy() - - # If it's a toggle tool and toggled, untoggle - if getattr(tool, 'toggled', False): - self.trigger_tool(tool, 'toolmanager') - - self._remove_keys(name) - - event = ToolEvent('tool_removed_event', self, tool) - self._callbacks.process(event.name, event) - - del self._tools[name] - - def add_tool(self, name, tool, *args, **kwargs): - """ - Add *tool* to `ToolManager`. - - If successful, adds a new event ``tool_trigger_{name}`` where - ``{name}`` is the *name* of the tool; the event is fired every time the - tool is triggered. - - Parameters - ---------- - name : str - Name of the tool, treated as the ID, has to be unique. - tool : type - Class of the tool to be added. A subclass will be used - instead if one was registered for the current canvas class. - *args, **kwargs - Passed to the *tool*'s constructor. - - See Also - -------- - matplotlib.backend_tools.ToolBase : The base class for tools. 
- """ - - tool_cls = backend_tools._find_tool_class(type(self.canvas), tool) - if not tool_cls: - raise ValueError('Impossible to find class for %s' % str(tool)) - - if name in self._tools: - _api.warn_external('A "Tool class" with the same name already ' - 'exists, not added') - return self._tools[name] - - tool_obj = tool_cls(self, name, *args, **kwargs) - self._tools[name] = tool_obj - - if tool_obj.default_keymap is not None: - self.update_keymap(name, tool_obj.default_keymap) - - # For toggle tools init the radio_group in self._toggled - if isinstance(tool_obj, backend_tools.ToolToggleBase): - # None group is not mutually exclusive, a set is used to keep track - # of all toggled tools in this group - if tool_obj.radio_group is None: - self._toggled.setdefault(None, set()) - else: - self._toggled.setdefault(tool_obj.radio_group, None) - - # If initially toggled - if tool_obj.toggled: - self._handle_toggle(tool_obj, None, None) - tool_obj.set_figure(self.figure) - - event = ToolEvent('tool_added_event', self, tool_obj) - self._callbacks.process(event.name, event) - - return tool_obj - - def _handle_toggle(self, tool, canvasevent, data): - """ - Toggle tools, need to untoggle prior to using other Toggle tool. - Called from trigger_tool. - - Parameters - ---------- - tool : `.ToolBase` - canvasevent : Event - Original Canvas event or None. - data : object - Extra data to pass to the tool when triggering. 
- """ - - radio_group = tool.radio_group - # radio_group None is not mutually exclusive - # just keep track of toggled tools in this group - if radio_group is None: - if tool.name in self._toggled[None]: - self._toggled[None].remove(tool.name) - else: - self._toggled[None].add(tool.name) - return - - # If the tool already has a toggled state, untoggle it - if self._toggled[radio_group] == tool.name: - toggled = None - # If no tool was toggled in the radio_group - # toggle it - elif self._toggled[radio_group] is None: - toggled = tool.name - # Other tool in the radio_group is toggled - else: - # Untoggle previously toggled tool - self.trigger_tool(self._toggled[radio_group], - self, - canvasevent, - data) - toggled = tool.name - - # Keep track of the toggled tool in the radio_group - self._toggled[radio_group] = toggled - - def trigger_tool(self, name, sender=None, canvasevent=None, data=None): - """ - Trigger a tool and emit the ``tool_trigger_{name}`` event. - - Parameters - ---------- - name : str - Name of the tool. - sender : object - Object that wishes to trigger the tool. - canvasevent : Event - Original Canvas event or None. - data : object - Extra data to pass to the tool when triggering. - """ - tool = self.get_tool(name) - if tool is None: - return - - if sender is None: - sender = self - - if isinstance(tool, backend_tools.ToolToggleBase): - self._handle_toggle(tool, canvasevent, data) - - tool.trigger(sender, canvasevent, data) # Actually trigger Tool. 
- -        s = 'tool_trigger_%s' % name -        event = ToolTriggerEvent(s, sender, tool, canvasevent, data) -        self._callbacks.process(s, event) - -    def _key_press(self, event): -        if event.key is None or self.keypresslock.locked(): -            return - -        name = self._keys.get(event.key, None) -        if name is None: -            return -        self.trigger_tool(name, canvasevent=event) - -    @property -    def tools(self): -        """A dict mapping tool name -> controlled tool.""" -        return self._tools - -    def get_tool(self, name, warn=True): -        """ -        Return the tool object with the given name. - -        For convenience, this passes tool objects through. - -        Parameters -        ---------- -        name : str or `.ToolBase` -            Name of the tool, or the tool itself. -        warn : bool, default: True -            Whether a warning should be emitted if no tool with the given name -            exists. - -        Returns -        ------- -        `.ToolBase` or None -            The tool or None if no tool with the given name exists. -        """ -        if (isinstance(name, backend_tools.ToolBase) -                and name.name in self._tools): -            return name -        if name not in self._tools: -            if warn: -                _api.warn_external( -                    f"ToolManager does not control tool {name!r}") -            return None -        return self._tools[name] diff --git a/spaces/legoandmars/glide-inpainting/glide_text2im/model_creation.py b/spaces/legoandmars/glide-inpainting/glide_text2im/model_creation.py deleted file mode 100644 index 54c37c24546fe0c8e4b22ea903c7039b21da4f4f..0000000000000000000000000000000000000000 --- a/spaces/legoandmars/glide-inpainting/glide_text2im/model_creation.py +++ /dev/null @@ -1,195 +0,0 @@ -from glide_text2im.gaussian_diffusion import get_named_beta_schedule -from glide_text2im.respace import SpacedDiffusion, space_timesteps -from glide_text2im.text2im_model import ( -    InpaintText2ImUNet, -    SuperResInpaintText2ImUnet, -    SuperResText2ImUNet, -    Text2ImUNet, -) -from glide_text2im.tokenizer.bpe import get_encoder - - -def model_and_diffusion_defaults(): -    return dict( -        image_size=64, -        num_channels=192, -        num_res_blocks=3, -        channel_mult="", -        
num_heads=1, - num_head_channels=64, - num_heads_upsample=-1, - attention_resolutions="32,16,8", - dropout=0.1, - text_ctx=128, - xf_width=512, - xf_layers=16, - xf_heads=8, - xf_final_ln=True, - xf_padding=True, - diffusion_steps=1000, - noise_schedule="squaredcos_cap_v2", - timestep_respacing="", - use_scale_shift_norm=True, - resblock_updown=True, - use_fp16=True, - cache_text_emb=False, - inpaint=False, - super_res=False, - ) - - -def model_and_diffusion_defaults_upsampler(): - result = model_and_diffusion_defaults() - result.update( - dict( - image_size=256, - num_res_blocks=2, - noise_schedule="linear", - super_res=True, - ) - ) - return result - - -def create_model_and_diffusion( - image_size, - num_channels, - num_res_blocks, - channel_mult, - num_heads, - num_head_channels, - num_heads_upsample, - attention_resolutions, - dropout, - text_ctx, - xf_width, - xf_layers, - xf_heads, - xf_final_ln, - xf_padding, - diffusion_steps, - noise_schedule, - timestep_respacing, - use_scale_shift_norm, - resblock_updown, - use_fp16, - cache_text_emb, - inpaint, - super_res, -): - model = create_model( - image_size, - num_channels, - num_res_blocks, - channel_mult=channel_mult, - attention_resolutions=attention_resolutions, - num_heads=num_heads, - num_head_channels=num_head_channels, - num_heads_upsample=num_heads_upsample, - use_scale_shift_norm=use_scale_shift_norm, - dropout=dropout, - text_ctx=text_ctx, - xf_width=xf_width, - xf_layers=xf_layers, - xf_heads=xf_heads, - xf_final_ln=xf_final_ln, - xf_padding=xf_padding, - resblock_updown=resblock_updown, - use_fp16=use_fp16, - cache_text_emb=cache_text_emb, - inpaint=inpaint, - super_res=super_res, - ) - diffusion = create_gaussian_diffusion( - steps=diffusion_steps, - noise_schedule=noise_schedule, - timestep_respacing=timestep_respacing, - ) - return model, diffusion - - -def create_model( - image_size, - num_channels, - num_res_blocks, - channel_mult, - attention_resolutions, - num_heads, - num_head_channels, - 
num_heads_upsample, - use_scale_shift_norm, - dropout, - text_ctx, - xf_width, - xf_layers, - xf_heads, - xf_final_ln, - xf_padding, - resblock_updown, - use_fp16, - cache_text_emb, - inpaint, - super_res, -): - if channel_mult == "": - if image_size == 256: - channel_mult = (1, 1, 2, 2, 4, 4) - elif image_size == 128: - channel_mult = (1, 1, 2, 3, 4) - elif image_size == 64: - channel_mult = (1, 2, 3, 4) - else: - raise ValueError(f"unsupported image size: {image_size}") - else: - channel_mult = tuple(int(ch_mult) for ch_mult in channel_mult.split(",")) - assert 2 ** (len(channel_mult) + 2) == image_size - - attention_ds = [] - for res in attention_resolutions.split(","): - attention_ds.append(image_size // int(res)) - - if inpaint and super_res: - model_cls = SuperResInpaintText2ImUnet - elif inpaint: - model_cls = InpaintText2ImUNet - elif super_res: - model_cls = SuperResText2ImUNet - else: - model_cls = Text2ImUNet - return model_cls( - text_ctx=text_ctx, - xf_width=xf_width, - xf_layers=xf_layers, - xf_heads=xf_heads, - xf_final_ln=xf_final_ln, - tokenizer=get_encoder(), - xf_padding=xf_padding, - in_channels=3, - model_channels=num_channels, - out_channels=6, - num_res_blocks=num_res_blocks, - attention_resolutions=tuple(attention_ds), - dropout=dropout, - channel_mult=channel_mult, - use_fp16=use_fp16, - num_heads=num_heads, - num_head_channels=num_head_channels, - num_heads_upsample=num_heads_upsample, - use_scale_shift_norm=use_scale_shift_norm, - resblock_updown=resblock_updown, - cache_text_emb=cache_text_emb, - ) - - -def create_gaussian_diffusion( - steps, - noise_schedule, - timestep_respacing, -): - betas = get_named_beta_schedule(noise_schedule, steps) - if not timestep_respacing: - timestep_respacing = [steps] - return SpacedDiffusion( - use_timesteps=space_timesteps(steps, timestep_respacing), - betas=betas, - ) diff --git a/spaces/leilevy/bingo/src/lib/bots/bing/tts.ts b/spaces/leilevy/bingo/src/lib/bots/bing/tts.ts deleted file mode 100644 
index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000 --- a/spaces/leilevy/bingo/src/lib/bots/bing/tts.ts +++ /dev/null @@ -1,82 +0,0 @@ -import { sleep } from './utils' - -const synth = window.speechSynthesis - -export class TTS { - currentText = '' - speakText = '' - private controller = new AbortController() - speaking = false - get isSpeaking() { - return this.speaking - } - finished = false - constructor() {} - abort = () => { - this.controller.abort() - } - - reset = () => { - this.speaking = false - this.finished = true - this.currentText = '' - this.speakText = '' - this.abort() - } - - speak = (text: string) => { - if (!synth || text?.trim()?.length < 2) { - return - } - this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '') - this.finished = false - this.loop() - } - - private async doSpeek() { - return new Promise((resolve) => { - const endIndex = this.finished ? this.currentText.length : - Math.max( - this.currentText.lastIndexOf('。'), - this.currentText.lastIndexOf(';'), - this.currentText.lastIndexOf('、'), - this.currentText.lastIndexOf('?'), - this.currentText.lastIndexOf('\n') - ) - const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0 - - if (startIndex >= endIndex) { - return resolve(true) - } - const text = this.currentText.slice(startIndex, endIndex) - this.speakText = text - const utterThis = new SpeechSynthesisUtterance(text) - this.controller.signal.onabort = () => { - synth.cancel() - this.finished = true - resolve(false) - } - - utterThis.onend = function (event) { - resolve(true) - } - - utterThis.onerror = function (event) { - resolve(false) - } - - const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? 
null -    utterThis.voice = voice -    synth.speak(utterThis) -   }) -  } - -  private async loop() { -    if (this.speaking) return -    this.speaking = true -    while(!this.finished) { -      await Promise.all([sleep(1000), this.doSpeek()]) -    } -    this.speaking = false -  } -} diff --git a/spaces/lewiswu1209/MockingBird/ppg_extractor/encoder/multi_layer_conv.py b/spaces/lewiswu1209/MockingBird/ppg_extractor/encoder/multi_layer_conv.py deleted file mode 100644 index fdb7fe70810eda54c727367efc986ce02ce581cc..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/ppg_extractor/encoder/multi_layer_conv.py +++ /dev/null @@ -1,105 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2019 Tomoki Hayashi -#  Apache 2.0  (http://www.apache.org/licenses/LICENSE-2.0) - -"""Layer modules for FFT block in FastSpeech (Feed-forward Transformer).""" - -import torch - - -class MultiLayeredConv1d(torch.nn.Module): -    """Multi-layered conv1d for Transformer block. - -    This is a module of multi-layered conv1d designed -    to replace positionwise feed-forward network -    in Transformer block, which is introduced in -    `FastSpeech: Fast, Robust and Controllable Text to Speech`_. - -    .. _`FastSpeech: Fast, Robust and Controllable Text to Speech`: -        https://arxiv.org/pdf/1905.09263.pdf - -    """ - -    def __init__(self, in_chans, hidden_chans, kernel_size, dropout_rate): -        """Initialize MultiLayeredConv1d module. - -        Args: -            in_chans (int): Number of input channels. -            hidden_chans (int): Number of hidden channels. -            kernel_size (int): Kernel size of conv1d. -            dropout_rate (float): Dropout rate. 
- -        """ -        super(MultiLayeredConv1d, self).__init__() -        self.w_1 = torch.nn.Conv1d( -            in_chans, -            hidden_chans, -            kernel_size, -            stride=1, -            padding=(kernel_size - 1) // 2, -        ) -        self.w_2 = torch.nn.Conv1d( -            hidden_chans, -            in_chans, -            kernel_size, -            stride=1, -            padding=(kernel_size - 1) // 2, -        ) -        self.dropout = torch.nn.Dropout(dropout_rate) - -    def forward(self, x): -        """Calculate forward propagation. - -        Args: -            x (Tensor): Batch of input tensors (B, ..., in_chans). - -        Returns: -            Tensor: Batch of output tensors (B, ..., in_chans). - -        """ -        x = torch.relu(self.w_1(x.transpose(-1, 1))).transpose(-1, 1) -        return self.w_2(self.dropout(x).transpose(-1, 1)).transpose(-1, 1) - - -class Conv1dLinear(torch.nn.Module): -    """Conv1D + Linear for Transformer block. - -    A variant of MultiLayeredConv1d, which replaces the second conv layer with a linear layer. - -    """ - -    def __init__(self, in_chans, hidden_chans, kernel_size, dropout_rate): -        """Initialize Conv1dLinear module. - -        Args: -            in_chans (int): Number of input channels. -            hidden_chans (int): Number of hidden channels. -            kernel_size (int): Kernel size of conv1d. -            dropout_rate (float): Dropout rate. - -        """ -        super(Conv1dLinear, self).__init__() -        self.w_1 = torch.nn.Conv1d( -            in_chans, -            hidden_chans, -            kernel_size, -            stride=1, -            padding=(kernel_size - 1) // 2, -        ) -        self.w_2 = torch.nn.Linear(hidden_chans, in_chans) -        self.dropout = torch.nn.Dropout(dropout_rate) - -    def forward(self, x): -        """Calculate forward propagation. - -        Args: -            x (Tensor): Batch of input tensors (B, ..., in_chans). - -        Returns: -            Tensor: Batch of output tensors (B, ..., in_chans). 
- - """ - x = torch.relu(self.w_1(x.transpose(-1, 1))).transpose(-1, 1) - return self.w_2(self.dropout(x)) diff --git a/spaces/lewiswu1209/MockingBird/synthesizer/models/sublayer/cbhg.py b/spaces/lewiswu1209/MockingBird/synthesizer/models/sublayer/cbhg.py deleted file mode 100644 index 10eb6bb85dd2a1711fe7c92ec77bbaaf786f7a53..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/synthesizer/models/sublayer/cbhg.py +++ /dev/null @@ -1,85 +0,0 @@ -import torch -import torch.nn as nn -from .common.batch_norm_conv import BatchNormConv -from .common.highway_network import HighwayNetwork - -class CBHG(nn.Module): - def __init__(self, K, in_channels, channels, proj_channels, num_highways): - super().__init__() - - # List of all rnns to call `flatten_parameters()` on - self._to_flatten = [] - - self.bank_kernels = [i for i in range(1, K + 1)] - self.conv1d_bank = nn.ModuleList() - for k in self.bank_kernels: - conv = BatchNormConv(in_channels, channels, k) - self.conv1d_bank.append(conv) - - self.maxpool = nn.MaxPool1d(kernel_size=2, stride=1, padding=1) - - self.conv_project1 = BatchNormConv(len(self.bank_kernels) * channels, proj_channels[0], 3) - self.conv_project2 = BatchNormConv(proj_channels[0], proj_channels[1], 3, relu=False) - - # Fix the highway input if necessary - if proj_channels[-1] != channels: - self.highway_mismatch = True - self.pre_highway = nn.Linear(proj_channels[-1], channels, bias=False) - else: - self.highway_mismatch = False - - self.highways = nn.ModuleList() - for i in range(num_highways): - hn = HighwayNetwork(channels) - self.highways.append(hn) - - self.rnn = nn.GRU(channels, channels // 2, batch_first=True, bidirectional=True) - self._to_flatten.append(self.rnn) - - # Avoid fragmentation of RNN parameters and associated warning - self._flatten_parameters() - - def forward(self, x): - # Although we `_flatten_parameters()` on init, when using DataParallel - # the model gets replicated, making it no longer guaranteed 
that the - # weights are contiguous in GPU memory. Hence, we must call it again - self.rnn.flatten_parameters() - - # Save these for later - residual = x - seq_len = x.size(-1) - conv_bank = [] - - # Convolution Bank - for conv in self.conv1d_bank: - c = conv(x) # Convolution - conv_bank.append(c[:, :, :seq_len]) - - # Stack along the channel axis - conv_bank = torch.cat(conv_bank, dim=1) - - # dump the last padding to fit residual - x = self.maxpool(conv_bank)[:, :, :seq_len] - - # Conv1d projections - x = self.conv_project1(x) - x = self.conv_project2(x) - - # Residual Connect - x = x + residual - - # Through the highways - x = x.transpose(1, 2) - if self.highway_mismatch is True: - x = self.pre_highway(x) - for h in self.highways: x = h(x) - - # And then the RNN - x, _ = self.rnn(x) - return x - - def _flatten_parameters(self): - """Calls `flatten_parameters` on all the rnns used by the WaveRNN. Used - to improve efficiency and avoid PyTorch yelling at us.""" - [m.flatten_parameters() for m in self._to_flatten] - diff --git a/spaces/lifan0127/zotero-qa/models.py b/spaces/lifan0127/zotero-qa/models.py deleted file mode 100644 index 73824d13148658c1a9e00fffe79abad8d0ca225a..0000000000000000000000000000000000000000 --- a/spaces/lifan0127/zotero-qa/models.py +++ /dev/null @@ -1,72 +0,0 @@ -from enum import Enum - - -class Icons(Enum): - def __str__(self): - return str(self.value) - DOC = "📄" - ERR = "❌" - INDEX = "🗄️" - INFO = "ℹ️" - OK = "👌" - SUCCESS = "✅" - WAIT = "⌛" - WARN = "⚠️" - - -class Message(): - def __init__(self, icon, content): - self.icon = icon - self.content = content - - def __str__(self): - return f"{self.icon} {self.content}" - - -class Messages(): - def __init__(self, messages=[]): - self.messages = messages - - def __str__(self): - return f""" -
    - {("").join([f"
    {x}
    " for x in self.messages])} -
    - """ - - def append(self, new_message): - self.messages.append(new_message) - - def set(self, messages): - self.messages = messages - -# class Message(): - -# def standing_by(self): -# return "
    👌 Standing by...
    " - -# def not_ready(self): -# return """ -#
    -# You have to select a Zotero collection to proceed. -#
    -# """ - -# def openai_api_key(self): -# return """ -#
    -# OpenAI API key is either missing or incorrect. -#
    -# """ - -# def use_queries(queries): -# query_str = ", ".join([f"{q}" for q in queries]) -# return f"
    Search your Zotero collection for {query_str}" - - -# def update_status(messages): -# return gr.HTML.update(f""" -#
    -# {("").join(messages)} -#
    -# """) diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Delphi Decompiler Full Crack 14.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Delphi Decompiler Full Crack 14.md deleted file mode 100644 index 445c3fc36a59297b20d0285c3e66175ccef9d5ff..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Delphi Decompiler Full Crack 14.md +++ /dev/null @@ -1,8 +0,0 @@ - -

    Annotate decompiled code without leaving source. The precise decompiler for Delphi offers more capabilities and features than the free and open source decompiler I discuss here. While the free tool may be sufficient for the casual user, the commercial decompiler offers more features and support, but it's not free. If you're serious about your project, DeDe might be right for you.

    Try for free for 30 days and if you're not comfortable with the result, purchase a license for additional functionality. I found my license to be quite reasonable, at a price slightly higher than a cup of coffee. See for yourself why I like this tool, and if you like it, try the more complete version for your project.

    -

    Delphi is the most popular development environment for the Microsoft Windows operating system and is, therefore, used by many developers. While Delphi is not open source, it does offer free and open source tools for programmers with the Delphi IDE.

    -

    delphi decompiler full crack 14


    DOWNLOAD ••• https://bytlly.com/2uGw5e



    -

    The first version of NWN with it's use of DIGICODE C, was released in 1996 and while it was largely considered to be a commercial, free alternative to NWN Builder, it was released under the GNU General Public License. Since then, two "official" versions of the NWN decompiler have been released, one in 2001 and another in 2003. There's also a third version, called NWN Recompiler and which has been discontinued. However, third party vendors such as Jeremy Barnes and The Crazy Bob's have also released unofficial versions of the NWN decompiler.

    -

    Regardless of whether you're a hobbyist or professional, the first thing you need to know about the Delphi IDE is that you must be patient. Adding a unit makes the IDE crawl at the best of times, so your first units may take a little longer than usual to load. However, once you're finally ready to start coding, you'll see that the IDE runs very smoothly.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Fifa 15 Without Origin Crack Download __TOP__.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Fifa 15 Without Origin Crack Download __TOP__.md deleted file mode 100644 index b641ddc304fccb869a4bb0b461b7064fd39650bd..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Fifa 15 Without Origin Crack Download __TOP__.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Fifa 15 Without Origin Crack Download


    DOWNLOADhttps://bytlly.com/2uGw8U



    -
    -Topics How To Play FIFA 15 Without Origin | 3DM Crack v2 With Crash Fix | March 2015, How To Play FIFA 15 Without Origin,. I found a new source to download ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/liuyuan-pal/SyncDreamer/ldm/models/diffusion/sync_dreamer_utils.py b/spaces/liuyuan-pal/SyncDreamer/ldm/models/diffusion/sync_dreamer_utils.py deleted file mode 100644 index c401c745f498d4fe5435a0e6bea3eedf95c46e29..0000000000000000000000000000000000000000 --- a/spaces/liuyuan-pal/SyncDreamer/ldm/models/diffusion/sync_dreamer_utils.py +++ /dev/null @@ -1,103 +0,0 @@ -import torch -from kornia import create_meshgrid - - -def project_and_normalize(ref_grid, src_proj, length): - """ - - @param ref_grid: b 3 n - @param src_proj: b 4 4 - @param length: int - @return: b, n, 2 - """ - src_grid = src_proj[:, :3, :3] @ ref_grid + src_proj[:, :3, 3:] # b 3 n - div_val = src_grid[:, -1:] - div_val[div_val<1e-4] = 1e-4 - src_grid = src_grid[:, :2] / div_val # divide by depth (b, 2, n) - src_grid[:, 0] = src_grid[:, 0]/((length - 1) / 2) - 1 # scale to -1~1 - src_grid[:, 1] = src_grid[:, 1]/((length - 1) / 2) - 1 # scale to -1~1 - src_grid = src_grid.permute(0, 2, 1) # (b, n, 2) - return src_grid - - -def construct_project_matrix(x_ratio, y_ratio, Ks, poses): - """ - @param x_ratio: float - @param y_ratio: float - @param Ks: b,3,3 - @param poses: b,3,4 - @return: - """ - rfn = Ks.shape[0] - scale_m = torch.tensor([x_ratio, y_ratio, 1.0], dtype=torch.float32, device=Ks.device) - scale_m = torch.diag(scale_m) - ref_prj = scale_m[None, :, :] @ Ks @ poses # rfn,3,4 - pad_vals = torch.zeros([rfn, 1, 4], dtype=torch.float32, device=ref_prj.device) - pad_vals[:, :, 3] = 1.0 - ref_prj = torch.cat([ref_prj, pad_vals], 1) # rfn,4,4 - return ref_prj - -def get_warp_coordinates(volume_xyz, warp_size, input_size, Ks, warp_pose): - B, _, D, H, W = volume_xyz.shape - ratio = warp_size / input_size - warp_proj = construct_project_matrix(ratio, ratio, Ks, warp_pose) # B,4,4 - warp_coords = project_and_normalize(volume_xyz.view(B,3,D*H*W), warp_proj, warp_size).view(B, D, H, W, 2) - return warp_coords - - -def create_target_volume(depth_size, volume_size, 
input_image_size, pose_target, K, near=None, far=None): -    device, dtype = pose_target.device, pose_target.dtype - -    # compute a depth range on the unit sphere -    H, W, D, B = volume_size, volume_size, depth_size, pose_target.shape[0] -    if near is not None and far is not None : -        # near, far b,1,h,w -        depth_values = torch.linspace(0, 1, steps=depth_size).to(near.device).to(near.dtype)  # d -        depth_values = depth_values.view(1, D, 1, 1)  # 1,d,1,1 -        depth_values = depth_values * (far - near) + near  # b d h w -        depth_values = depth_values.view(B, 1, D, H * W) -    else: -        near, far = near_far_from_unit_sphere_using_camera_poses(pose_target) # b 1 -        depth_values = torch.linspace(0, 1, steps=depth_size).to(near.device).to(near.dtype) # d -        depth_values = depth_values[None,:,None] * (far[:,None,:] - near[:,None,:]) + near[:,None,:] # b d 1 -        depth_values = depth_values.view(B, 1, D, 1).expand(B, 1, D, H*W) - -    ratio = volume_size / input_image_size - -    # create a grid on the target (reference) view -    # H, W, D, B = volume_size, volume_size, depth_values.shape[1], depth_values.shape[0] - -    # create mesh grid: note reference also means target -    ref_grid = create_meshgrid(H, W, normalized_coordinates=False)  # (1, H, W, 2) -    ref_grid = ref_grid.to(device).to(dtype) -    ref_grid = ref_grid.permute(0, 3, 1, 2)  # (1, 2, H, W) -    ref_grid = ref_grid.reshape(1, 2, H*W)  # (1, 2, H*W) -    ref_grid = ref_grid.expand(B, -1, -1)  # (B, 2, H*W) -    ref_grid = torch.cat((ref_grid, torch.ones(B, 1, H*W, dtype=ref_grid.dtype, device=ref_grid.device)), dim=1)  # (B, 3, H*W) -    ref_grid = ref_grid.unsqueeze(2) * depth_values  # (B, 3, D, H*W) - -    # unproject to space and transform to world coordinates. 
- Ks = K - ref_proj = construct_project_matrix(ratio, ratio, Ks, pose_target) # B,4,4 - ref_proj_inv = torch.inverse(ref_proj) # B,4,4 - ref_grid = ref_proj_inv[:,:3,:3] @ ref_grid.view(B,3,D*H*W) + ref_proj_inv[:,:3,3:] # B,3,3 @ B,3,DHW + B,3,1 => B,3,DHW - return ref_grid.reshape(B,3,D,H,W), depth_values.view(B,1,D,H,W) - -def near_far_from_unit_sphere_using_camera_poses(camera_poses): - """ - @param camera_poses: b 3 4 - @return: - near: b,1 - far: b,1 - """ - R_w2c = camera_poses[..., :3, :3] # b 3 3 - t_w2c = camera_poses[..., :3, 3:] # b 3 1 - camera_origin = -R_w2c.permute(0,2,1) @ t_w2c # b 3 1 - # R_w2c.T @ (0,0,1) = z_dir - camera_orient = R_w2c.permute(0,2,1)[...,:3,2:3] # b 3 1 - camera_origin, camera_orient = camera_origin[...,0], camera_orient[..., 0] # b 3 - a = torch.sum(camera_orient ** 2, dim=-1, keepdim=True) # b 1 - b = -torch.sum(camera_orient * camera_origin, dim=-1, keepdim=True) # b 1 - mid = b / a # b 1 - near, far = mid - 1.0, mid + 1.0 - return near, far \ No newline at end of file diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/infer_gt_mel.py b/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/infer_gt_mel.py deleted file mode 100644 index 033b821a5d21a1232f1786bce5616b12e01488ad..0000000000000000000000000000000000000000 --- a/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/infer_gt_mel.py +++ /dev/null @@ -1,74 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -from diffusion.unit2mel import load_model_vocoder - - -class DiffGtMel: - def __init__(self, project_path=None, device=None): - self.project_path = project_path - if device is not None: - self.device = device - else: - self.device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.model = None - self.vocoder = None - self.args = None - - def flush_model(self, project_path, ddsp_config=None): - if (self.model is None) or (project_path != self.project_path): - model, vocoder, args = load_model_vocoder(project_path, device=self.device) - if 
self.check_args(ddsp_config, args): - self.model = model - self.vocoder = vocoder - self.args = args - - def check_args(self, args1, args2): - if args1.data.block_size != args2.data.block_size: - raise ValueError("DDSP与DIFF模型的block_size不一致") - if args1.data.sampling_rate != args2.data.sampling_rate: - raise ValueError("DDSP与DIFF模型的sampling_rate不一致") - if args1.data.encoder != args2.data.encoder: - raise ValueError("DDSP与DIFF模型的encoder不一致") - return True - - def __call__(self, audio, f0, hubert, volume, acc=1, spk_id=1, k_step=0, method='pndm', - spk_mix_dict=None, start_frame=0): - input_mel = self.vocoder.extract(audio, self.args.data.sampling_rate) - out_mel = self.model( - hubert, - f0, - volume, - spk_id=spk_id, - spk_mix_dict=spk_mix_dict, - gt_spec=input_mel, - infer=True, - infer_speedup=acc, - method=method, - k_step=k_step, - use_tqdm=False) - if start_frame > 0: - out_mel = out_mel[:, start_frame:, :] - f0 = f0[:, start_frame:, :] - output = self.vocoder.infer(out_mel, f0) - if start_frame > 0: - output = F.pad(output, (start_frame * self.vocoder.vocoder_hop_size, 0)) - return output - - def infer(self, audio, f0, hubert, volume, acc=1, spk_id=1, k_step=0, method='pndm', silence_front=0, - use_silence=False, spk_mix_dict=None): - start_frame = int(silence_front * self.vocoder.vocoder_sample_rate / self.vocoder.vocoder_hop_size) - if use_silence: - audio = audio[:, start_frame * self.vocoder.vocoder_hop_size:] - f0 = f0[:, start_frame:, :] - hubert = hubert[:, start_frame:, :] - volume = volume[:, start_frame:, :] - _start_frame = 0 - else: - _start_frame = start_frame - audio = self.__call__(audio, f0, hubert, volume, acc=acc, spk_id=spk_id, k_step=k_step, - method=method, spk_mix_dict=spk_mix_dict, start_frame=_start_frame) - if use_silence: - if start_frame > 0: - audio = F.pad(audio, (start_frame * self.vocoder.vocoder_hop_size, 0)) - return audio diff --git a/spaces/lojban/text-to-speech/nix_tts_simple/__init__.py 
b/spaces/lojban/text-to-speech/nix_tts_simple/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/luckwill/chiakicc/text/shanghainese.py b/spaces/luckwill/chiakicc/text/shanghainese.py deleted file mode 100644 index 1c28c17d0dc0d920fd222c909a53d703c95e043b..0000000000000000000000000000000000000000 --- a/spaces/luckwill/chiakicc/text/shanghainese.py +++ /dev/null @@ -1,64 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('chinese_dialect_lexicons/zaonhe') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ᴇ'), - ('B', 'bi'), - ('C', 'si'), - ('D', 'di'), - ('E', 'i'), - ('F', 'ᴇf'), - ('G', 'dʑi'), - ('H', 'ᴇtɕʰ'), - ('I', 'ᴀi'), - ('J', 'dʑᴇ'), - ('K', 'kʰᴇ'), - ('L', 'ᴇl'), - ('M', 'ᴇm'), - ('N', 'ᴇn'), - ('O', 'o'), - ('P', 'pʰi'), - ('Q', 'kʰiu'), - ('R', 'ᴀl'), - ('S', 'ᴇs'), - ('T', 'tʰi'), - ('U', 'ɦiu'), - ('V', 'vi'), - ('W', 'dᴀbɤliu'), - ('X', 'ᴇks'), - ('Y', 'uᴀi'), - ('Z', 'zᴇ') -]] - - -def _number_to_shanghainese(num): - num = cn2an.an2cn(num).replace('一十','十').replace('二十', '廿').replace('二', '两') - return re.sub(r'((?:^|[^三四五六七八九])十|廿)两', r'\1二', num) - - -def number_to_shanghainese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: _number_to_shanghainese(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def shanghainese_to_ipa(text): - text = number_to_shanghainese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! 
', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/luxuedong/lxd/src/components/chat-notification.tsx b/spaces/luxuedong/lxd/src/components/chat-notification.tsx deleted file mode 100644 index 3474e522992c43a4d1d0eadcf205a9760d5b930b..0000000000000000000000000000000000000000 --- a/spaces/luxuedong/lxd/src/components/chat-notification.tsx +++ /dev/null @@ -1,91 +0,0 @@ -import { useEffect } from 'react' -import Image from 'next/image' - -import IconWarning from '@/assets/images/warning.svg' -import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types' -import { ExternalLink } from './external-link' -import { useBing } from '@/lib/hooks/use-bing' - -export interface ChatNotificationProps extends Pick, 'bot'> { - message?: ChatMessageModel -} - -function getAction(error: ChatError, reset: () => void) { - if (error.code === ErrorCode.THROTTLE_LIMIT) { - reset() - return ( -
    - You have reached the daily limit for sending messages. Please switch accounts or try again after a day. -
    - ) - } - if (error.code === ErrorCode.BING_IP_FORBIDDEN) { - return ( - - 你的服务器或代理已被封禁,请更换服务器或使用代理重试 - - ) - } - if (error.code === ErrorCode.BING_TRY_LATER) { - return ( - - 创建会话失败,请稍候重试 - - ) - } - if (error.code === ErrorCode.BING_FORBIDDEN) { - return ( - - 你的账号已在黑名单,请尝试更换账号及申请解封 - - ) - } - if (error.code === ErrorCode.CONVERSATION_LIMIT) { - return ( -
    - The current topic has ended. Click - Restart - to begin a new conversation -
    - ) - } - if (error.code === ErrorCode.BING_CAPTCHA) { - return ( - - 点击通过人机验证 - - ) - } - if (error.code === ErrorCode.BING_UNAUTHORIZED) { - reset() - return ( - 没有获取到身份信息或身份信息失效,点此重新设置 - ) - } - return error.message -} - -export function ChatNotification({ message, bot }: ChatNotificationProps) { - useEffect(() => { - window.scrollBy(0, 2000) - }, [message]) - - if (!message?.error) return - - return ( -
    -
    -
    -
    -
    - error - {getAction(message.error, () => bot.resetConversation())} -
    -
    -
    -
    -
    - ) -} diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/temporary_buffer.h b/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/temporary_buffer.h deleted file mode 100644 index 2adfaf2810c67462e41f271e43ad0aff9cfbf75f..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/temporary_buffer.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system has no special temporary buffer functions - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/unique.h b/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/unique.h deleted file mode 100644 index 2e46d2bb4897a54313b7190173bc295d4aba4502..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/unique.h +++ /dev/null @@ -1,59 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace tbb -{ -namespace detail -{ - - -template - ForwardIterator unique(execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - BinaryPredicate binary_pred); - - -template - OutputIterator unique_copy(execution_policy &exec, - InputIterator first, - InputIterator last, - OutputIterator output, - BinaryPredicate binary_pred); - - -} // end namespace detail -} // end namespace tbb -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/archs/basicvsr_arch.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/archs/basicvsr_arch.py deleted file mode 100644 index ed7b824eae108a9bcca57f1c14dd0d8afafc4f58..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/archs/basicvsr_arch.py +++ /dev/null @@ -1,336 +0,0 @@ -import torch -from torch import nn as nn -from torch.nn import functional as F - -from basicsr.utils.registry import ARCH_REGISTRY -from .arch_util import ResidualBlockNoBN, flow_warp, make_layer -from .edvr_arch import PCDAlignment, TSAFusion -from .spynet_arch import SpyNet - - -@ARCH_REGISTRY.register() -class BasicVSR(nn.Module): - """A recurrent network for video SR. Now only x4 is supported. - - Args: - num_feat (int): Number of channels. Default: 64. - num_block (int): Number of residual blocks for each branch. Default: 15 - spynet_path (str): Path to the pretrained weights of SPyNet. Default: None. 
- """ - - def __init__(self, num_feat=64, num_block=15, spynet_path=None): - super().__init__() - self.num_feat = num_feat - - # alignment - self.spynet = SpyNet(spynet_path) - - # propagation - self.backward_trunk = ConvResidualBlocks(num_feat + 3, num_feat, num_block) - self.forward_trunk = ConvResidualBlocks(num_feat + 3, num_feat, num_block) - - # reconstruction - self.fusion = nn.Conv2d(num_feat * 2, num_feat, 1, 1, 0, bias=True) - self.upconv1 = nn.Conv2d(num_feat, num_feat * 4, 3, 1, 1, bias=True) - self.upconv2 = nn.Conv2d(num_feat, 64 * 4, 3, 1, 1, bias=True) - self.conv_hr = nn.Conv2d(64, 64, 3, 1, 1) - self.conv_last = nn.Conv2d(64, 3, 3, 1, 1) - - self.pixel_shuffle = nn.PixelShuffle(2) - - # activation functions - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - def get_flow(self, x): - b, n, c, h, w = x.size() - - x_1 = x[:, :-1, :, :, :].reshape(-1, c, h, w) - x_2 = x[:, 1:, :, :, :].reshape(-1, c, h, w) - - flows_backward = self.spynet(x_1, x_2).view(b, n - 1, 2, h, w) - flows_forward = self.spynet(x_2, x_1).view(b, n - 1, 2, h, w) - - return flows_forward, flows_backward - - def forward(self, x): - """Forward function of BasicVSR. - - Args: - x: Input frames with shape (b, n, c, h, w). n is the temporal dimension / number of frames. 
- """ - flows_forward, flows_backward = self.get_flow(x) - b, n, _, h, w = x.size() - - # backward branch - out_l = [] - feat_prop = x.new_zeros(b, self.num_feat, h, w) - for i in range(n - 1, -1, -1): - x_i = x[:, i, :, :, :] - if i < n - 1: - flow = flows_backward[:, i, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - feat_prop = torch.cat([x_i, feat_prop], dim=1) - feat_prop = self.backward_trunk(feat_prop) - out_l.insert(0, feat_prop) - - # forward branch - feat_prop = torch.zeros_like(feat_prop) - for i in range(0, n): - x_i = x[:, i, :, :, :] - if i > 0: - flow = flows_forward[:, i - 1, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - - feat_prop = torch.cat([x_i, feat_prop], dim=1) - feat_prop = self.forward_trunk(feat_prop) - - # upsample - out = torch.cat([out_l[i], feat_prop], dim=1) - out = self.lrelu(self.fusion(out)) - out = self.lrelu(self.pixel_shuffle(self.upconv1(out))) - out = self.lrelu(self.pixel_shuffle(self.upconv2(out))) - out = self.lrelu(self.conv_hr(out)) - out = self.conv_last(out) - base = F.interpolate(x_i, scale_factor=4, mode='bilinear', align_corners=False) - out += base - out_l[i] = out - - return torch.stack(out_l, dim=1) - - -class ConvResidualBlocks(nn.Module): - """Conv and residual block used in BasicVSR. - - Args: - num_in_ch (int): Number of input channels. Default: 3. - num_out_ch (int): Number of output channels. Default: 64. - num_block (int): Number of residual blocks. Default: 15. - """ - - def __init__(self, num_in_ch=3, num_out_ch=64, num_block=15): - super().__init__() - self.main = nn.Sequential( - nn.Conv2d(num_in_ch, num_out_ch, 3, 1, 1, bias=True), nn.LeakyReLU(negative_slope=0.1, inplace=True), - make_layer(ResidualBlockNoBN, num_block, num_feat=num_out_ch)) - - def forward(self, fea): - return self.main(fea) - - -@ARCH_REGISTRY.register() -class IconVSR(nn.Module): - """IconVSR, proposed also in the BasicVSR paper. - - Args: - num_feat (int): Number of channels. 
Default: 64. - num_block (int): Number of residual blocks for each branch. Default: 15. - keyframe_stride (int): Keyframe stride. Default: 5. - temporal_padding (int): Temporal padding. Default: 2. - spynet_path (str): Path to the pretrained weights of SPyNet. Default: None. - edvr_path (str): Path to the pretrained EDVR model. Default: None. - """ - - def __init__(self, - num_feat=64, - num_block=15, - keyframe_stride=5, - temporal_padding=2, - spynet_path=None, - edvr_path=None): - super().__init__() - - self.num_feat = num_feat - self.temporal_padding = temporal_padding - self.keyframe_stride = keyframe_stride - - # keyframe_branch - self.edvr = EDVRFeatureExtractor(temporal_padding * 2 + 1, num_feat, edvr_path) - # alignment - self.spynet = SpyNet(spynet_path) - - # propagation - self.backward_fusion = nn.Conv2d(2 * num_feat, num_feat, 3, 1, 1, bias=True) - self.backward_trunk = ConvResidualBlocks(num_feat + 3, num_feat, num_block) - - self.forward_fusion = nn.Conv2d(2 * num_feat, num_feat, 3, 1, 1, bias=True) - self.forward_trunk = ConvResidualBlocks(2 * num_feat + 3, num_feat, num_block) - - # reconstruction - self.upconv1 = nn.Conv2d(num_feat, num_feat * 4, 3, 1, 1, bias=True) - self.upconv2 = nn.Conv2d(num_feat, 64 * 4, 3, 1, 1, bias=True) - self.conv_hr = nn.Conv2d(64, 64, 3, 1, 1) - self.conv_last = nn.Conv2d(64, 3, 3, 1, 1) - - self.pixel_shuffle = nn.PixelShuffle(2) - - # activation functions - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - def pad_spatial(self, x): - """Apply padding spatially. - - Since the PCD module in EDVR requires that the resolution is a multiple - of 4, we apply padding to the input LR images if their resolution is - not divisible by 4. - - Args: - x (Tensor): Input LR sequence with shape (n, t, c, h, w). - Returns: - Tensor: Padded LR sequence with shape (n, t, c, h_pad, w_pad). 
- """ - n, t, c, h, w = x.size() - - pad_h = (4 - h % 4) % 4 - pad_w = (4 - w % 4) % 4 - - # padding - x = x.view(-1, c, h, w) - x = F.pad(x, [0, pad_w, 0, pad_h], mode='reflect') - - return x.view(n, t, c, h + pad_h, w + pad_w) - - def get_flow(self, x): - b, n, c, h, w = x.size() - - x_1 = x[:, :-1, :, :, :].reshape(-1, c, h, w) - x_2 = x[:, 1:, :, :, :].reshape(-1, c, h, w) - - flows_backward = self.spynet(x_1, x_2).view(b, n - 1, 2, h, w) - flows_forward = self.spynet(x_2, x_1).view(b, n - 1, 2, h, w) - - return flows_forward, flows_backward - - def get_keyframe_feature(self, x, keyframe_idx): - if self.temporal_padding == 2: - x = [x[:, [4, 3]], x, x[:, [-4, -5]]] - elif self.temporal_padding == 3: - x = [x[:, [6, 5, 4]], x, x[:, [-5, -6, -7]]] - x = torch.cat(x, dim=1) - - num_frames = 2 * self.temporal_padding + 1 - feats_keyframe = {} - for i in keyframe_idx: - feats_keyframe[i] = self.edvr(x[:, i:i + num_frames].contiguous()) - return feats_keyframe - - def forward(self, x): - b, n, _, h_input, w_input = x.size() - - x = self.pad_spatial(x) - h, w = x.shape[3:] - - keyframe_idx = list(range(0, n, self.keyframe_stride)) - if keyframe_idx[-1] != n - 1: - keyframe_idx.append(n - 1) # last frame is a keyframe - - # compute flow and keyframe features - flows_forward, flows_backward = self.get_flow(x) - feats_keyframe = self.get_keyframe_feature(x, keyframe_idx) - - # backward branch - out_l = [] - feat_prop = x.new_zeros(b, self.num_feat, h, w) - for i in range(n - 1, -1, -1): - x_i = x[:, i, :, :, :] - if i < n - 1: - flow = flows_backward[:, i, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - if i in keyframe_idx: - feat_prop = torch.cat([feat_prop, feats_keyframe[i]], dim=1) - feat_prop = self.backward_fusion(feat_prop) - feat_prop = torch.cat([x_i, feat_prop], dim=1) - feat_prop = self.backward_trunk(feat_prop) - out_l.insert(0, feat_prop) - - # forward branch - feat_prop = torch.zeros_like(feat_prop) - for i in range(0, n): - x_i = 
x[:, i, :, :, :] - if i > 0: - flow = flows_forward[:, i - 1, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - if i in keyframe_idx: - feat_prop = torch.cat([feat_prop, feats_keyframe[i]], dim=1) - feat_prop = self.forward_fusion(feat_prop) - - feat_prop = torch.cat([x_i, out_l[i], feat_prop], dim=1) - feat_prop = self.forward_trunk(feat_prop) - - # upsample - out = self.lrelu(self.pixel_shuffle(self.upconv1(feat_prop))) - out = self.lrelu(self.pixel_shuffle(self.upconv2(out))) - out = self.lrelu(self.conv_hr(out)) - out = self.conv_last(out) - base = F.interpolate(x_i, scale_factor=4, mode='bilinear', align_corners=False) - out += base - out_l[i] = out - - return torch.stack(out_l, dim=1)[..., :4 * h_input, :4 * w_input] - - -class EDVRFeatureExtractor(nn.Module): - """EDVR feature extractor used in IconVSR. - - Args: - num_input_frame (int): Number of input frames. - num_feat (int): Number of feature channels - load_path (str): Path to the pretrained weights of EDVR. Default: None. 
- """ - - def __init__(self, num_input_frame, num_feat, load_path): - - super(EDVRFeatureExtractor, self).__init__() - - self.center_frame_idx = num_input_frame // 2 - - # extract pyramid features - self.conv_first = nn.Conv2d(3, num_feat, 3, 1, 1) - self.feature_extraction = make_layer(ResidualBlockNoBN, 5, num_feat=num_feat) - self.conv_l2_1 = nn.Conv2d(num_feat, num_feat, 3, 2, 1) - self.conv_l2_2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_l3_1 = nn.Conv2d(num_feat, num_feat, 3, 2, 1) - self.conv_l3_2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - - # pcd and tsa module - self.pcd_align = PCDAlignment(num_feat=num_feat, deformable_groups=8) - self.fusion = TSAFusion(num_feat=num_feat, num_frame=num_input_frame, center_frame_idx=self.center_frame_idx) - - # activation function - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - if load_path: - self.load_state_dict(torch.load(load_path, map_location=lambda storage, loc: storage)['params']) - - def forward(self, x): - b, n, c, h, w = x.size() - - # extract features for each frame - # L1 - feat_l1 = self.lrelu(self.conv_first(x.view(-1, c, h, w))) - feat_l1 = self.feature_extraction(feat_l1) - # L2 - feat_l2 = self.lrelu(self.conv_l2_1(feat_l1)) - feat_l2 = self.lrelu(self.conv_l2_2(feat_l2)) - # L3 - feat_l3 = self.lrelu(self.conv_l3_1(feat_l2)) - feat_l3 = self.lrelu(self.conv_l3_2(feat_l3)) - - feat_l1 = feat_l1.view(b, n, -1, h, w) - feat_l2 = feat_l2.view(b, n, -1, h // 2, w // 2) - feat_l3 = feat_l3.view(b, n, -1, h // 4, w // 4) - - # PCD alignment - ref_feat_l = [ # reference feature list - feat_l1[:, self.center_frame_idx, :, :, :].clone(), feat_l2[:, self.center_frame_idx, :, :, :].clone(), - feat_l3[:, self.center_frame_idx, :, :, :].clone() - ] - aligned_feat = [] - for i in range(n): - nbr_feat_l = [ # neighboring feature list - feat_l1[:, i, :, :, :].clone(), feat_l2[:, i, :, :, :].clone(), feat_l3[:, i, :, :, :].clone() - ] - aligned_feat.append(self.pcd_align(nbr_feat_l, 
ref_feat_l)) - aligned_feat = torch.stack(aligned_feat, dim=1) # (b, t, c, h, w) - - # TSA fusion - return self.fusion(aligned_feat) diff --git a/spaces/manh-linh/Linh-Gradio/app.py b/spaces/manh-linh/Linh-Gradio/app.py deleted file mode 100644 index fae8a9764ce37f6481dc8080b465b4ba2f757997..0000000000000000000000000000000000000000 --- a/spaces/manh-linh/Linh-Gradio/app.py +++ /dev/null @@ -1,107 +0,0 @@ -import os -from typing import Optional, Tuple - -import gradio as gr -from langchain.chains import ConversationChain -from langchain.llms import OpenAI -from threading import Lock - - -def load_chain(): - """Logic for loading the chain you want to use should go here.""" - llm = OpenAI(temperature=0) - chain = ConversationChain(llm=llm) - return chain - - -def set_openai_api_key(api_key: str): - """Set the api key and return chain. - - If no api_key, then None is returned. - """ - if api_key: - os.environ["OPENAI_API_KEY"] = api_key - chain = load_chain() - os.environ["OPENAI_API_KEY"] = "" - return chain - -class ChatWrapper: - - def __init__(self): - self.lock = Lock() - def __call__( - self, api_key: str, inp: str, history: Optional[Tuple[str, str]], chain: Optional[ConversationChain] - ): - """Execute the chat functionality.""" - self.lock.acquire() - try: - history = history or [] - # If chain is None, that is because no API key was provided. - if chain is None: - history.append((inp, "Please paste your OpenAI key to use")) - return history, history - # Set OpenAI key - import openai - openai.api_key = api_key - # Run chain and append input. - output = chain.run(input=inp) - history.append((inp, output)) - except Exception as e: - raise e - finally: - self.lock.release() - return history, history - -chat = ChatWrapper() - -block = gr.Blocks(css=".gradio-container {background-color: lightgray}") - -with block: - with gr.Row(): - gr.Markdown("

    LangChain Demo

    ") - - openai_api_key_textbox = gr.Textbox( - placeholder="Paste your OpenAI API key (sk-...)", - show_label=False, - lines=1, - type="password", - ) - - chatbot = gr.Chatbot() - - with gr.Row(): - message = gr.Textbox( - label="What's your question?", - placeholder="What's the answer to life, the universe, and everything?", - lines=1, - ) - submit = gr.Button(value="Send", variant="secondary").style(full_width=False) - - gr.Examples( - examples=[ - "Hi! How's it going?", - "What should I do tonight?", - "Whats 2 + 2?", - ], - inputs=message, - ) - - gr.HTML("Demo application of a LangChain chain.") - - gr.HTML( - "
    Powered by LangChain 🦜️🔗
    " - ) - - state = gr.State() - agent_state = gr.State() - - submit.click(chat, inputs=[openai_api_key_textbox, message, state, agent_state], outputs=[chatbot, state]) - message.submit(chat, inputs=[openai_api_key_textbox, message, state, agent_state], outputs=[chatbot, state]) - - openai_api_key_textbox.change( - set_openai_api_key, - inputs=[openai_api_key_textbox], - outputs=[agent_state], - ) - -block.launch(debug=True) diff --git a/spaces/marker22/Bark-Voice-Cloning/training/__init__.py b/spaces/marker22/Bark-Voice-Cloning/training/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/merve/data-leak/public/measuring-diversity/script.js b/spaces/merve/data-leak/public/measuring-diversity/script.js deleted file mode 100644 index 002fb32c0d0ee11cf292109725ebda6a2a4b57a4..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/public/measuring-diversity/script.js +++ /dev/null @@ -1,360 +0,0 @@ -// Seeded random number generator -window.random = new Math.seedrandom('aaaa') -window.randomIndex = new Math.seedrandom('7b') - -window.numRows = 20 -window.shapes = window.shapes || d3.range(21).map(i => randomShape(i, random)) - -window.random2 = new Math.seedrandom('7') -// window.columnShapes = window.columnShapes || d3.range(window.numRows).map(i => d3.range(10).map(i =>randomShape(i, random2))) -window.columnShapes = d3.range(window.numRows).map(i => d3.range(10).map(i =>randomShape(i, random2, true))) - -console.log(window.random3) -function randomShape(i, random, colTargets){ - var color2fill = { - green: '#5A9F8A', - orange: '#DF831F', - blue: '#80BAD4', - } - - var randomItem = function(arr) { - const index = Math.abs(random.int32()) % arr.length - return arr[index] - } - - var color = randomItem(d3.keys(color2fill)) - var size = randomItem(['small', 'large']) - var shape = randomItem(['circle', 'square', 'triangle']) - - if (colTargets && (i == 4 || i 
== 5)){ - color = 'green' - } - if (colTargets && (i == 4 || i == 15)){ - size = 'small' - } - if (colTargets && (i == 3 || i == 5)){ - shape = 'triangle' - } - - var displayIndex = randomIndex() - - return { - i, - displayIndex, - color, - fill: color2fill[color], - dFill: d3.color(color2fill[color]).darker(1), - size, - sizeVal: size == 'large' ? 1 : .4, - shape, - } -} - -var metrics = [ - { - str: 'Greens', - key: 'green', - field: 'color', - target: .3 - }, - { - str: 'Dot', - key: 'triangle', - field: 'shape', - target: .35 - }, - { - str: 'Smalls', - key: 'small', - field: 'size', - target: .60 - }, -] -window.metrics1 = metrics.map(d => ({...d})) -metrics1[2].target = .5 -window.metrics2 = metrics1.map(d => ({...d})) -metrics2[0].target = 1 - -metrics.forEach(d => { - d.scoreScale = d3.scaleLinear().domain([0, d.target, 1]).range([0, 1, 0]) -}) - - -var pctFmt = d3.format('.0%') -function addMetrics(metrics, {active, topSel, isSmall}){ - var metricSel = topSel - .st({textAlign: 'center'}) - .appendMany('div', metrics) - .st({textAlign: 'center', width: 200, display: 'inline-block'}) - - var width = 120 - - var svg = metricSel.append('svg') - .at({width: 120, height: 100}) - .append('g') - .translate([.5, 40.5]) - - if (isSmall){ - svg.translate((d, i) => [i ? 
-20.5 : 20.5, 40.5]) - } - - - var xScale = d3.scaleLinear().rangeRound([0, width]) - - var topText = svg.append('text') - .at({y: -20, fontWeight: 500, textAnchor: 'middle', x: width/2}) - - svg.append('path') - .at({d: 'M 0 0 H ' + width, stroke: '#000'}) - - var topTick = svg.append('path') - .at({d: 'M 0 0 V -12.5', stroke: '#000', strokeWidth: 3}) - - - var actualSel = svg.append('g').st({fill: highlightColor}) - - actualSel.append('path') - .at({d: 'M 0 0 V 12.5', stroke: highlightColor, strokeWidth: 3}) - - var actualPct = actualSel.append('text') - .translate(30, 1).at({textAnchor: 'middle'}).st({fontWeight: 300}) - - var actualScore = actualSel.append('text') - .translate(50, 1).at({textAnchor: 'middle'}).st({fontWeight: 300}) - - return () => { - var pcts = metrics.map(d => active.percents[d.key] || 0) - - topText.text(d => (d.str + ' Target: ').replace('s ', ' ') + pctFmt(d.target)) - - topTick.translate(d => xScale(d.target), 0) - actualSel.translate((d, i) => xScale(pcts[i]), 0) - - actualPct.text((d, i) => 'Actual: ' + pctFmt(pcts[i])) - actualScore.text((d, i) => 'Difference: ' + pctFmt(Math.abs(d.target - pcts[i]))) - } -} - - -function scoreActive(active){ - var numActive = d3.sum(active) - return metrics.map(m => { - var v = d3.sum(active, (d, i) => active[i] && shapes[i][m.field] == m.key) - return Math.abs(m.target - v/numActive); - // return m.scoreScale(v/numActive || 0) - }) -} - -var measures = [ - { - str: 'Utilitarian', - display_text: 'Minimize Mean Difference', - ranking_display_text: 'Mean Difference', - fn: s => d3.mean(s)*100, - ppFn: s => d3.format('.2%')(d3.mean(s)), - format: s => 'mean(' + s.map(d => d + '%').join(', ') + ')' - }, - { - str: 'Egalitarian', - display_text: 'Minimize Max Difference', - ranking_display_text: 'Max Difference', - fn: s => { - var srt = _.sortBy(s).map(d => Math.round(d*100)).reverse() - - return srt[0]*100000000 + srt[1]*10000 + srt[2] - }, - ppFn: s => { - var srt = _.sortBy(s).map(d => 
Math.round(d*100)).reverse() - - return srt[0] + '%' - }, - format: s => 'max(' + s.map(d => d + '%').join(', ') + ')' - } -] -measures2 = measures.map(d => ({...d})) - - -var randomActive = d3.range(10000).map(d => { - var active = shapes.map(d => random() < .3) - - if (d == 0) active = '111111111111101011100'.split('').map(d => +d) - - active.score = scoreActive(active) - measures.forEach(d => { - active[d.str] = d.fn(active.score) - }) - - return active -}) - -function addMetricBestButton(metricIndex, {active, sel, render}){ - var measureSel = sel - .append('div').st({textAlign: 'center', marginTop: 20, marginBottom: -20}) - .append('div.measure').st({width: 200, lineHeight: '1.8em', display: 'inline-block'}) - .html('Show Best') - .on('click', d => { - - // console.log(active) - var pcts = metrics.map(d => active.percents[d.key] || 0) - if (pcts[metricIndex] == metrics[metricIndex].target) return - - var nextActive = _.minBy(randomActive, a => a.score[metricIndex]) - active.forEach((d, i) => active[i] = nextActive[i]) - - measureSel.classed('active', e => e == d) - render() - }) -} - -function addMeasures(measures, {active, sel, render}){ - var measureSel = sel.selectAll('div.measure-container') - - measureSel - .append('div.measure') - .st({width: 200, lineHeight: '1.8em', display: 'inline-block', textAlign: 'center', }) - .html((d, i) => i ? 'Show the set where the highest difference is the smallest' : 'Show the set with
    lowest mean difference') - .html('Show Best') - .on('click', d => { - - var nextActive = _.minBy(randomActive, a => a[d.str]) - active.forEach((d, i) => active[i] = nextActive[i]) - - measureSel.classed('active', e => e == d) - render() - }) - - -} - -function addTotalMetrics(metrics, measures, {active, sel, render}){ - var metricSel = sel.classed('bot', 1).st({textAlign: 'center'}) - .appendMany('div.measure-container', measures) - .append('div', measures) - .st({textAlign: 'center', display: 'inline-block'}) - - - var headlineSel = metricSel.append('div') - var calcSel = metricSel.append('div')//.st({color: highlightColor}) - - return () => { - - measures.forEach(d => { - d.scores = scoreActive(active) - - d.score = Math.round(d.fn(d.scores)*100)/100 - if (d.ppFn) d.score = d.ppFn(d.scores) - }) - - headlineSel.st({fontWeight: 600}) - .text(d => d.ranking_display_text + ': ' + d.score) - - calcSel.text(d => { - var roundedScores = d.scores.map(s => Math.round(s * 100)) - - return d.format(roundedScores) - }) - } -} - - -window.shapeRandom = new Math.seedrandom('aaf') -var defaultActive = shapes.map(d => shapeRandom() < .4) -drawShape('all-shapes') - -drawShape('pick-green', ({active, topSel, sel, render}) => { - active.forEach((d, i) => active[i] = defaultActive[i]) - addMetricBestButton(0, {active, sel, render}) - return addMetrics(metrics.filter(d => d.key == 'green'), {active, topSel}) -}) - -drawShape('pick-triangle', ({active, topSel, sel, render}) => { - active.forEach((d, i) => active[i] = defaultActive[i]) - addMetricBestButton(1, {active, sel, render}) - return addMetrics(metrics.filter(d => d.key == 'triangle'), {active, topSel}) -}) - -drawShape('pick-metric', grid => { - grid.active.forEach((d, i) => grid.active[i] = defaultActive[i]) - - var metricRender = addMetrics(metrics, grid) - var totalMetricRender = addTotalMetrics(metrics, measures, grid) - addMeasures(measures, grid) - - return () => { - metricRender() - totalMetricRender() - } -}) - - 
-function drawShape(id, initFn=d => e => e){ - var active = shapes.map(d => true) - - var sel = d3.select('#' + id).html('') - - var s = 110 - - var topSel = sel.append('div.top') - var shapeSel = sel.appendMany('div.shape', _.sortBy(shapes, d => d.displayIndex)) - .st({width: s, height: s}) - .on('click', d => { - active[d.i] = !active[d.i] - render() - }) - - shapeSel.append('svg') - .at({width: s, height: s}) - .append('g').translate([s/2, s/2]) - .each(function(d){ - if (d.shape == 'square' || true){ - var rs = Math.round(d.sizeVal*s/3.5) - var shapeSel = d3.select(this).append('rect') - .at({x: -rs, y: -rs, width: rs*2, height: rs*2}) - } else if (d.shape == 'circle'){ - var shapeSel = d3.select(this).append('circle') - .at({r: d.sizeVal*s/3}) - } else if (d.shape == 'triangle'){ - var rs = Math.round(d.sizeVal*s/2.9) - var shapeSel = d3.select(this).append('path') - .translate(rs*Math.pow(3,1/2)/10, 1) - .at({d: [ - 'M', 0, -rs, - 'L', -rs*Math.pow(3,1/2)/2, rs/2, - 'L', +rs*Math.pow(3,1/2)/2, rs/2, - 'Z' - ].join(' ')}) - } - - if (d.shape == 'triangle'){ - d3.select(this).append('circle') - .at({r: 4, fill: '#fff', stroke: '#000', strokeWidth: 1}) - } - - shapeSel.at({fill: d.fill, stroke: d.dFill, strokeWidth: 2}) - }) - - var customRender = initFn({active, topSel, sel, render}) - - shapes.render = render - function render(){ - shapeSel.classed('active', d => active[d.i]) - // console.log(active.map(d => +d).join('')) - - active.percents = {} - active.shapes = shapes.filter(d => active[d.i]) - - d3.nestBy(active.shapes, d => d.color).forEach(d => { - active.percents[d.key] = d.length/active.shapes.length - }) - d3.nestBy(active.shapes, d => d.size).forEach(d => { - active.percents[d.key] = d.length/active.shapes.length - }) - d3.nestBy(active.shapes, d => d.shape).forEach(d => { - active.percents[d.key] = d.length/active.shapes.length - }) - - - customRender() - } - render() -} \ No newline at end of file diff --git 
a/spaces/merve/measuring-fairness/public/measuring-diversity/index.html b/spaces/merve/measuring-fairness/public/measuring-diversity/index.html deleted file mode 100644 index 152d63d665428726e115c623d650d9ad5bef780b..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/public/measuring-diversity/index.html +++ /dev/null @@ -1,167 +0,0 @@ - - - - - - - - - - - - - - - - - - Measuring Diversity - - - - - - - - - - - - - - - -
    - -
    - -

    Measuring Diversity

    -
    Search results that reflect historic inequities can amplify stereotypes and perpetuate under-representation. Carefully measuring diversity in data sets can help.
    - - -

    Search, ranking and recommendation systems can help find useful documents in large datasets. However, these datasets reflect the biases of the society in which they were created and the systems risk re-entrenching those biases. For example, if someone who is not a white man searches for “CEO pictures” and sees a page of white men, they may feel that only white men can be CEOs, further perpetuating lack of representation at companies’ executive levels.

    -

    Using the careful quantification outlined in a recent paper, Diversity and Inclusion Metrics in Subset Selection, we can quantify biases and push these systems to return a wider range of results.

    -

    The mathematics of all this is a little easier to follow with abstract shapes. Let’s take a look at some of them:

    -
    - -

    Suppose we want to return about 30% green boxes to reflect the distribution of some larger universe of shapes. Try clicking on the shapes below to select some of them — can you find a better subset to return?
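The score behind this interaction can be sketched as the absolute gap between the subset's percentage and the target. This is a minimal Python sketch, not the demo's actual code; the helper name and toy shape list are illustrative:

```python
# Absolute gap between a subset's composition and a target percentage.
# `target_gap` and the toy data below are illustrative, not the demo's code.
def target_gap(shapes, predicate, target):
    actual = sum(1 for s in shapes if predicate(s)) / len(shapes)
    return abs(actual - target)

subset = ['green', 'blue', 'green', 'orange', 'blue',
          'blue', 'green', 'blue', 'blue', 'blue']
gap = target_gap(subset, lambda color: color == 'green', 0.30)
print(gap)  # 3/10 green exactly matches the 30% target, so the gap is 0.0
```

A "better subset" in the interactive is simply one that drives this gap closer to zero.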

    -
    - -

    Another diversity metric we care about is the percentage of dots… how close to 35% dots can you get?

    -
    - -

    If we can only return a single subset, how should we consider multiple diversity metrics? Sometimes it isn’t possible to reduce the difference of every metric to zero. One natural approach: find the selection with the lowest mean difference across all the metrics to get as close as possible to all the targets.

    -

    In other circumstances, like picking a panel of speakers, avoiding badly representing any single category might be more important. This can be done by finding the subset with the lowest max difference. Try minimizing both below:
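The two aggregations described above can be sketched in a few lines of Python; the function names are ours, not the paper's API, and the gap values are made up:

```python
# Combining per-metric gaps |actual - target| into a single score.
def mean_difference(gaps):
    # Get close to every target on average.
    return sum(gaps) / len(gaps)

def max_difference(gaps):
    # Never badly miss any single target.
    return max(gaps)

gaps = [0.10, 0.05, 0.15]     # gaps for green, dots, small shapes
print(mean_difference(gaps))  # ~0.10
print(max_difference(gaps))   # 0.15
```

Minimizing the mean trades metrics off against each other; minimizing the max only cares about the worst-served metric.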

    -
    - -

    Notice that minimizing the mean results in a different subset than minimizing the max; how else might using one over the other change the results?

    -

    Ranking Measures

    -

    We can pull out more detail by showing how the mean difference and maximum difference rank lots of sets. Below, there are 20 sets of 10 shapes sorted by the two measures. Try adjusting the target slider on the left to see how the rankings change; each set’s percentage of green, dots and small shapes are shown in the small histograms.

    -
    - -

    At the extremes, the choice of measure can have a big impact: if we want to try to return all green results, we can shift the green target up to 100%. With this target, the max difference basically sorts the sets by the number of green items and uses the other targets as a tiebreaker. In contrast, sorting by the mean difference balances the green target more with the dot and small targets.
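The disagreement between the two orderings shows up even in a tiny example; the per-set gap triples below are made up for illustration:

```python
# Ranking candidate sets by mean vs. max gap; toy numbers, not the demo's data.
sets = {
    'A': [0.00, 0.20, 0.20],  # nails green, misses dots and small shapes
    'B': [0.10, 0.10, 0.10],  # evenly off on everything
    'C': [0.25, 0.00, 0.00],  # badly misses green only
}
by_mean = sorted(sets, key=lambda k: sum(sets[k]) / len(sets[k]))
by_max = sorted(sets, key=lambda k: max(sets[k]))
print(by_mean)  # ['C', 'B', 'A']: C's single big miss averages out
print(by_max)   # ['B', 'A', 'C']: C's single big miss is the worst case
```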

    -
    - -

    Beyond mean and max differences, there are more ways to combine diversity metrics, like taking the cross of two metrics to account for intersectionality. The absolute value of the difference in target and actual percentages can also be quantified in other ways — you might want to penalize undershooting more than overshooting, for example. It’s important to keep in mind what exactly you’re trying to maximize and the dataset that you’re operating on.
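One possible asymmetric penalty of the kind mentioned above makes undershooting a target cost more than overshooting it; the 2x weight here is an arbitrary illustrative choice:

```python
# Penalize undershooting a target more heavily than overshooting it.
# The weight of 2.0 is an assumption for illustration only.
def asymmetric_gap(actual, target, under_weight=2.0):
    gap = actual - target
    return -gap * under_weight if gap < 0 else gap

under = asymmetric_gap(0.20, 0.30)  # 10 points under the target
over = asymmetric_gap(0.40, 0.30)   # 10 points over the target
print(under > over)  # True: the same-sized miss costs twice as much
```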

    -

    Which Measure is Best?

    -

    In a vacuum, all of these ranking methods are defensible. Picking one requires knowledge of the dataset and broader societal context.

    -

    For example, the doctors on the left have more variance along the shirt color attribute, but they’re less diverse by gender than the doctors on the right. With the shirt color and gender targets we’ve picked, the two subsets have the same mean and max differences. However, in most applications, it’s more important to have a representative sample of socially relevant characteristics, like gender, than of something less salient, like clothing color.

    -
    - -

    Just selecting a diverse sample isn’t sufficient either. Diversity and Inclusion Metrics in Subset Selection introduces a way of measuring “inclusion”: how well does the searcher feel represented in the results?

    -

    Below, we have gender diversity, without inclusion for women, in the “construction worker” image domain. Masculine-presenting individuals are shown in realistic, modern construction worker situations, while feminine-presenting individuals and other gender presentations are depicted as historic nostalgia, toys, clipart, or passive.

    -
    - -

    The context of the query and the searcher also plays into the quality of search results. A search for “work clothing” that shows a mixed palette of colors for men’s clothing and only pink women’s clothing might make the searcher feel that women need to appear stereotypically feminine in a professional setting. But the same set of women’s clothes might be appropriate to show for a “pink women work clothes” search or if the searcher had previously expressed a preference for pink.

    -

    We saw how a small switch from mean to max made a huge difference in what abstract shapes are returned – and how things can get even more complex when socially salient characteristics are layered in. Defaults and small decisions can encode our priorities and values; intentionally thinking about how diversity and inclusion are being measured and which characteristics are emphasized is a step towards designing more equitable systems.

    -

    More Reading

    -

    The Diversity and Inclusion Metrics paper has a Colab with a detailed description of the metrics, additional visualizations and a reference Python implementation.

    -

    The difficulties of measuring fairness in general have been well studied; subset selection is still an active area of research. Fairness of Exposure in Rankings proposes a ranking algorithm that incorporates fairness constraints. Toward creating a fairer ranking in search engine results measures diversity bias in actual search results.

    -

    Inferring user preferences is also tricky; you can check out ways to design for user feedback and control over queries in the People + AI Guidebook.

    -

    Credits

    -

    Adam Pearce, Dylan Baker, Ellen Jiang, Meg Mitchell* and Timnit Gebru* // March 2021

    -

    *Work done while at Google

    -

    Thanks to Alex Hanna, Carey Radebaugh, Emily Denton, Fernanda Viégas, James Wexler, Jess Holbrook, Ludovic Peran, Martin Wattenberg, Michael Terry, Yannick Assogba and Zan Armstrong for their help with this piece.

    -

    More Explorables

    - -

    - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/merve/measuring-fairness/source/fill-in-the-blank/init-pair.js b/spaces/merve/measuring-fairness/source/fill-in-the-blank/init-pair.js deleted file mode 100644 index dbd16d4499ddbcc59234fcdefbf7a5cad6f91a7a..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/source/fill-in-the-blank/init-pair.js +++ /dev/null @@ -1,360 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - -window.initPair = function(pair){ - var isMobile = window.innerWidth <= 820 - - var sel = d3.select('.' 
+ pair.class).html('') - .at({role: 'graphics-document', 'aria-label': pair.ariaLabel}) - .on('keydown', function(){ - sel.classed('changed', 1) - if (d3.event.keyCode != 13) return - d3.event.preventDefault() - // return - - pair.str0 = '' - pair.str1 = '' - - updateChart() - }) - - if (!sel.node()) return - - var optionSel = sel.append('div.options') - - var inputRow = optionSel.append('div.flex-row.flex-row-textarea') - var input1Sel = inputRow.append('textarea.input-1') - .st({color: util.colors[1]}).at({cols: 30}) - input1Sel.node().value = pair.s1.replace('[MASK]', '_') - - var input0Sel = inputRow.append('textarea.input-0') - .st({color: util.colors[0]}).at({cols: 30}) - input0Sel.node().value = pair.s0.replace('[MASK]', '_') - - if (isMobile){ - sel.selectAll('textarea').on('change', updateChart) - } - - var countSel = optionSel.append('div') - .append('b').text('Number of Tokens') - .append('info').text('ⓘ').call(addLockedTooltip) - .datum('The scales are set using the top N tokens for each sentence.

    "Likelihoods" will show more than N tokens if a top completion for one sentence is unlikely for the other sentence.') - .parent().parent() - .append('div.flex-row') - .appendMany('div.button', [30, 200, 1000, 5000, 99999]) - .text(d => d > 5000 ? 'All' : d) - .st({textAlign: 'center'}) - .on('click', d => { - pair.count = d - updateChart() - }) - - var typeSel = optionSel.append('div') - .append('b').text('Chart Type') - .append('info').text('ⓘ').call(addLockedTooltip) - .datum('"Likelihoods" shows the logits from both models plotted directly with a shared linear scale.

    To better contrast the outputs, "Differences" shows logitA - logitB on the y-axis and mean(logitA, logitB) on the x-axis with separate linear scales.') - .parent().parent() - .append('div.flex-row') - .appendMany('div.button', ['Likelihoods', 'Differences']) - .text(d => d) - .st({textAlign: 'center'}) - .on('click', d => { - pair.type = d - updateChart() - }) - - var modelSel = optionSel.append('div') - .st({display: pair.model == 'BERT' ? 'none' : ''}) - .append('b').text('Model') - .parent() - .append('div.flex-row') - .appendMany('div.button', ['BERT', 'Zari']) - .text(d => d) - .st({textAlign: 'center'}) - .on('click', d => { - pair.model = d - updateChart() - }) - - // TODO add loading spinner - var updateSel = optionSel - .append('div.flex-row') - .append('div.button.update').on('click', updateChart) - .text('Update') - .st({display: isMobile ? 'none' : ''}) - - var warningSel = optionSel.append('div.warning') - .text('⚠️Some of the text this model was trained on includes harmful stereotypes. This is a tool to uncover these associations—not an endorsement of them.') - - var resetSel = optionSel.append('div.reset') - .html(' Reset') - .on('click', () => { - pair = JSON.parse(pair.pairStr) - pair.pairStr = JSON.stringify(pair) - - input0Sel.node().value = pair.s0 - input1Sel.node().value = pair.s1 - - updateChart(true) - }) - - if (pair.alts){ - d3.select('.' + pair.class + '-alts').html('') - .classed('alt-block', 1).st({display: 'block'}) - .appendMany('span.p-button-link', pair.alts) - .html(d => d.str) - .on('click', d => { - input0Sel.node().value = d.s0 - input1Sel.node().value = d.s1 - - updateChart() - }) - } - - - var margin = {bottom: 50, left: 25, top: 5, right: 20} - var graphSel = sel.append('div.graph') - var totalWidth = graphSel.node().offsetWidth - var width = totalWidth - margin.left - margin.right - - var c = d3.conventions({ - sel: graphSel.append('div').st({marginTop: isMobile ? 
20 : -5}), - width, - height: width, - margin, - layers: 'sdds', - }) - - - var nTicks = 4 - var tickScale = d3.scaleLinear().range([0, c.width]) - c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1)) - .at({d: d => `M ${.5 + Math.round(tickScale(d/nTicks))} 0 V ${c.height}`}) - c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1)) - .at({d: d => `M 0 ${.5 + Math.round(tickScale(d/nTicks))} H ${c.width}`}) - - - var annotationSel = c.layers[1].appendMany('div.annotations', pair.annotations) - .translate(d => d.pos) - .html(d => d.str) - .st({color: d => d.color, width: 250, postion: 'absolute'}) - - var scatter = window.initScatter(c) - - updateChart(true) - - - async function updateChart(isFirst){ - sel.classed('changed', 0) - warningSel.st({opacity: isFirst ? 0 : 1}) - resetSel.st({opacity: isFirst ? 0 : 1}) - annotationSel.st({opacity: isFirst ? 1 : 0}) - - countSel.classed('active', d => d == pair.count) - typeSel.classed('active', d => d == pair.type) - modelSel.classed('active', d => d == pair.model) - - function getStr(sel){ - return sel.node().value.replace('_', '[MASK]') - } - - var modelPath = pair.model == 'Zari' ? 
'embed_zari_cda' : 'embed' - - pair.s0 = input0Sel.node().value.replace('_', '[MASK]') - pair.s1 = input1Sel.node().value.replace('_', '[MASK]') - - updateSel.classed('loading', 1) - var vals0 = await post(modelPath, {sentence: pair.s0}) - var vals1 = await post(modelPath, {sentence: pair.s1}) - updateSel.classed('loading', 0) - - - var allTokens = vals0.map((v0, i) => { - return {word: tokenizer.vocab[i], v0, i, v1: vals1[i]} - }) - allTokens.forEach(d => { - d.dif = d.v0 - d.v1 - d.meanV = (d.v0 + d.v1) / 2 - d.isVisible = false - }) - - _.sortBy(allTokens, d => -d.v1).forEach((d, i) => d.v1i = i) - _.sortBy(allTokens, d => -d.v0).forEach((d, i) => d.v0i = i) - - var topTokens = allTokens.filter(d => d.v0i <= pair.count || d.v1i <= pair.count) - - - var logitExtent = d3.extent(topTokens.map(d => d.v0).concat(topTokens.map(d => d.v1))) - - var tokens = allTokens - .filter(d => logitExtent[0] <= d.v0 && logitExtent[0] <= d.v1) - - var mag = logitExtent[1] - logitExtent[0] - logitExtent = [logitExtent[0] - mag*.002, logitExtent[1] + mag*.002] - - if (pair.type == 'Differences') tokens = _.sortBy(allTokens, d => -d.meanV).slice(0, pair.count) - - tokens.forEach(d => { - d.isVisible = true - }) - - var maxDif = d3.max(d3.extent(tokens, d => d.dif).map(Math.abs)) - var color = palette(-maxDif*.8, maxDif*.8) - - updateSentenceLabels() - - if (pair.type == 'Likelihoods'){ - drawXY() - } else{ - drawRotated() - } - - sel.classed('is-xy', pair.type == 'Likelihoods') - sel.classed('is-rotate', pair.type != 'Likelihoods') - - - function drawXY(){ - c.x.domain(logitExtent) - c.y.domain(logitExtent) - - d3.drawAxis(c) - - var s = {30: 4, 200: 3, 1000: 3}[pair.count] || 2 - var scatterData = allTokens.map(d => { - var x = c.x(d.v0) - var y = c.y(d.v1) - var fill = color(d.dif) - var dif = d.dif - var word = d.word - var show = '' - var isVisible = d.isVisible - - return {x, y, s, dif, fill, word, show, isVisible} - }) - - var textCandidates = _.sortBy(scatterData.filter(d => 
d.isVisible), d => d.dif) - d3.nestBy(textCandidates.slice(0, 1000), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'uf') - d3.nestBy(textCandidates.reverse().slice(0, 1000), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'lr') - - logitExtent.pair = pair - scatter.draw(c, scatterData, true) - - c.svg.selectAppend('text.x-axis-label.xy-only') - .translate([c.width/2, c.height + 24]) - .text(pair.label0 ? ' __ likelihood, ' + pair.label0 + ' sentence →' : '__ likelihood, sentence two →') - .st({fill: util.colors[0]}) - .at({textAnchor: 'middle'}) - - - c.svg.selectAppend('g.y-axis-label.xy-only') - .translate([c.width + 20, c.height/2]) - .selectAppend('text') - .text(pair.label1 ? ' __ likelihood, ' + pair.label1 + ' sentence →' : '__ likelihood, sentence one →') - .st({fill: util.colors[1]}) - .at({textAnchor: 'middle', transform: 'rotate(-90)'}) - } - - function drawRotated(){ - c.x.domain(d3.extent(tokens, d => d.meanV)) - c.y.domain([maxDif, -maxDif]) - - d3.drawAxis(c) - - var scatterData = allTokens.map(d => { - var x = c.x(d.meanV) - var y = c.y(d.dif) - var fill = color(d.dif) - var word = d.word - var show = '' - var isVisible = d.isVisible - - return {x, y, s: 2, fill, word, show, isVisible} - }) - - scatterData.forEach(d => { - d.dx = d.x - c.width/2 - d.dy = d.y - c.height/2 - }) - - var textCandidates = _.sortBy(scatterData, d => -d.dx*d.dx - d.dy*d.dy) - .filter(d => d.isVisible) - .slice(0, 5000) - d3.nestBy(textCandidates, d => Math.round(12*Math.atan2(d.dx, d.dy))) - .map(d => d[0]) - .forEach(d => d.show = (d.dy < 0 ? 'u' : 'l') + (d.dx < 0 ? 
'l' : 'r')) - - scatter.draw(c, scatterData, false) - - c.svg.selectAppend('text.rotate-only.x-axis-label') - .translate([c.width/2, c.height + 24]) - .text('__ likelihood, both sentences →') - .at({textAnchor: 'middle'}) - .st({fill: '#000'}) - - c.svg.selectAll('g.rotate-only.sent-1,g.rotate-only.sent-1').remove() - c.svg.selectAppend('g.rotate-only.sent-1') - .translate([c.width + 20, c.height/2]) - .append('text') - .text(`Higher likelihood, ${pair.label1 ? pair.label1 + ' sentence ' : 'sentence one'} →`) - .at({textAnchor: 'start', transform: 'rotate(-90)', x: 20}) - .st({fill: util.colors[1]}) - - c.svg.selectAppend('g.rotate-only.sent-1') - .translate([c.width + 20, c.height/2 + 0]) - .append('text') - .text(`← Higher likelihood, ${pair.label0 ? pair.label0 + ' sentence ' : 'sentence two'}`) - .at({textAnchor: 'end', transform: 'rotate(-90)', x: -20}) - .st({fill: util.colors[0]}) - } - } - - function updateSentenceLabels(){ - var t0 = tokenizer.tokenize(pair.s0) - var t1 = tokenizer.tokenize(pair.s1) - - var i = 0 - while (t0[i] == t1[i] && i < t0.length) i++ - - var j = 1 - while (t0[t0.length - j] == t1[t1.length - j] && j < t0.length) j++ - - pair.label0 = tokens2origStr(t0, pair.s0) - pair.label1 = tokens2origStr(t1, pair.s1) - - function tokens2origStr(t, s){ - var tokenStr = tokenizer.decode(t.slice(i, -j + 1)).trim() - var lowerStr = s.toLowerCase() - - var startI = lowerStr.indexOf(tokenStr) - return s.slice(startI, startI + tokenStr.length) - } - - if ( - !pair.label0.length || - !pair.label1.length || - pair.label0.length > 15 || - pair.label1.length > 15){ - pair.label0 = '' - pair.label1 = '' - } - - // console.log(i, j, pair.label0, pair.label1) - } -} - -if (window.init) init() diff --git a/spaces/merve/uncertainty-calibration/public/measuring-diversity/columns-height.js b/spaces/merve/uncertainty-calibration/public/measuring-diversity/columns-height.js deleted file mode 100644 index 
3933c17b4bb8abe209b3573bb436c53c47543b1b..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/public/measuring-diversity/columns-height.js +++ /dev/null @@ -1,177 +0,0 @@ -window.initColumns = function(id, metrics, measures){ - var c = d3.conventions({ - sel: d3.select(id).html('').st({width: 775, margin: '0px auto', left: 27}), - margin: {left: 260, top: 40}, - height: 600, - }) - - var sets = d3.range(numRows).map(i => { - var shapes = columnShapes[i] - shapes = _.sortBy(shapes, d => d.shape) - shapes = _.sortBy(shapes, d => d.size) - shapes = _.sortBy(shapes, d => d.color) - shapes = _.sortBy(shapes, d => d.color == 'green' ? 0 : 1) - - - shapes.nG = d3.sum(shapes, d => d.color == 'green') - shapes.nB = d3.sum(shapes, d => d.color == 'blue') - shapes.nO = d3.sum(shapes, d => d.color == 'orange') - shapes.nR = d3.sum(shapes, d => d.color == 'red') - - shapes.forEach((d, i) => { - d.i = i - d.sizeVal = d.sizeVal < 1 ? .6 : 1 - }) - shapes.i = i - return shapes - }) - - var colW = 200 - var colWpad = 50 - var colH = 20 - var colHpad = 10 - var offsetW = -20 - - var colSel = c.svg.appendMany('g', measures) - .translate((d, i) => [.5 + i*(colW + colWpad) + offsetW, .5]) - - colSel.append('text').text(d => d.ranking_display_text) - .at({y: -20, textAnchor: 'middle', x: colW/2, fontWeight: 600, }) - - var rowSel = colSel.appendMany('g.row', sets) - .translate(d => d.i*(colH + colHpad), 1) - - var colMean = colSel.filter((d, i) => i === 0) - var colMin = colSel.filter((d, i) => i === 1) - var scoreLabelsMean = colMean.selectAll('.row').append('text') - .at({x: -5, y: 15, textAnchor: 'end'}) - .st({fontSize: '13px', opacity: .7}) - var scoreLabelsMin = colMin.selectAll('.row').append('text') - .at({x: 222, y: 15, textAnchor: 'end'}) - .st({fontSize: '13px', opacity: .7}) - - colSel.each(function(d, i){ - d.rowSel = d3.select(this).selectAll('.row') - - c.svg.append('marker') - .attr('id', 'arrow') - .attr('viewBox', '-10 -10 20 20') - 
.attr('markerWidth', 20) - .attr('markerHeight', 20) - .attr('orient', 'auto') - .append('path') - .attr('d', 'M-6.75,-6.75 L 0,0 L -6.75,6.75') - .at({fill: '#000'}) - - - if (i){ - var pathstr = ['M', 160, -25, 'C', 215, -25, 215, -25, 215, -5].join(' ') - } else{ - var pathstr = ['M', 35, -25, 'C', -20, -25, -20, -25, -20, -5].join(' ') - } - d3.select(this).append('path') - .at({stroke: '#000', fill: 'none', d: pathstr, markerEnd: 'url(#arrow)', strokeWidth: .6}) - }) - - - var s = colH - var p = 2 - - var l0Sel = c.svg.appendMany('path.set', sets).classed('set1', true) - .translate(d => [colW + offsetW, s/2 + .5]) - - drawRow(rowSel) - function drawRow(rowSel){ - rowSel.append('rect.set.no-stroke') - .at({x: -p, y: -p, width: colW + p*2, height: colH + p*2, fill: '#fff'}).classed('set1', true) - - rowSel.appendMany('g', d => d) - .translate(d => [d.i*s + s/2, s/2]) - .each(function(d){ - - var sOffset = 12 - var classNames = [d.shape, d.size, d.color, 'rank-item'].join(' ') - var shapeSel = d3.select(this).append('rect') - .at({ - x: -s/2, - y: -s/2 + (d.size == 'small' ? sOffset/2 : 0) - .5, - width: s - .5, - height: s - (d.size == 'small' ? 
sOffset : 0), - fill: d.fill, - class: classNames - }) - - if (d.shape == 'triangle'){ - var shapeSel = d3.select(this).append('circle') - .at({r: 2, fill: '#fff', stroke: '#000', strokeWidth: .5, class: classNames}) - } - }) - - } - - var setSel = c.svg.selectAll('.set1') - .on('mouseover', selectSet) - - sets.selected = sets[0] - function selectSet(set){ - sets.selected = set - sets.forEach(d => d.selected = d == set) - setSel - .classed('selected', d => d.selected) - .filter(d => d.selected) - .lower() - - rowSel.classed('selected', d => d.selected) - - sliders.render() - } - - - var sliders = makeSliders(metrics, sets, c, selectSet, drawRow, () => { - sets.forEach(shapes => { - shapes.score = metrics.map(m => { - var v = d3.sum(shapes, (d, i) => shapes[i][m.field] == m.key) - return Math.abs(m.target - v/shapes.length) - }) - }) - - measures.forEach(m => { - sets.forEach(shapes => { - shapes[m.str] = m.fn(shapes.score) - }) - _.sortBy(sets, d => d[m.str] + d.i/10000000)//.reverse() - .forEach((d, i) => d['i' + m.str] = i) - - m.rowSel.translate(d => d['i' + m.str]*(colH + colHpad), 1) - }) - - var p = 0 - l0Sel.at({d: d => [ - 'M', p, d['iUtilitarian']*(colH + colHpad), - 'L', colWpad - p, d['iEgalitarian']*(colH + colHpad), - ].join(' ')}) - - - scoreLabelsMean.text(d => { - return d3.format('.2f')(d['Utilitarian'])// + '%' - }) - scoreLabelsMin.text(d => { - return measures[1].ppFn(d['score']).replace('%', '')// + '%' - }) - }) - - sliders.render() - selectSet(_.sortBy(sets, d => d.iEgalitarian)[0]) -} -window.initColumns('#columns-height', metrics1, measures) -window.initColumns('#columns-height-disagree', metrics2, measures2) - -// Only highlight green items in the second ranking chart. -d3.select('#columns-height-disagree').selectAll('.rank-item').at({opacity: .3}) -d3.select('#columns-height-disagree').selectAll('.green').at({opacity: 1}) - -// Only highlight the green slider in the second ranking chart. 
-d3.select('#columns-height-disagree').selectAll('.slider').at({opacity: d => { - return d.key !== 'green' ? 0.35: 1 -}}) - diff --git a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/lpips/dist_model.py b/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/lpips/dist_model.py deleted file mode 100644 index 4ff0aa4ca6e4b217954c167787eaac1ca1f8e304..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/lpips/dist_model.py +++ /dev/null @@ -1,284 +0,0 @@ - -from __future__ import absolute_import - -import sys -import numpy as np -import torch -from torch import nn -import os -from collections import OrderedDict -from torch.autograd import Variable -import itertools -from .base_model import BaseModel -from scipy.ndimage import zoom -import fractions -import functools -import skimage.transform -from tqdm import tqdm - -from IPython import embed - -from . import networks_basic as networks -import lpips as util - -class DistModel(BaseModel): - def name(self): - return self.model_name - - def initialize(self, model='net-lin', net='alex', colorspace='Lab', pnet_rand=False, pnet_tune=False, model_path=None, - use_gpu=True, printNet=False, spatial=False, - is_train=False, lr=.0001, beta1=0.5, version='0.1', gpu_ids=[0]): - ''' - INPUTS - model - ['net-lin'] for linearly calibrated network - ['net'] for off-the-shelf network - ['L2'] for L2 distance in Lab colorspace - ['SSIM'] for ssim in RGB colorspace - net - ['squeeze','alex','vgg'] - model_path - if None, will look in weights/[NET_NAME].pth - colorspace - ['Lab','RGB'] colorspace to use for L2 and SSIM - use_gpu - bool - whether or not to use a GPU - printNet - bool - whether or not to print network architecture out - spatial - bool - whether to output an array containing varying distances across spatial dimensions - spatial_shape - if given, output spatial shape. 
if None then spatial shape is determined automatically via spatial_factor (see below). - spatial_factor - if given, specifies upsampling factor relative to the largest spatial extent of a convolutional layer. if None then resized to size of input images. - spatial_order - spline order of filter for upsampling in spatial mode, by default 1 (bilinear). - is_train - bool - [True] for training mode - lr - float - initial learning rate - beta1 - float - initial momentum term for adam - version - 0.1 for latest, 0.0 was original (with a bug) - gpu_ids - int array - [0] by default, gpus to use - ''' - BaseModel.initialize(self, use_gpu=use_gpu, gpu_ids=gpu_ids) - - self.model = model - self.net = net - self.is_train = is_train - self.spatial = spatial - self.gpu_ids = gpu_ids - self.model_name = '%s [%s]'%(model,net) - - if(self.model == 'net-lin'): # pretrained net + linear layer - self.net = networks.PNetLin(pnet_rand=pnet_rand, pnet_tune=pnet_tune, pnet_type=net, - use_dropout=True, spatial=spatial, version=version, lpips=True) - kw = {} - if not use_gpu: - kw['map_location'] = 'cpu' - if(model_path is None): - import inspect - model_path = os.path.abspath(os.path.join(inspect.getfile(self.initialize), '..', 'weights/v%s/%s.pth'%(version,net))) - - if(not is_train): - print('Loading model from: %s'%model_path) - self.net.load_state_dict(torch.load(model_path, **kw), strict=False) - - elif(self.model=='net'): # pretrained network - self.net = networks.PNetLin(pnet_rand=pnet_rand, pnet_type=net, lpips=False) - elif(self.model in ['L2','l2']): - self.net = networks.L2(use_gpu=use_gpu,colorspace=colorspace) # not really a network, only for testing - self.model_name = 'L2' - elif(self.model in ['DSSIM','dssim','SSIM','ssim']): - self.net = networks.DSSIM(use_gpu=use_gpu,colorspace=colorspace) - self.model_name = 'SSIM' - else: - raise ValueError("Model [%s] not recognized." 
% self.model) - - self.parameters = list(self.net.parameters()) - - if self.is_train: # training mode - # extra network on top to go from distances (d0,d1) => predicted human judgment (h*) - self.rankLoss = networks.BCERankingLoss() - self.parameters += list(self.rankLoss.net.parameters()) - self.lr = lr - self.old_lr = lr - self.optimizer_net = torch.optim.Adam(self.parameters, lr=lr, betas=(beta1, 0.999)) - else: # test mode - self.net.eval() - - if(use_gpu): - self.net.to(gpu_ids[0]) - self.net = torch.nn.DataParallel(self.net, device_ids=gpu_ids) - if(self.is_train): - self.rankLoss = self.rankLoss.to(device=gpu_ids[0]) # just put this on GPU0 - - if(printNet): - print('---------- Networks initialized -------------') - networks.print_network(self.net) - print('-----------------------------------------------') - - def forward(self, in0, in1, retPerLayer=False): - ''' Function computes the distance between image patches in0 and in1 - INPUTS - in0, in1 - torch.Tensor object of shape Nx3xXxY - image patch scaled to [-1,1] - OUTPUT - computed distances between in0 and in1 - ''' - - return self.net.forward(in0, in1, retPerLayer=retPerLayer) - - # ***** TRAINING FUNCTIONS ***** - def optimize_parameters(self): - self.forward_train() - self.optimizer_net.zero_grad() - self.backward_train() - self.optimizer_net.step() - self.clamp_weights() - - def clamp_weights(self): - for module in self.net.modules(): - if(hasattr(module, 'weight') and module.kernel_size==(1,1)): - module.weight.data = torch.clamp(module.weight.data,min=0) - - def set_input(self, data): - self.input_ref = data['ref'] - self.input_p0 = data['p0'] - self.input_p1 = data['p1'] - self.input_judge = data['judge'] - - if(self.use_gpu): - self.input_ref = self.input_ref.to(device=self.gpu_ids[0]) - self.input_p0 = self.input_p0.to(device=self.gpu_ids[0]) - self.input_p1 = self.input_p1.to(device=self.gpu_ids[0]) - self.input_judge = self.input_judge.to(device=self.gpu_ids[0]) - - self.var_ref = 
Variable(self.input_ref,requires_grad=True) - self.var_p0 = Variable(self.input_p0,requires_grad=True) - self.var_p1 = Variable(self.input_p1,requires_grad=True) - - def forward_train(self): # run forward pass - # print(self.net.module.scaling_layer.shift) - # print(torch.norm(self.net.module.net.slice1[0].weight).item(), torch.norm(self.net.module.lin0.model[1].weight).item()) - - self.d0 = self.forward(self.var_ref, self.var_p0) - self.d1 = self.forward(self.var_ref, self.var_p1) - self.acc_r = self.compute_accuracy(self.d0,self.d1,self.input_judge) - - self.var_judge = Variable(1.*self.input_judge).view(self.d0.size()) - - self.loss_total = self.rankLoss.forward(self.d0, self.d1, self.var_judge*2.-1.) - - return self.loss_total - - def backward_train(self): - torch.mean(self.loss_total).backward() - - def compute_accuracy(self,d0,d1,judge): - ''' d0, d1 are Variables, judge is a Tensor ''' - d1_lt_d0 = (d1 %f' % (type,self.old_lr, lr)) - self.old_lr = lr - -def score_2afc_dataset(data_loader, func, name=''): - ''' Function computes Two Alternative Forced Choice (2AFC) score using - distance function 'func' in dataset 'data_loader' - INPUTS - data_loader - CustomDatasetDataLoader object - contains a TwoAFCDataset inside - func - callable distance function - calling d=func(in0,in1) should take 2 - pytorch tensors with shape Nx3xXxY, and return numpy array of length N - OUTPUTS - [0] - 2AFC score in [0,1], fraction of time func agrees with human evaluators - [1] - dictionary with following elements - d0s,d1s - N arrays containing distances between reference patch to perturbed patches - gts - N array in [0,1], preferred patch selected by human evaluators - (closer to "0" for left patch p0, "1" for right patch p1, - "0.6" means 60pct people preferred right patch, 40pct preferred left) - scores - N array in [0,1], corresponding to what percentage function agreed with humans - CONSTS - N - number of test triplets in data_loader - ''' - - d0s = [] - d1s = [] - gts = [] 
- - for data in tqdm(data_loader.load_data(), desc=name): - d0s+=func(data['ref'],data['p0']).data.cpu().numpy().flatten().tolist() - d1s+=func(data['ref'],data['p1']).data.cpu().numpy().flatten().tolist() - gts+=data['judge'].cpu().numpy().flatten().tolist() - - d0s = np.array(d0s) - d1s = np.array(d1s) - gts = np.array(gts) - scores = (d0st in e?hc(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n;var Et=(e,t,n)=>(yc(e,typeof t!="symbol"?t+"":t,n),n);(function(){const t=document.createElement("link").relList;if(t&&t.supports&&t.supports("modulepreload"))return;for(const l of document.querySelectorAll('link[rel="modulepreload"]'))r(l);new MutationObserver(l=>{for(const i of l)if(i.type==="childList")for(const o of i.addedNodes)o.tagName==="LINK"&&o.rel==="modulepreload"&&r(o)}).observe(document,{childList:!0,subtree:!0});function n(l){const i={};return l.integrity&&(i.integrity=l.integrity),l.referrerPolicy&&(i.referrerPolicy=l.referrerPolicy),l.crossOrigin==="use-credentials"?i.credentials="include":l.crossOrigin==="anonymous"?i.credentials="omit":i.credentials="same-origin",i}function r(l){if(l.ep)return;l.ep=!0;const i=n(l);fetch(l.href,i)}})();var Mr={},vc={get exports(){return Mr},set exports(e){Mr=e}},ul={},ne={},gc={get exports(){return ne},set exports(e){ne=e}},T={};/** - * @license React - * react.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */var bn=Symbol.for("react.element"),wc=Symbol.for("react.portal"),kc=Symbol.for("react.fragment"),Sc=Symbol.for("react.strict_mode"),Ec=Symbol.for("react.profiler"),xc=Symbol.for("react.provider"),_c=Symbol.for("react.context"),Cc=Symbol.for("react.forward_ref"),Nc=Symbol.for("react.suspense"),Pc=Symbol.for("react.memo"),zc=Symbol.for("react.lazy"),Qo=Symbol.iterator;function Oc(e){return e===null||typeof e!="object"?null:(e=Qo&&e[Qo]||e["@@iterator"],typeof e=="function"?e:null)}var ns={isMounted:function(){return!1},enqueueForceUpdate:function(){},enqueueReplaceState:function(){},enqueueSetState:function(){}},rs=Object.assign,ls={};function cn(e,t,n){this.props=e,this.context=t,this.refs=ls,this.updater=n||ns}cn.prototype.isReactComponent={};cn.prototype.setState=function(e,t){if(typeof e!="object"&&typeof e!="function"&&e!=null)throw Error("setState(...): takes an object of state variables to update or a function which returns an object of state variables.");this.updater.enqueueSetState(this,e,t,"setState")};cn.prototype.forceUpdate=function(e){this.updater.enqueueForceUpdate(this,e,"forceUpdate")};function is(){}is.prototype=cn.prototype;function Xi(e,t,n){this.props=e,this.context=t,this.refs=ls,this.updater=n||ns}var Yi=Xi.prototype=new is;Yi.constructor=Xi;rs(Yi,cn.prototype);Yi.isPureReactComponent=!0;var Ko=Array.isArray,os=Object.prototype.hasOwnProperty,Gi={current:null},us={key:!0,ref:!0,__self:!0,__source:!0};function ss(e,t,n){var r,l={},i=null,o=null;if(t!=null)for(r in t.ref!==void 0&&(o=t.ref),t.key!==void 0&&(i=""+t.key),t)os.call(t,r)&&!us.hasOwnProperty(r)&&(l[r]=t[r]);var u=arguments.length-2;if(u===1)l.children=n;else if(1]+)>;\s+rel="([^"]+)"/g;return Object.fromEntries([...e.matchAll(t)].map(([,n,r])=>[r,n]))}var Qc=["pipeline_tag","private","gated","downloads","likes"];async function*Kc(e){var r,l;Hc(e==null?void 0:e.credentials);const t=new URLSearchParams([...Object.entries({limit:"500",...(r=e==null?void 
0:e.search)!=null&&r.owner?{author:e.search.owner}:void 0,...(l=e==null?void 0:e.search)!=null&&l.task?{pipeline_tag:e.search.task}:void 0}),...Qc.map(i=>["expand",i])]).toString();let n=`${(e==null?void 0:e.hubUrl)||$c}/api/models?${t}`;for(;n;){const i=await fetch(n,{headers:{accept:"application/json",...e!=null&&e.credentials?{Authorization:`Bearer ${e.credentials.accessToken}`}:void 0}});if(!i.ok)throw Vc(i);const o=await i.json();for(const s of o)yield{id:s._id,name:s.id,private:s.private,task:s.pipeline_tag,downloads:s.downloads,gated:s.gated,likes:s.likes,updatedAt:new Date(s.lastModified)};const u=i.headers.get("Link");n=u?Wc(u).next:void 0}}var Xc=Object.defineProperty,Yc=(e,t)=>{for(var n in t)Xc(e,n,{get:t[n],enumerable:!0})},Ji={};Yc(Ji,{audioClassification:()=>bc,automaticSpeechRecognition:()=>ef,conversational:()=>uf,featureExtraction:()=>sf,fillMask:()=>af,imageClassification:()=>tf,imageSegmentation:()=>nf,imageToText:()=>rf,objectDetection:()=>lf,questionAnswering:()=>cf,request:()=>K,sentenceSimilarity:()=>ff,streamingRequest:()=>qi,summarization:()=>df,tableQuestionAnswering:()=>pf,textClassification:()=>mf,textGeneration:()=>hf,textGenerationStream:()=>yf,textToImage:()=>of,tokenClassification:()=>vf,translation:()=>gf,zeroShotClassification:()=>wf});var Gc="https://api-inference.huggingface.co/models/";function cs(e,t){const{model:n,accessToken:r,...l}=e,i={};r&&(i.Authorization=`Bearer ${r}`);const o="data"in e&&!!e.data;o?(t!=null&&t.wait_for_model&&(i["X-Wait-For-Model"]="true"),(t==null?void 0:t.use_cache)===!1&&(i["X-Use-Cache"]="false"),t!=null&&t.dont_load_model&&(i["X-Load-Model"]="0")):i["Content-Type"]="application/json";const u=/^http(s?):/.test(n)||n.startsWith("/")?n:`${Gc}${n}`,s={headers:i,method:"POST",body:o?e.data:JSON.stringify({...l,options:t}),credentials:t!=null&&t.includeCredentials?"include":"same-origin"};return{url:u,info:s}}async function K(e,t){var i,o;const{url:n,info:r}=cs(e,t),l=await fetch(n,r);if((t==null?void 
0:t.retry_on_error)!==!1&&l.status===503&&!(t!=null&&t.wait_for_model))return K(e,{...t,wait_for_model:!0});if(!l.ok){if((i=l.headers.get("Content-Type"))!=null&&i.startsWith("application/json")){const u=await l.json();if(u.error)throw new Error(u.error)}throw new Error("An error occurred while fetching the blob")}return(o=l.headers.get("Content-Type"))!=null&&o.startsWith("application/json")?await l.json():await l.blob()}function Zc(e){let t,n,r,l=!1;return function(o){t===void 0?(t=o,n=0,r=-1):t=qc(t,o);const u=t.length;let s=0;for(;n0){const s=l.decode(o.subarray(0,u)),c=u+(o[u+1]===32?2:1),m=l.decode(o.subarray(c));switch(s){case"data":r.data=r.data?r.data+` -`+m:m;break;case"event":r.event=m;break;case"id":e(r.id=m);break;case"retry":const h=parseInt(m,10);isNaN(h)||t(r.retry=h);break}}}}function qc(e,t){const n=new Uint8Array(e.length+t.length);return n.set(e),n.set(t,e.length),n}function Yo(){return{data:"",event:"",id:"",retry:void 0}}async function*qi(e,t){var c;const{url:n,info:r}=cs({...e,stream:!0},t),l=await fetch(n,r);if((t==null?void 0:t.retry_on_error)!==!1&&l.status===503&&!(t!=null&&t.wait_for_model))return qi(e,{...t,wait_for_model:!0});if(!l.ok){if((c=l.headers.get("Content-Type"))!=null&&c.startsWith("application/json")){const m=await l.json();if(m.error)throw new Error(m.error)}throw new Error(`Server response contains error: ${l.status}`)}if(l.headers.get("content-type")!=="text/event-stream")throw new Error("Server does not support event stream content type, it returned "+l.headers.get("content-type"));if(!l.body)return;const i=l.body.getReader();let o=[];const s=Zc(Jc(()=>{},()=>{},m=>{o.push(m)}));try{for(;;){const{done:m,value:h}=await i.read();if(m)return;s(h);for(const p of o)p.data.length>0&&(yield JSON.parse(p.data));o=[]}}finally{i.releaseLock()}}var J=class extends TypeError{constructor(e){super(`Invalid inference output: ${e}. 
Use the 'request' method with the same parameters to do a custom call with no type checking.`),this.name="InferenceOutputError"}};async function bc(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.score=="number")))throw new J("Expected Array<{label: string, score: number}>");return n}async function ef(e,t){const n=await K(e,t);if(!(typeof(n==null?void 0:n.text)=="string"))throw new J("Expected {text: string}");return n}async function tf(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.score=="number")))throw new J("Expected Array<{label: string, score: number}>");return n}async function nf(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.mask=="string"&&typeof l.score=="number")))throw new J("Expected Array<{label: string, mask: string, score: number}>");return n}async function rf(e,t){var r;const n=(r=await K(e,t))==null?void 0:r[0];if(typeof(n==null?void 0:n.generated_text)!="string")throw new J("Expected {generated_text: string}");return n}async function lf(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.score=="number"&&typeof l.box.xmin=="number"&&typeof l.box.ymin=="number"&&typeof l.box.xmax=="number"&&typeof l.box.ymax=="number")))throw new J("Expected Array<{label:string; score:number; box:{xmin:number; ymin:number; xmax:number; ymax:number}}>");return n}async function of(e,t){const n=await K(e,t);if(!(n&&n instanceof Blob))throw new J("Expected Blob");return n}async function uf(e,t){const n=await K(e,t);if(!(Array.isArray(n.conversation.generated_responses)&&n.conversation.generated_responses.every(l=>typeof l=="string")&&Array.isArray(n.conversation.past_user_inputs)&&n.conversation.past_user_inputs.every(l=>typeof l=="string")&&typeof n.generated_text=="string"&&Array.isArray(n.warnings)&&n.warnings.every(l=>typeof l=="string")))throw new J("Expected {conversation: 
{generated_responses: string[], past_user_inputs: string[]}, generated_text: string, warnings: string[]}");return n}async function sf(e,t){const n=await K(e,t);let r=!0;if(Array.isArray(n)){for(const l of n)if(Array.isArray(l)){if(r=l.every(i=>typeof i=="number"),!r)break}else if(typeof l!="number"){r=!1;break}}else r=!1;if(!r)throw new J("Expected Array");return n}async function af(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.score=="number"&&typeof l.sequence=="string"&&typeof l.token=="number"&&typeof l.token_str=="string")))throw new J("Expected Array<{score: number, sequence: string, token: number, token_str: string}>");return n}async function cf(e,t){const n=await K(e,t);if(!(typeof(n==null?void 0:n.answer)=="string"&&typeof n.end=="number"&&typeof n.score=="number"&&typeof n.start=="number"))throw new J("Expected {answer: string, end: number, score: number, start: number}");return n}async function ff(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l=="number")))throw new J("Expected number[]");return n}async function df(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof(l==null?void 0:l.summary_text)=="string")))throw new J("Expected Array<{summary_text: string}>");return n==null?void 0:n[0]}async function pf(e,t){const n=await K(e,t);if(!(typeof(n==null?void 0:n.aggregator)=="string"&&typeof n.answer=="string"&&Array.isArray(n.cells)&&n.cells.every(l=>typeof l=="string")&&Array.isArray(n.coordinates)&&n.coordinates.every(l=>Array.isArray(l)&&l.every(i=>typeof i=="number"))))throw new J("Expected {aggregator: string, answer: string, cells: string[], coordinates: number[][]}");return n}async function mf(e,t){var l;const n=(l=await K(e,t))==null?void 0:l[0];if(!(Array.isArray(n)&&n.every(i=>typeof(i==null?void 0:i.label)=="string"&&typeof i.score=="number")))throw new J("Expected Array<{label: string, score: number}>");return n}async function hf(e,t){const n=await 
K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof(l==null?void 0:l.generated_text)=="string")))throw new J("Expected Array<{generated_text: string}>");return n==null?void 0:n[0]}async function*yf(e,t){yield*qi(e,t)}function fs(e){return Array.isArray(e)?e:[e]}async function vf(e,t){const n=fs(await K(e,t));if(!(Array.isArray(n)&&n.every(l=>typeof l.end=="number"&&typeof l.entity_group=="string"&&typeof l.score=="number"&&typeof l.start=="number"&&typeof l.word=="string")))throw new J("Expected Array<{end: number, entity_group: string, score: number, start: number, word: string}>");return n}async function gf(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof(l==null?void 0:l.translation_text)=="string")))throw new J("Expected type Array<{translation_text: string}>");return n==null?void 0:n[0]}async function wf(e,t){const n=fs(await K(e,t));if(!(Array.isArray(n)&&n.every(l=>Array.isArray(l.labels)&&l.labels.every(i=>typeof i=="string")&&Array.isArray(l.scores)&&l.scores.every(i=>typeof i=="number")&&typeof l.sequence=="string")))throw new J("Expected Array<{labels: string[], scores: number[], sequence: string}>");return n}var kf=class{constructor(e="",t={}){Et(this,"accessToken");Et(this,"defaultOptions");this.accessToken=e,this.defaultOptions=t;for(const[n,r]of Object.entries(Ji))Object.defineProperty(this,n,{enumerable:!1,value:(l,i)=>r({...l,accessToken:e},{...t,...i})})}endpoint(e){return new Sf(e,this.accessToken,this.defaultOptions)}},Sf=class{constructor(e,t="",n={}){for(const[r,l]of Object.entries(Ji))Object.defineProperty(this,r,{enumerable:!1,value:(i,o)=>l({...i,accessToken:t,model:e},{...n,...o})})}},jr=function(){return jr=Object.assign||function(t){for(var n,r=1,l=arguments.length;r0&&n>="0"&&n<="9"?"_"+n+r:""+n.toUpperCase()+r}function Nf(e,t){return t===void 0&&(t={}),Cf(e,jr({delimiter:"",transform:ds},t))}function Pf(e,t){return t===0?e.toLowerCase():ds(e,t)}function zf(e,t){return t===void 0&&(t={}),Nf(e,jr({transform:Pf},t))}var 
bl={},Of={get exports(){return bl},set exports(e){bl=e}},Ee={},ei={},Tf={get exports(){return ei},set exports(e){ei=e}},ps={};/** - * @license React - * scheduler.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */(function(e){function t(x,z){var O=x.length;x.push(z);e:for(;0>>1,q=x[W];if(0>>1;Wl(Cl,O))Stl(ir,Cl)?(x[W]=ir,x[St]=O,W=St):(x[W]=Cl,x[kt]=O,W=kt);else if(Stl(ir,O))x[W]=ir,x[St]=O,W=St;else break e}}return z}function l(x,z){var O=x.sortIndex-z.sortIndex;return O!==0?O:x.id-z.id}if(typeof performance=="object"&&typeof performance.now=="function"){var i=performance;e.unstable_now=function(){return i.now()}}else{var o=Date,u=o.now();e.unstable_now=function(){return o.now()-u}}var s=[],c=[],m=1,h=null,p=3,g=!1,w=!1,k=!1,D=typeof setTimeout=="function"?setTimeout:null,f=typeof clearTimeout=="function"?clearTimeout:null,a=typeof setImmediate<"u"?setImmediate:null;typeof navigator<"u"&&navigator.scheduling!==void 0&&navigator.scheduling.isInputPending!==void 0&&navigator.scheduling.isInputPending.bind(navigator.scheduling);function d(x){for(var z=n(c);z!==null;){if(z.callback===null)r(c);else if(z.startTime<=x)r(c),z.sortIndex=z.expirationTime,t(s,z);else break;z=n(c)}}function y(x){if(k=!1,d(x),!w)if(n(s)!==null)w=!0,xl(E);else{var z=n(c);z!==null&&_l(y,z.startTime-x)}}function E(x,z){w=!1,k&&(k=!1,f(N),N=-1),g=!0;var O=p;try{for(d(z),h=n(s);h!==null&&(!(h.expirationTime>z)||x&&!Le());){var W=h.callback;if(typeof W=="function"){h.callback=null,p=h.priorityLevel;var q=W(h.expirationTime<=z);z=e.unstable_now(),typeof q=="function"?h.callback=q:h===n(s)&&r(s),d(z)}else r(s);h=n(s)}if(h!==null)var lr=!0;else{var kt=n(c);kt!==null&&_l(y,kt.startTime-z),lr=!1}return lr}finally{h=null,p=O,g=!1}}var _=!1,C=null,N=-1,H=5,L=-1;function 
Le(){return!(e.unstable_now()-Lx||125W?(x.sortIndex=O,t(c,x),n(s)===null&&x===n(c)&&(k?(f(N),N=-1):k=!0,_l(y,O-W))):(x.sortIndex=q,t(s,x),w||g||(w=!0,xl(E))),x},e.unstable_shouldYield=Le,e.unstable_wrapCallback=function(x){var z=p;return function(){var O=p;p=z;try{return x.apply(this,arguments)}finally{p=O}}}})(ps);(function(e){e.exports=ps})(Tf);/** - * @license React - * react-dom.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var ms=ne,Se=ei;function v(e){for(var t="https://reactjs.org/docs/error-decoder.html?invariant="+e,n=1;n"u"||typeof window.document>"u"||typeof window.document.createElement>"u"),ti=Object.prototype.hasOwnProperty,Lf=/^[:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD][:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD\-.0-9\u00B7\u0300-\u036F\u203F-\u2040]*$/,Zo={},Jo={};function Rf(e){return ti.call(Jo,e)?!0:ti.call(Zo,e)?!1:Lf.test(e)?Jo[e]=!0:(Zo[e]=!0,!1)}function If(e,t,n,r){if(n!==null&&n.type===0)return!1;switch(typeof t){case"function":case"symbol":return!0;case"boolean":return r?!1:n!==null?!n.acceptsBooleans:(e=e.toLowerCase().slice(0,5),e!=="data-"&&e!=="aria-");default:return!1}}function Af(e,t,n,r){if(t===null||typeof t>"u"||If(e,t,n,r))return!0;if(r)return!1;if(n!==null)switch(n.type){case 3:return!t;case 4:return t===!1;case 5:return isNaN(t);case 6:return isNaN(t)||1>t}return!1}function de(e,t,n,r,l,i,o){this.acceptsBooleans=t===2||t===3||t===4,this.attributeName=r,this.attributeNamespace=l,this.mustUseProperty=n,this.propertyName=e,this.type=t,this.sanitizeURL=i,this.removeEmptyString=o}var le={};"children dangerouslySetInnerHTML defaultValue defaultChecked 
innerHTML suppressContentEditableWarning suppressHydrationWarning style".split(" ").forEach(function(e){le[e]=new de(e,0,!1,e,null,!1,!1)});[["acceptCharset","accept-charset"],["className","class"],["htmlFor","for"],["httpEquiv","http-equiv"]].forEach(function(e){var t=e[0];le[t]=new de(t,1,!1,e[1],null,!1,!1)});["contentEditable","draggable","spellCheck","value"].forEach(function(e){le[e]=new de(e,2,!1,e.toLowerCase(),null,!1,!1)});["autoReverse","externalResourcesRequired","focusable","preserveAlpha"].forEach(function(e){le[e]=new de(e,2,!1,e,null,!1,!1)});"allowFullScreen async autoFocus autoPlay controls default defer disabled disablePictureInPicture disableRemotePlayback formNoValidate hidden loop noModule noValidate open playsInline readOnly required reversed scoped seamless itemScope".split(" ").forEach(function(e){le[e]=new de(e,3,!1,e.toLowerCase(),null,!1,!1)});["checked","multiple","muted","selected"].forEach(function(e){le[e]=new de(e,3,!0,e,null,!1,!1)});["capture","download"].forEach(function(e){le[e]=new de(e,4,!1,e,null,!1,!1)});["cols","rows","size","span"].forEach(function(e){le[e]=new de(e,6,!1,e,null,!1,!1)});["rowSpan","start"].forEach(function(e){le[e]=new de(e,5,!1,e.toLowerCase(),null,!1,!1)});var bi=/[\-:]([a-z])/g;function eo(e){return e[1].toUpperCase()}"accent-height alignment-baseline arabic-form baseline-shift cap-height clip-path clip-rule color-interpolation color-interpolation-filters color-profile color-rendering dominant-baseline enable-background fill-opacity fill-rule flood-color flood-opacity font-family font-size font-size-adjust font-stretch font-style font-variant font-weight glyph-name glyph-orientation-horizontal glyph-orientation-vertical horiz-adv-x horiz-origin-x image-rendering letter-spacing lighting-color marker-end marker-mid marker-start overline-position overline-thickness paint-order panose-1 pointer-events rendering-intent shape-rendering stop-color stop-opacity strikethrough-position strikethrough-thickness 
stroke-dasharray stroke-dashoffset stroke-linecap stroke-linejoin stroke-miterlimit stroke-opacity stroke-width text-anchor text-decoration text-rendering underline-position underline-thickness unicode-bidi unicode-range units-per-em v-alphabetic v-hanging v-ideographic v-mathematical vector-effect vert-adv-y vert-origin-x vert-origin-y word-spacing writing-mode xmlns:xlink x-height".split(" ").forEach(function(e){var t=e.replace(bi,eo);le[t]=new de(t,1,!1,e,null,!1,!1)});"xlink:actuate xlink:arcrole xlink:role xlink:show xlink:title xlink:type".split(" ").forEach(function(e){var t=e.replace(bi,eo);le[t]=new de(t,1,!1,e,"http://www.w3.org/1999/xlink",!1,!1)});["xml:base","xml:lang","xml:space"].forEach(function(e){var t=e.replace(bi,eo);le[t]=new de(t,1,!1,e,"http://www.w3.org/XML/1998/namespace",!1,!1)});["tabIndex","crossOrigin"].forEach(function(e){le[e]=new de(e,1,!1,e.toLowerCase(),null,!1,!1)});le.xlinkHref=new de("xlinkHref",1,!1,"xlink:href","http://www.w3.org/1999/xlink",!0,!1);["src","href","action","formAction"].forEach(function(e){le[e]=new de(e,1,!1,e.toLowerCase(),null,!0,!0)});function to(e,t,n,r){var l=le.hasOwnProperty(t)?le[t]:null;(l!==null?l.type!==0:r||!(2u||l[o]!==i[u]){var s=` -`+l[o].replace(" at new "," at ");return e.displayName&&s.includes("")&&(s=s.replace("",e.displayName)),s}while(1<=o&&0<=u);break}}}finally{zl=!1,Error.prepareStackTrace=n}return(e=e?e.displayName||e.name:"")?xn(e):""}function Mf(e){switch(e.tag){case 5:return xn(e.type);case 16:return xn("Lazy");case 13:return xn("Suspense");case 19:return xn("SuspenseList");case 0:case 2:case 15:return e=Ol(e.type,!1),e;case 11:return e=Ol(e.type.render,!1),e;case 1:return e=Ol(e.type,!0),e;default:return""}}function ii(e){if(e==null)return null;if(typeof e=="function")return e.displayName||e.name||null;if(typeof e=="string")return e;switch(e){case Ut:return"Fragment";case Ft:return"Portal";case ni:return"Profiler";case no:return"StrictMode";case ri:return"Suspense";case 
li:return"SuspenseList"}if(typeof e=="object")switch(e.$$typeof){case vs:return(e.displayName||"Context")+".Consumer";case ys:return(e._context.displayName||"Context")+".Provider";case ro:var t=e.render;return e=e.displayName,e||(e=t.displayName||t.name||"",e=e!==""?"ForwardRef("+e+")":"ForwardRef"),e;case lo:return t=e.displayName||null,t!==null?t:ii(e.type)||"Memo";case tt:t=e._payload,e=e._init;try{return ii(e(t))}catch{}}return null}function jf(e){var t=e.type;switch(e.tag){case 24:return"Cache";case 9:return(t.displayName||"Context")+".Consumer";case 10:return(t._context.displayName||"Context")+".Provider";case 18:return"DehydratedFragment";case 11:return e=t.render,e=e.displayName||e.name||"",t.displayName||(e!==""?"ForwardRef("+e+")":"ForwardRef");case 7:return"Fragment";case 5:return t;case 4:return"Portal";case 3:return"Root";case 6:return"Text";case 16:return ii(t);case 8:return t===no?"StrictMode":"Mode";case 22:return"Offscreen";case 12:return"Profiler";case 21:return"Scope";case 13:return"Suspense";case 19:return"SuspenseList";case 25:return"TracingMarker";case 1:case 0:case 17:case 2:case 14:case 15:if(typeof t=="function")return t.displayName||t.name||null;if(typeof t=="string")return t}return null}function ht(e){switch(typeof e){case"boolean":case"number":case"string":case"undefined":return e;case"object":return e;default:return""}}function ws(e){var t=e.type;return(e=e.nodeName)&&e.toLowerCase()==="input"&&(t==="checkbox"||t==="radio")}function Df(e){var t=ws(e)?"checked":"value",n=Object.getOwnPropertyDescriptor(e.constructor.prototype,t),r=""+e[t];if(!e.hasOwnProperty(t)&&typeof n<"u"&&typeof n.get=="function"&&typeof n.set=="function"){var l=n.get,i=n.set;return Object.defineProperty(e,t,{configurable:!0,get:function(){return l.call(this)},set:function(o){r=""+o,i.call(this,o)}}),Object.defineProperty(e,t,{enumerable:n.enumerable}),{getValue:function(){return r},setValue:function(o){r=""+o},stopTracking:function(){e._valueTracker=null,delete 
e[t]}}}}function sr(e){e._valueTracker||(e._valueTracker=Df(e))}function ks(e){if(!e)return!1;var t=e._valueTracker;if(!t)return!0;var n=t.getValue(),r="";return e&&(r=ws(e)?e.checked?"true":"false":e.value),e=r,e!==n?(t.setValue(e),!0):!1}function Dr(e){if(e=e||(typeof document<"u"?document:void 0),typeof e>"u")return null;try{return e.activeElement||e.body}catch{return e.body}}function oi(e,t){var n=t.checked;return V({},t,{defaultChecked:void 0,defaultValue:void 0,value:void 0,checked:n??e._wrapperState.initialChecked})}function bo(e,t){var n=t.defaultValue==null?"":t.defaultValue,r=t.checked!=null?t.checked:t.defaultChecked;n=ht(t.value!=null?t.value:n),e._wrapperState={initialChecked:r,initialValue:n,controlled:t.type==="checkbox"||t.type==="radio"?t.checked!=null:t.value!=null}}function Ss(e,t){t=t.checked,t!=null&&to(e,"checked",t,!1)}function ui(e,t){Ss(e,t);var n=ht(t.value),r=t.type;if(n!=null)r==="number"?(n===0&&e.value===""||e.value!=n)&&(e.value=""+n):e.value!==""+n&&(e.value=""+n);else if(r==="submit"||r==="reset"){e.removeAttribute("value");return}t.hasOwnProperty("value")?si(e,t.type,n):t.hasOwnProperty("defaultValue")&&si(e,t.type,ht(t.defaultValue)),t.checked==null&&t.defaultChecked!=null&&(e.defaultChecked=!!t.defaultChecked)}function eu(e,t,n){if(t.hasOwnProperty("value")||t.hasOwnProperty("defaultValue")){var r=t.type;if(!(r!=="submit"&&r!=="reset"||t.value!==void 0&&t.value!==null))return;t=""+e._wrapperState.initialValue,n||t===e.value||(e.value=t),e.defaultValue=t}n=e.name,n!==""&&(e.name=""),e.defaultChecked=!!e._wrapperState.initialChecked,n!==""&&(e.name=n)}function si(e,t,n){(t!=="number"||Dr(e.ownerDocument)!==e)&&(n==null?e.defaultValue=""+e._wrapperState.initialValue:e.defaultValue!==""+n&&(e.defaultValue=""+n))}var _n=Array.isArray;function Zt(e,t,n,r){if(e=e.options,t){t={};for(var l=0;l"+t.valueOf().toString()+"",t=ar.firstChild;e.firstChild;)e.removeChild(e.firstChild);for(;t.firstChild;)e.appendChild(t.firstChild)}});function 
Dn(e,t){if(t){var n=e.firstChild;if(n&&n===e.lastChild&&n.nodeType===3){n.nodeValue=t;return}}e.textContent=t}var Pn={animationIterationCount:!0,aspectRatio:!0,borderImageOutset:!0,borderImageSlice:!0,borderImageWidth:!0,boxFlex:!0,boxFlexGroup:!0,boxOrdinalGroup:!0,columnCount:!0,columns:!0,flex:!0,flexGrow:!0,flexPositive:!0,flexShrink:!0,flexNegative:!0,flexOrder:!0,gridArea:!0,gridRow:!0,gridRowEnd:!0,gridRowSpan:!0,gridRowStart:!0,gridColumn:!0,gridColumnEnd:!0,gridColumnSpan:!0,gridColumnStart:!0,fontWeight:!0,lineClamp:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,tabSize:!0,widows:!0,zIndex:!0,zoom:!0,fillOpacity:!0,floodOpacity:!0,stopOpacity:!0,strokeDasharray:!0,strokeDashoffset:!0,strokeMiterlimit:!0,strokeOpacity:!0,strokeWidth:!0},Ff=["Webkit","ms","Moz","O"];Object.keys(Pn).forEach(function(e){Ff.forEach(function(t){t=t+e.charAt(0).toUpperCase()+e.substring(1),Pn[t]=Pn[e]})});function Cs(e,t,n){return t==null||typeof t=="boolean"||t===""?"":n||typeof t!="number"||t===0||Pn.hasOwnProperty(e)&&Pn[e]?(""+t).trim():t+"px"}function Ns(e,t){e=e.style;for(var n in t)if(t.hasOwnProperty(n)){var r=n.indexOf("--")===0,l=Cs(n,t[n],r);n==="float"&&(n="cssFloat"),r?e.setProperty(n,l):e[n]=l}}var Uf=V({menuitem:!0},{area:!0,base:!0,br:!0,col:!0,embed:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0});function fi(e,t){if(t){if(Uf[e]&&(t.children!=null||t.dangerouslySetInnerHTML!=null))throw Error(v(137,e));if(t.dangerouslySetInnerHTML!=null){if(t.children!=null)throw Error(v(60));if(typeof t.dangerouslySetInnerHTML!="object"||!("__html"in t.dangerouslySetInnerHTML))throw Error(v(61))}if(t.style!=null&&typeof t.style!="object")throw Error(v(62))}}function di(e,t){if(e.indexOf("-")===-1)return typeof t.is=="string";switch(e){case"annotation-xml":case"color-profile":case"font-face":case"font-face-src":case"font-face-uri":case"font-face-format":case"font-face-name":case"missing-glyph":return!1;default:return!0}}var 
pi=null;function io(e){return e=e.target||e.srcElement||window,e.correspondingUseElement&&(e=e.correspondingUseElement),e.nodeType===3?e.parentNode:e}var mi=null,Jt=null,qt=null;function ru(e){if(e=nr(e)){if(typeof mi!="function")throw Error(v(280));var t=e.stateNode;t&&(t=dl(t),mi(e.stateNode,e.type,t))}}function Ps(e){Jt?qt?qt.push(e):qt=[e]:Jt=e}function zs(){if(Jt){var e=Jt,t=qt;if(qt=Jt=null,ru(e),t)for(e=0;e>>=0,e===0?32:31-(Zf(e)/Jf|0)|0}var cr=64,fr=4194304;function Cn(e){switch(e&-e){case 1:return 1;case 2:return 2;case 4:return 4;case 8:return 8;case 16:return 16;case 32:return 32;case 64:case 128:case 256:case 512:case 1024:case 2048:case 4096:case 8192:case 16384:case 32768:case 65536:case 131072:case 262144:case 524288:case 1048576:case 2097152:return e&4194240;case 4194304:case 8388608:case 16777216:case 33554432:case 67108864:return e&130023424;case 134217728:return 134217728;case 268435456:return 268435456;case 536870912:return 536870912;case 1073741824:return 1073741824;default:return e}}function Vr(e,t){var n=e.pendingLanes;if(n===0)return 0;var r=0,l=e.suspendedLanes,i=e.pingedLanes,o=n&268435455;if(o!==0){var u=o&~l;u!==0?r=Cn(u):(i&=o,i!==0&&(r=Cn(i)))}else o=n&~l,o!==0?r=Cn(o):i!==0&&(r=Cn(i));if(r===0)return 0;if(t!==0&&t!==r&&!(t&l)&&(l=r&-r,i=t&-t,l>=i||l===16&&(i&4194240)!==0))return t;if(r&4&&(r|=n&16),t=e.entangledLanes,t!==0)for(e=e.entanglements,t&=r;0n;n++)t.push(e);return t}function er(e,t,n){e.pendingLanes|=t,t!==536870912&&(e.suspendedLanes=0,e.pingedLanes=0),e=e.eventTimes,t=31-je(t),e[t]=n}function td(e,t){var n=e.pendingLanes&~t;e.pendingLanes=t,e.suspendedLanes=0,e.pingedLanes=0,e.expiredLanes&=t,e.mutableReadLanes&=t,e.entangledLanes&=t,t=e.entanglements;var r=e.eventTimes;for(e=e.expirationTimes;0=On),du=String.fromCharCode(32),pu=!1;function Ys(e,t){switch(e){case"keyup":return Od.indexOf(t.keyCode)!==-1;case"keydown":return t.keyCode!==229;case"keypress":case"mousedown":case"focusout":return!0;default:return!1}}function 
Gs(e){return e=e.detail,typeof e=="object"&&"data"in e?e.data:null}var $t=!1;function Ld(e,t){switch(e){case"compositionend":return Gs(t);case"keypress":return t.which!==32?null:(pu=!0,du);case"textInput":return e=t.data,e===du&&pu?null:e;default:return null}}function Rd(e,t){if($t)return e==="compositionend"||!mo&&Ys(e,t)?(e=Ks(),Nr=co=it=null,$t=!1,e):null;switch(e){case"paste":return null;case"keypress":if(!(t.ctrlKey||t.altKey||t.metaKey)||t.ctrlKey&&t.altKey){if(t.char&&1=t)return{node:n,offset:t-e};e=r}e:{for(;n;){if(n.nextSibling){n=n.nextSibling;break e}n=n.parentNode}n=void 0}n=vu(n)}}function bs(e,t){return e&&t?e===t?!0:e&&e.nodeType===3?!1:t&&t.nodeType===3?bs(e,t.parentNode):"contains"in e?e.contains(t):e.compareDocumentPosition?!!(e.compareDocumentPosition(t)&16):!1:!1}function ea(){for(var e=window,t=Dr();t instanceof e.HTMLIFrameElement;){try{var n=typeof t.contentWindow.location.href=="string"}catch{n=!1}if(n)e=t.contentWindow;else break;t=Dr(e.document)}return t}function ho(e){var t=e&&e.nodeName&&e.nodeName.toLowerCase();return t&&(t==="input"&&(e.type==="text"||e.type==="search"||e.type==="tel"||e.type==="url"||e.type==="password")||t==="textarea"||e.contentEditable==="true")}function Vd(e){var t=ea(),n=e.focusedElem,r=e.selectionRange;if(t!==n&&n&&n.ownerDocument&&bs(n.ownerDocument.documentElement,n)){if(r!==null&&ho(n)){if(t=r.start,e=r.end,e===void 0&&(e=t),"selectionStart"in n)n.selectionStart=t,n.selectionEnd=Math.min(e,n.value.length);else if(e=(t=n.ownerDocument||document)&&t.defaultView||window,e.getSelection){e=e.getSelection();var l=n.textContent.length,i=Math.min(r.start,l);r=r.end===void 0?i:Math.min(r.end,l),!e.extend&&i>r&&(l=r,r=i,i=l),l=gu(n,i);var 
o=gu(n,r);l&&o&&(e.rangeCount!==1||e.anchorNode!==l.node||e.anchorOffset!==l.offset||e.focusNode!==o.node||e.focusOffset!==o.offset)&&(t=t.createRange(),t.setStart(l.node,l.offset),e.removeAllRanges(),i>r?(e.addRange(t),e.extend(o.node,o.offset)):(t.setEnd(o.node,o.offset),e.addRange(t)))}}for(t=[],e=n;e=e.parentNode;)e.nodeType===1&&t.push({element:e,left:e.scrollLeft,top:e.scrollTop});for(typeof n.focus=="function"&&n.focus(),n=0;n=document.documentMode,Vt=null,ki=null,Ln=null,Si=!1;function wu(e,t,n){var r=n.window===n?n.document:n.nodeType===9?n:n.ownerDocument;Si||Vt==null||Vt!==Dr(r)||(r=Vt,"selectionStart"in r&&ho(r)?r={start:r.selectionStart,end:r.selectionEnd}:(r=(r.ownerDocument&&r.ownerDocument.defaultView||window).getSelection(),r={anchorNode:r.anchorNode,anchorOffset:r.anchorOffset,focusNode:r.focusNode,focusOffset:r.focusOffset}),Ln&&Hn(Ln,r)||(Ln=r,r=Wr(ki,"onSelect"),0Wt||(e.current=Pi[Wt],Pi[Wt]=null,Wt--)}function A(e,t){Wt++,Pi[Wt]=e.current,e.current=t}var yt={},se=gt(yt),he=gt(!1),Tt=yt;function rn(e,t){var n=e.type.contextTypes;if(!n)return yt;var r=e.stateNode;if(r&&r.__reactInternalMemoizedUnmaskedChildContext===t)return r.__reactInternalMemoizedMaskedChildContext;var l={},i;for(i in n)l[i]=t[i];return r&&(e=e.stateNode,e.__reactInternalMemoizedUnmaskedChildContext=t,e.__reactInternalMemoizedMaskedChildContext=l),l}function ye(e){return e=e.childContextTypes,e!=null}function Kr(){j(he),j(se)}function Nu(e,t,n){if(se.current!==yt)throw Error(v(168));A(se,t),A(he,n)}function aa(e,t,n){var r=e.stateNode;if(t=t.childContextTypes,typeof r.getChildContext!="function")return n;r=r.getChildContext();for(var l in r)if(!(l in t))throw Error(v(108,jf(e)||"Unknown",l));return V({},n,r)}function Xr(e){return e=(e=e.stateNode)&&e.__reactInternalMemoizedMergedChildContext||yt,Tt=se.current,A(se,e),A(he,he.current),!0}function Pu(e,t,n){var r=e.stateNode;if(!r)throw 
Error(v(169));n?(e=aa(e,t,Tt),r.__reactInternalMemoizedMergedChildContext=e,j(he),j(se),A(se,e)):j(he),A(he,n)}var Qe=null,pl=!1,Hl=!1;function ca(e){Qe===null?Qe=[e]:Qe.push(e)}function bd(e){pl=!0,ca(e)}function wt(){if(!Hl&&Qe!==null){Hl=!0;var e=0,t=I;try{var n=Qe;for(I=1;e>=o,l-=o,Ke=1<<32-je(t)+l|n<N?(H=C,C=null):H=C.sibling;var L=p(f,C,d[N],y);if(L===null){C===null&&(C=H);break}e&&C&&L.alternate===null&&t(f,C),a=i(L,a,N),_===null?E=L:_.sibling=L,_=L,C=H}if(N===d.length)return n(f,C),F&&xt(f,N),E;if(C===null){for(;NN?(H=C,C=null):H=C.sibling;var Le=p(f,C,L.value,y);if(Le===null){C===null&&(C=H);break}e&&C&&Le.alternate===null&&t(f,C),a=i(Le,a,N),_===null?E=Le:_.sibling=Le,_=Le,C=H}if(L.done)return n(f,C),F&&xt(f,N),E;if(C===null){for(;!L.done;N++,L=d.next())L=h(f,L.value,y),L!==null&&(a=i(L,a,N),_===null?E=L:_.sibling=L,_=L);return F&&xt(f,N),E}for(C=r(f,C);!L.done;N++,L=d.next())L=g(C,f,N,L.value,y),L!==null&&(e&&L.alternate!==null&&C.delete(L.key===null?N:L.key),a=i(L,a,N),_===null?E=L:_.sibling=L,_=L);return e&&C.forEach(function(pn){return t(f,pn)}),F&&xt(f,N),E}function D(f,a,d,y){if(typeof d=="object"&&d!==null&&d.type===Ut&&d.key===null&&(d=d.props.children),typeof d=="object"&&d!==null){switch(d.$$typeof){case ur:e:{for(var E=d.key,_=a;_!==null;){if(_.key===E){if(E=d.type,E===Ut){if(_.tag===7){n(f,_.sibling),a=l(_,d.props.children),a.return=f,f=a;break e}}else if(_.elementType===E||typeof E=="object"&&E!==null&&E.$$typeof===tt&&Au(E)===_.type){n(f,_.sibling),a=l(_,d.props),a.ref=kn(f,_,d),a.return=f,f=a;break e}n(f,_);break}else t(f,_);_=_.sibling}d.type===Ut?(a=Ot(d.props.children,f.mode,y,d.key),a.return=f,f=a):(y=Ar(d.type,d.key,d.props,null,f.mode,y),y.ref=kn(f,a,d),y.return=f,f=y)}return o(f);case Ft:e:{for(_=d.key;a!==null;){if(a.key===_)if(a.tag===4&&a.stateNode.containerInfo===d.containerInfo&&a.stateNode.implementation===d.implementation){n(f,a.sibling),a=l(a,d.children||[]),a.return=f,f=a;break e}else{n(f,a);break}else 
t(f,a);a=a.sibling}a=Jl(d,f.mode,y),a.return=f,f=a}return o(f);case tt:return _=d._init,D(f,a,_(d._payload),y)}if(_n(d))return w(f,a,d,y);if(hn(d))return k(f,a,d,y);gr(f,d)}return typeof d=="string"&&d!==""||typeof d=="number"?(d=""+d,a!==null&&a.tag===6?(n(f,a.sibling),a=l(a,d),a.return=f,f=a):(n(f,a),a=Zl(d,f.mode,y),a.return=f,f=a),o(f)):n(f,a)}return D}var on=ga(!0),wa=ga(!1),rr={},He=gt(rr),Xn=gt(rr),Yn=gt(rr);function Pt(e){if(e===rr)throw Error(v(174));return e}function _o(e,t){switch(A(Yn,t),A(Xn,e),A(He,rr),e=t.nodeType,e){case 9:case 11:t=(t=t.documentElement)?t.namespaceURI:ci(null,"");break;default:e=e===8?t.parentNode:t,t=e.namespaceURI||null,e=e.tagName,t=ci(t,e)}j(He),A(He,t)}function un(){j(He),j(Xn),j(Yn)}function ka(e){Pt(Yn.current);var t=Pt(He.current),n=ci(t,e.type);t!==n&&(A(Xn,e),A(He,n))}function Co(e){Xn.current===e&&(j(He),j(Xn))}var U=gt(0);function br(e){for(var t=e;t!==null;){if(t.tag===13){var n=t.memoizedState;if(n!==null&&(n=n.dehydrated,n===null||n.data==="$?"||n.data==="$!"))return t}else if(t.tag===19&&t.memoizedProps.revealOrder!==void 0){if(t.flags&128)return t}else if(t.child!==null){t.child.return=t,t=t.child;continue}if(t===e)break;for(;t.sibling===null;){if(t.return===null||t.return===e)return null;t=t.return}t.sibling.return=t.return,t=t.sibling}return null}var Wl=[];function No(){for(var e=0;en?n:4,e(!0);var r=Ql.transition;Ql.transition={};try{e(!1),t()}finally{I=n,Ql.transition=r}}function ja(){return Te().memoizedState}function rp(e,t,n){var r=pt(e);if(n={lane:r,action:n,hasEagerState:!1,eagerState:null,next:null},Da(e))Fa(t,n);else if(n=ma(e,t,n,r),n!==null){var l=ce();De(n,e,r,l),Ua(n,t,r)}}function lp(e,t,n){var r=pt(e),l={lane:r,action:n,hasEagerState:!1,eagerState:null,next:null};if(Da(e))Fa(t,l);else{var i=e.alternate;if(e.lanes===0&&(i===null||i.lanes===0)&&(i=t.lastRenderedReducer,i!==null))try{var o=t.lastRenderedState,u=i(o,n);if(l.hasEagerState=!0,l.eagerState=u,Fe(u,o)){var 
s=t.interleaved;s===null?(l.next=l,Eo(t)):(l.next=s.next,s.next=l),t.interleaved=l;return}}catch{}finally{}n=ma(e,t,l,r),n!==null&&(l=ce(),De(n,e,r,l),Ua(n,t,r))}}function Da(e){var t=e.alternate;return e===$||t!==null&&t===$}function Fa(e,t){Rn=el=!0;var n=e.pending;n===null?t.next=t:(t.next=n.next,n.next=t),e.pending=t}function Ua(e,t,n){if(n&4194240){var r=t.lanes;r&=e.pendingLanes,n|=r,t.lanes=n,uo(e,n)}}var tl={readContext:Oe,useCallback:ie,useContext:ie,useEffect:ie,useImperativeHandle:ie,useInsertionEffect:ie,useLayoutEffect:ie,useMemo:ie,useReducer:ie,useRef:ie,useState:ie,useDebugValue:ie,useDeferredValue:ie,useTransition:ie,useMutableSource:ie,useSyncExternalStore:ie,useId:ie,unstable_isNewReconciler:!1},ip={readContext:Oe,useCallback:function(e,t){return $e().memoizedState=[e,t===void 0?null:t],e},useContext:Oe,useEffect:ju,useImperativeHandle:function(e,t,n){return n=n!=null?n.concat([e]):null,Tr(4194308,4,La.bind(null,t,e),n)},useLayoutEffect:function(e,t){return Tr(4194308,4,e,t)},useInsertionEffect:function(e,t){return Tr(4,2,e,t)},useMemo:function(e,t){var n=$e();return t=t===void 0?null:t,e=e(),n.memoizedState=[e,t],e},useReducer:function(e,t,n){var r=$e();return t=n!==void 0?n(t):t,r.memoizedState=r.baseState=t,e={pending:null,interleaved:null,lanes:0,dispatch:null,lastRenderedReducer:e,lastRenderedState:t},r.queue=e,e=e.dispatch=rp.bind(null,$,e),[r.memoizedState,e]},useRef:function(e){var t=$e();return e={current:e},t.memoizedState=e},useState:Mu,useDebugValue:Lo,useDeferredValue:function(e){return $e().memoizedState=e},useTransition:function(){var e=Mu(!1),t=e[0];return e=np.bind(null,e[1]),$e().memoizedState=e,[t,e]},useMutableSource:function(){},useSyncExternalStore:function(e,t,n){var r=$,l=$e();if(F){if(n===void 0)throw Error(v(407));n=n()}else{if(n=t(),ee===null)throw Error(v(349));Rt&30||xa(r,t,n)}l.memoizedState=n;var i={value:n,getSnapshot:t};return l.queue=i,ju(Ca.bind(null,r,i,e),[e]),r.flags|=2048,Jn(9,_a.bind(null,r,i,n,t),void 
0,null),n},useId:function(){var e=$e(),t=ee.identifierPrefix;if(F){var n=Xe,r=Ke;n=(r&~(1<<32-je(r)-1)).toString(32)+n,t=":"+t+"R"+n,n=Gn++,0<\/script>",e=e.removeChild(e.firstChild)):typeof r.is=="string"?e=o.createElement(n,{is:r.is}):(e=o.createElement(n),n==="select"&&(o=e,r.multiple?o.multiple=!0:r.size&&(o.size=r.size))):e=o.createElementNS(e,n),e[Ve]=t,e[Kn]=r,Ya(e,t,!1,!1),t.stateNode=e;e:{switch(o=di(n,r),n){case"dialog":M("cancel",e),M("close",e),l=r;break;case"iframe":case"object":case"embed":M("load",e),l=r;break;case"video":case"audio":for(l=0;lan&&(t.flags|=128,r=!0,Sn(i,!1),t.lanes=4194304)}else{if(!r)if(e=br(o),e!==null){if(t.flags|=128,r=!0,n=e.updateQueue,n!==null&&(t.updateQueue=n,t.flags|=4),Sn(i,!0),i.tail===null&&i.tailMode==="hidden"&&!o.alternate&&!F)return oe(t),null}else 2*Q()-i.renderingStartTime>an&&n!==1073741824&&(t.flags|=128,r=!0,Sn(i,!1),t.lanes=4194304);i.isBackwards?(o.sibling=t.child,t.child=o):(n=i.last,n!==null?n.sibling=o:t.child=o,i.last=o)}return i.tail!==null?(t=i.tail,i.rendering=t,i.tail=t.sibling,i.renderingStartTime=Q(),t.sibling=null,n=U.current,A(U,r?n&1|2:n&1),t):(oe(t),null);case 22:case 23:return Do(),r=t.memoizedState!==null,e!==null&&e.memoizedState!==null!==r&&(t.flags|=8192),r&&t.mode&1?ge&1073741824&&(oe(t),t.subtreeFlags&6&&(t.flags|=8192)):oe(t),null;case 24:return null;case 25:return null}throw Error(v(156,t.tag))}function pp(e,t){switch(vo(t),t.tag){case 1:return ye(t.type)&&Kr(),e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 3:return un(),j(he),j(se),No(),e=t.flags,e&65536&&!(e&128)?(t.flags=e&-65537|128,t):null;case 5:return Co(t),null;case 13:if(j(U),e=t.memoizedState,e!==null&&e.dehydrated!==null){if(t.alternate===null)throw Error(v(340));ln()}return e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 19:return j(U),null;case 4:return un(),null;case 10:return So(t.type._context),null;case 22:case 23:return Do(),null;case 24:return null;default:return null}}var kr=!1,ue=!1,mp=typeof 
WeakSet=="function"?WeakSet:Set,S=null;function Yt(e,t){var n=e.ref;if(n!==null)if(typeof n=="function")try{n(null)}catch(r){B(e,t,r)}else n.current=null}function Ui(e,t,n){try{n()}catch(r){B(e,t,r)}}var Qu=!1;function hp(e,t){if(Ei=Br,e=ea(),ho(e)){if("selectionStart"in e)var n={start:e.selectionStart,end:e.selectionEnd};else e:{n=(n=e.ownerDocument)&&n.defaultView||window;var r=n.getSelection&&n.getSelection();if(r&&r.rangeCount!==0){n=r.anchorNode;var l=r.anchorOffset,i=r.focusNode;r=r.focusOffset;try{n.nodeType,i.nodeType}catch{n=null;break e}var o=0,u=-1,s=-1,c=0,m=0,h=e,p=null;t:for(;;){for(var g;h!==n||l!==0&&h.nodeType!==3||(u=o+l),h!==i||r!==0&&h.nodeType!==3||(s=o+r),h.nodeType===3&&(o+=h.nodeValue.length),(g=h.firstChild)!==null;)p=h,h=g;for(;;){if(h===e)break t;if(p===n&&++c===l&&(u=o),p===i&&++m===r&&(s=o),(g=h.nextSibling)!==null)break;h=p,p=h.parentNode}h=g}n=u===-1||s===-1?null:{start:u,end:s}}else n=null}n=n||{start:0,end:0}}else n=null;for(xi={focusedElem:e,selectionRange:n},Br=!1,S=t;S!==null;)if(t=S,e=t.child,(t.subtreeFlags&1028)!==0&&e!==null)e.return=t,S=e;else for(;S!==null;){t=S;try{var w=t.alternate;if(t.flags&1024)switch(t.tag){case 0:case 11:case 15:break;case 1:if(w!==null){var k=w.memoizedProps,D=w.memoizedState,f=t.stateNode,a=f.getSnapshotBeforeUpdate(t.elementType===t.type?k:Ie(t.type,k),D);f.__reactInternalSnapshotBeforeUpdate=a}break;case 3:var d=t.stateNode.containerInfo;d.nodeType===1?d.textContent="":d.nodeType===9&&d.documentElement&&d.removeChild(d.documentElement);break;case 5:case 6:case 4:case 17:break;default:throw Error(v(163))}}catch(y){B(t,t.return,y)}if(e=t.sibling,e!==null){e.return=t.return,S=e;break}S=t.return}return w=Qu,Qu=!1,w}function In(e,t,n){var r=t.updateQueue;if(r=r!==null?r.lastEffect:null,r!==null){var l=r=r.next;do{if((l.tag&e)===e){var i=l.destroy;l.destroy=void 0,i!==void 0&&Ui(t,n,i)}l=l.next}while(l!==r)}}function yl(e,t){if(t=t.updateQueue,t=t!==null?t.lastEffect:null,t!==null){var 
n=t=t.next;do{if((n.tag&e)===e){var r=n.create;n.destroy=r()}n=n.next}while(n!==t)}}function $i(e){var t=e.ref;if(t!==null){var n=e.stateNode;switch(e.tag){case 5:e=n;break;default:e=n}typeof t=="function"?t(e):t.current=e}}function Ja(e){var t=e.alternate;t!==null&&(e.alternate=null,Ja(t)),e.child=null,e.deletions=null,e.sibling=null,e.tag===5&&(t=e.stateNode,t!==null&&(delete t[Ve],delete t[Kn],delete t[Ni],delete t[Jd],delete t[qd])),e.stateNode=null,e.return=null,e.dependencies=null,e.memoizedProps=null,e.memoizedState=null,e.pendingProps=null,e.stateNode=null,e.updateQueue=null}function qa(e){return e.tag===5||e.tag===3||e.tag===4}function Ku(e){e:for(;;){for(;e.sibling===null;){if(e.return===null||qa(e.return))return null;e=e.return}for(e.sibling.return=e.return,e=e.sibling;e.tag!==5&&e.tag!==6&&e.tag!==18;){if(e.flags&2||e.child===null||e.tag===4)continue e;e.child.return=e,e=e.child}if(!(e.flags&2))return e.stateNode}}function Vi(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.nodeType===8?n.parentNode.insertBefore(e,t):n.insertBefore(e,t):(n.nodeType===8?(t=n.parentNode,t.insertBefore(e,n)):(t=n,t.appendChild(e)),n=n._reactRootContainer,n!=null||t.onclick!==null||(t.onclick=Qr));else if(r!==4&&(e=e.child,e!==null))for(Vi(e,t,n),e=e.sibling;e!==null;)Vi(e,t,n),e=e.sibling}function Bi(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.insertBefore(e,t):n.appendChild(e);else if(r!==4&&(e=e.child,e!==null))for(Bi(e,t,n),e=e.sibling;e!==null;)Bi(e,t,n),e=e.sibling}var te=null,Ae=!1;function et(e,t,n){for(n=n.child;n!==null;)ba(e,t,n),n=n.sibling}function ba(e,t,n){if(Be&&typeof Be.onCommitFiberUnmount=="function")try{Be.onCommitFiberUnmount(sl,n)}catch{}switch(n.tag){case 5:ue||Yt(n,t);case 6:var r=te,l=Ae;te=null,et(e,t,n),te=r,Ae=l,te!==null&&(Ae?(e=te,n=n.stateNode,e.nodeType===8?e.parentNode.removeChild(n):e.removeChild(n)):te.removeChild(n.stateNode));break;case 
18:te!==null&&(Ae?(e=te,n=n.stateNode,e.nodeType===8?Bl(e.parentNode,n):e.nodeType===1&&Bl(e,n),Vn(e)):Bl(te,n.stateNode));break;case 4:r=te,l=Ae,te=n.stateNode.containerInfo,Ae=!0,et(e,t,n),te=r,Ae=l;break;case 0:case 11:case 14:case 15:if(!ue&&(r=n.updateQueue,r!==null&&(r=r.lastEffect,r!==null))){l=r=r.next;do{var i=l,o=i.destroy;i=i.tag,o!==void 0&&(i&2||i&4)&&Ui(n,t,o),l=l.next}while(l!==r)}et(e,t,n);break;case 1:if(!ue&&(Yt(n,t),r=n.stateNode,typeof r.componentWillUnmount=="function"))try{r.props=n.memoizedProps,r.state=n.memoizedState,r.componentWillUnmount()}catch(u){B(n,t,u)}et(e,t,n);break;case 21:et(e,t,n);break;case 22:n.mode&1?(ue=(r=ue)||n.memoizedState!==null,et(e,t,n),ue=r):et(e,t,n);break;default:et(e,t,n)}}function Xu(e){var t=e.updateQueue;if(t!==null){e.updateQueue=null;var n=e.stateNode;n===null&&(n=e.stateNode=new mp),t.forEach(function(r){var l=_p.bind(null,e,r);n.has(r)||(n.add(r),r.then(l,l))})}}function Re(e,t){var n=t.deletions;if(n!==null)for(var r=0;rl&&(l=o),r&=~i}if(r=l,r=Q()-r,r=(120>r?120:480>r?480:1080>r?1080:1920>r?1920:3e3>r?3e3:4320>r?4320:1960*vp(r/1960))-r,10e?16:e,ot===null)var r=!1;else{if(e=ot,ot=null,ll=0,R&6)throw Error(v(331));var l=R;for(R|=4,S=e.current;S!==null;){var i=S,o=i.child;if(S.flags&16){var u=i.deletions;if(u!==null){for(var s=0;sQ()-Mo?zt(e,0):Ao|=n),ve(e,t)}function uc(e,t){t===0&&(e.mode&1?(t=fr,fr<<=1,!(fr&130023424)&&(fr=4194304)):t=1);var n=ce();e=Je(e,t),e!==null&&(er(e,t,n),ve(e,n))}function xp(e){var t=e.memoizedState,n=0;t!==null&&(n=t.retryLane),uc(e,n)}function _p(e,t){var n=0;switch(e.tag){case 13:var r=e.stateNode,l=e.memoizedState;l!==null&&(n=l.retryLane);break;case 19:r=e.stateNode;break;default:throw Error(v(314))}r!==null&&r.delete(t),uc(e,n)}var sc;sc=function(e,t,n){if(e!==null)if(e.memoizedProps!==t.pendingProps||he.current)me=!0;else{if(!(e.lanes&n)&&!(t.flags&128))return me=!1,fp(e,t,n);me=!!(e.flags&131072)}else me=!1,F&&t.flags&1048576&&fa(t,Gr,t.index);switch(t.lanes=0,t.tag){case 
2:var r=t.type;Lr(e,t),e=t.pendingProps;var l=rn(t,se.current);en(t,n),l=zo(null,t,r,e,l,n);var i=Oo();return t.flags|=1,typeof l=="object"&&l!==null&&typeof l.render=="function"&&l.$$typeof===void 0?(t.tag=1,t.memoizedState=null,t.updateQueue=null,ye(r)?(i=!0,Xr(t)):i=!1,t.memoizedState=l.state!==null&&l.state!==void 0?l.state:null,xo(t),l.updater=ml,t.stateNode=l,l._reactInternals=t,Ri(t,r,e,n),t=Mi(null,t,r,!0,i,n)):(t.tag=0,F&&i&&yo(t),ae(null,t,l,n),t=t.child),t;case 16:r=t.elementType;e:{switch(Lr(e,t),e=t.pendingProps,l=r._init,r=l(r._payload),t.type=r,l=t.tag=Np(r),e=Ie(r,e),l){case 0:t=Ai(null,t,r,e,n);break e;case 1:t=Bu(null,t,r,e,n);break e;case 11:t=$u(null,t,r,e,n);break e;case 14:t=Vu(null,t,r,Ie(r.type,e),n);break e}throw Error(v(306,r,""))}return t;case 0:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Ie(r,l),Ai(e,t,r,l,n);case 1:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Ie(r,l),Bu(e,t,r,l,n);case 3:e:{if(Qa(t),e===null)throw Error(v(387));r=t.pendingProps,i=t.memoizedState,l=i.element,ha(e,t),qr(t,r,null,n);var o=t.memoizedState;if(r=o.element,i.isDehydrated)if(i={element:r,isDehydrated:!1,cache:o.cache,pendingSuspenseBoundaries:o.pendingSuspenseBoundaries,transitions:o.transitions},t.updateQueue.baseState=i,t.memoizedState=i,t.flags&256){l=sn(Error(v(423)),t),t=Hu(e,t,r,n,l);break e}else if(r!==l){l=sn(Error(v(424)),t),t=Hu(e,t,r,n,l);break e}else for(we=ct(t.stateNode.containerInfo.firstChild),ke=t,F=!0,Me=null,n=wa(t,null,r,n),t.child=n;n;)n.flags=n.flags&-3|4096,n=n.sibling;else{if(ln(),r===l){t=qe(e,t,n);break e}ae(e,t,r,n)}t=t.child}return t;case 5:return ka(t),e===null&&Oi(t),r=t.type,l=t.pendingProps,i=e!==null?e.memoizedProps:null,o=l.children,_i(r,l)?o=null:i!==null&&_i(r,i)&&(t.flags|=32),Wa(e,t),ae(e,t,o,n),t.child;case 6:return e===null&&Oi(t),null;case 13:return Ka(e,t,n);case 4:return _o(t,t.stateNode.containerInfo),r=t.pendingProps,e===null?t.child=on(t,null,r,n):ae(e,t,r,n),t.child;case 11:return 
r=t.type,l=t.pendingProps,l=t.elementType===r?l:Ie(r,l),$u(e,t,r,l,n);case 7:return ae(e,t,t.pendingProps,n),t.child;case 8:return ae(e,t,t.pendingProps.children,n),t.child;case 12:return ae(e,t,t.pendingProps.children,n),t.child;case 10:e:{if(r=t.type._context,l=t.pendingProps,i=t.memoizedProps,o=l.value,A(Zr,r._currentValue),r._currentValue=o,i!==null)if(Fe(i.value,o)){if(i.children===l.children&&!he.current){t=qe(e,t,n);break e}}else for(i=t.child,i!==null&&(i.return=t);i!==null;){var u=i.dependencies;if(u!==null){o=i.child;for(var s=u.firstContext;s!==null;){if(s.context===r){if(i.tag===1){s=Ye(-1,n&-n),s.tag=2;var c=i.updateQueue;if(c!==null){c=c.shared;var m=c.pending;m===null?s.next=s:(s.next=m.next,m.next=s),c.pending=s}}i.lanes|=n,s=i.alternate,s!==null&&(s.lanes|=n),Ti(i.return,n,t),u.lanes|=n;break}s=s.next}}else if(i.tag===10)o=i.type===t.type?null:i.child;else if(i.tag===18){if(o=i.return,o===null)throw Error(v(341));o.lanes|=n,u=o.alternate,u!==null&&(u.lanes|=n),Ti(o,n,t),o=i.sibling}else o=i.child;if(o!==null)o.return=i;else for(o=i;o!==null;){if(o===t){o=null;break}if(i=o.sibling,i!==null){i.return=o.return,o=i;break}o=o.return}i=o}ae(e,t,l.children,n),t=t.child}return t;case 9:return l=t.type,r=t.pendingProps.children,en(t,n),l=Oe(l),r=r(l),t.flags|=1,ae(e,t,r,n),t.child;case 14:return r=t.type,l=Ie(r,t.pendingProps),l=Ie(r.type,l),Vu(e,t,r,l,n);case 15:return Ba(e,t,t.type,t.pendingProps,n);case 17:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Ie(r,l),Lr(e,t),t.tag=1,ye(r)?(e=!0,Xr(t)):e=!1,en(t,n),va(t,r,l),Ri(t,r,l,n),Mi(null,t,r,!0,e,n);case 19:return Xa(e,t,n);case 22:return Ha(e,t,n)}throw Error(v(156,t.tag))};function ac(e,t){return Ms(e,t)}function 
Cp(e,t,n,r){this.tag=e,this.key=n,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=t,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=r,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function Ne(e,t,n,r){return new Cp(e,t,n,r)}function Uo(e){return e=e.prototype,!(!e||!e.isReactComponent)}function Np(e){if(typeof e=="function")return Uo(e)?1:0;if(e!=null){if(e=e.$$typeof,e===ro)return 11;if(e===lo)return 14}return 2}function mt(e,t){var n=e.alternate;return n===null?(n=Ne(e.tag,t,e.key,e.mode),n.elementType=e.elementType,n.type=e.type,n.stateNode=e.stateNode,n.alternate=e,e.alternate=n):(n.pendingProps=t,n.type=e.type,n.flags=0,n.subtreeFlags=0,n.deletions=null),n.flags=e.flags&14680064,n.childLanes=e.childLanes,n.lanes=e.lanes,n.child=e.child,n.memoizedProps=e.memoizedProps,n.memoizedState=e.memoizedState,n.updateQueue=e.updateQueue,t=e.dependencies,n.dependencies=t===null?null:{lanes:t.lanes,firstContext:t.firstContext},n.sibling=e.sibling,n.index=e.index,n.ref=e.ref,n}function Ar(e,t,n,r,l,i){var o=2;if(r=e,typeof e=="function")Uo(e)&&(o=1);else if(typeof e=="string")o=5;else e:switch(e){case Ut:return Ot(n.children,l,i,t);case no:o=8,l|=8;break;case ni:return e=Ne(12,n,t,l|2),e.elementType=ni,e.lanes=i,e;case ri:return e=Ne(13,n,t,l),e.elementType=ri,e.lanes=i,e;case li:return e=Ne(19,n,t,l),e.elementType=li,e.lanes=i,e;case gs:return gl(n,l,i,t);default:if(typeof e=="object"&&e!==null)switch(e.$$typeof){case ys:o=10;break e;case vs:o=9;break e;case ro:o=11;break e;case lo:o=14;break e;case tt:o=16,r=null;break e}throw Error(v(130,e==null?e:typeof e,""))}return t=Ne(o,n,t,l),t.elementType=e,t.type=r,t.lanes=i,t}function Ot(e,t,n,r){return e=Ne(7,e,r,t),e.lanes=n,e}function gl(e,t,n,r){return e=Ne(22,e,r,t),e.elementType=gs,e.lanes=n,e.stateNode={isHidden:!1},e}function Zl(e,t,n){return 
e=Ne(6,e,null,t),e.lanes=n,e}function Jl(e,t,n){return t=Ne(4,e.children!==null?e.children:[],e.key,t),t.lanes=n,t.stateNode={containerInfo:e.containerInfo,pendingChildren:null,implementation:e.implementation},t}function Pp(e,t,n,r,l){this.tag=t,this.containerInfo=e,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=Ll(0),this.expirationTimes=Ll(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=Ll(0),this.identifierPrefix=r,this.onRecoverableError=l,this.mutableSourceEagerHydrationData=null}function $o(e,t,n,r,l,i,o,u,s){return e=new Pp(e,t,n,u,s),t===1?(t=1,i===!0&&(t|=8)):t=0,i=Ne(3,null,null,t),e.current=i,i.stateNode=e,i.memoizedState={element:r,isDehydrated:n,cache:null,transitions:null,pendingSuspenseBoundaries:null},xo(i),e}function zp(e,t,n){var r=3"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(t)}catch(n){console.error(n)}}t(),e.exports=Ee})(Of);var pc,ts=bl;pc=ts.createRoot,ts.hydrateRoot;const Y=new 
kf,Ip=["audio-classification","audio-to-audio","automatic-speech-recognition","conversational","depth-estimation","document-question-answering","feature-extraction","fill-mask","graph-ml","image-classification","image-segmentation","image-to-image","image-to-text","multiple-choice","object-detection","other","question-answering","reinforcement-learning","robotics","sentence-similarity","summarization","table-question-answering","table-to-text","tabular-classification","tabular-regression","tabular-to-text","text-classification","text-generation","text-retrieval","text-to-image","text-to-speech","text2text-generation","time-series-forecasting","token-classification","translation","unconditional-image-generation","video-classification","visual-question-answering","voice-activity-detection","zero-shot-classification","zero-shot-image-classification"].filter(e=>Object.getOwnPropertyNames(Y).includes(zf(e))),ql={},Ap=async e=>{if(ql[e])return ql[e];const t=[];for await(const n of Kc({search:{task:e}}))t.push(n);return t.sort((n,r)=>n.downloads>r.downloads?-1:n.downloadsr.likes?-1:n.likesr.name?-1:n.nameze("div",{className:"w-full",children:[P("p",{className:"text-xl",children:"Task"}),ze("select",{className:"bg-yellow-200 cursor-pointer py-6 text-center w-full",onChange:t=>e.setTask(t.target.value),placeholder:"Select a task",value:e.task,children:[P("option",{children:"Select a task"}),Ip.map(t=>P("option",{value:t,children:t},t))]})]}),jp=e=>{const[t,n]=ne.useState(!1),[r,l]=ne.useState([]);return ne.useEffect(()=>{e.task&&(n(!0),Ap(e.task).then(i=>l(i)).finally(()=>n(!1)))},[e.task]),r.length>0?ze("div",{className:"w-full",children:[P("p",{className:"text-xl",children:"Model"}),ze("select",{className:"bg-yellow-200 cursor-pointer py-6 text-center w-full",onChange:i=>e.setModel(i.target.value),placeholder:"Select a model",value:e.model,children:[P("option",{children:"Select a 
model"}),r.map(i=>P("option",{value:i.name,children:i.name},i.name))]})]}):P("p",{className:"text-center w-full",children:e.task?t?"Loading models for this task":"No models available for this task":"Select a task to view available models"})},Dp=e=>ze("div",{className:"w-full",children:[P("p",{className:"text-xl",children:"Inputs"}),e.inputs?P("audio",{className:"w-full",controls:!0,src:URL.createObjectURL(e.inputs)}):ze("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file chosen",P("input",{accept:"audio/*",className:"hidden",onChange:t=>{t.target.files&&t.target.files[0]&&e.setInputs(t.target.files[0])},type:"file"})]})]}),Fp=e=>ze("div",{className:"w-full",children:[P("p",{className:"text-xl",children:"Inputs"}),e.inputs?P("img",{className:"w-full",src:URL.createObjectURL(e.inputs)}):ze("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file chosen",P("input",{accept:"image/*",className:"hidden",onChange:t=>{t.target.files&&t.target.files[0]&&e.setInputs(t.target.files[0])},type:"file"})]})]}),Up=e=>ze("div",{className:"w-full",children:[P("p",{className:"text-xl",children:"Inputs"}),P("input",{className:"bg-yellow-200 py-6 text-center 
w-full",onChange:t=>{t.target.value?e.setInputs(t.target.value):e.setInputs("")},type:"text",value:e.inputs??""})]}),$p=e=>e.model&&e.task?["audio-classification","automatic-speech-recognition"].includes(e.task)?P(Dp,{inputs:e.inputs,model:e.model,setInputs:e.setInputs,task:e.task}):["image-classification","image-segmentation","image-to-text","object-detection"].includes(e.task)?P(Fp,{inputs:e.inputs,model:e.model,setInputs:e.setInputs,task:e.task}):["conversational","feature-extraction","fill-mask","question-answering","summarization","table-question-answering","text-classification","text-generation","text-to-image","token-classification","translation","zero-shot-classification"].includes(e.task)?P(Up,{inputs:e.inputs,model:e.model,setInputs:e.setInputs,task:e.task}):P("div",{className:"w-full",children:P("p",{className:"text-center",children:"Inference for this task is not yet supported."})}):P(ne.Fragment,{}),Vp=e=>{if(e.inputs&&e.model&&e.task){const t=()=>{e.setInputs(void 0),e.setOutput(void 0)};return P("button",{className:`border-4 border-yellow-200 py-6 text-center w-full ${e.loading?"cursor-not-allowed opacity-50":""}`,disabled:e.loading,onClick:t,children:"Clear"})}return P(ne.Fragment,{})},Bp=e=>{if(e.inputs&&e.model&&e.task){const t=async()=>{if(e.inputs&&e.model&&e.task){e.setLoading(!0);try{switch(e.task){case"audio-classification":{const n=await Y.audioClassification({data:e.inputs,model:e.model});e.setOutput(n);break}case"automatic-speech-recognition":{const n=await Y.automaticSpeechRecognition({data:e.inputs,model:e.model});e.setOutput(n);break}case"conversational":{const n=await Y.conversational({inputs:{text:e.inputs},model:e.model});e.setOutput(n);break}case"feature-extraction":{const n=await Y.featureExtraction({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"fill-mask":{const n=await Y.fillMask({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"image-classification":{const n=await 
Y.imageClassification({data:e.inputs,model:e.model});e.setOutput(n);break}case"image-segmentation":{const n=await Y.imageSegmentation({data:e.inputs,model:e.model});e.setOutput(n);break}case"image-to-text":{const n=await Y.imageToText({data:e.inputs,model:e.model});e.setOutput(n);break}case"object-detection":{const n=await Y.objectDetection({data:e.inputs,model:e.model});e.setOutput(n);break}case"question-answering":{const n=await Y.questionAnswering({inputs:{context:e.inputs,question:e.inputs},model:e.model});e.setOutput(n);break}case"summarization":{const n=await Y.summarization({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"table-question-answering":{const n=await Y.tableQuestionAnswering({inputs:{query:e.inputs,table:{[e.inputs]:[e.inputs]}},model:e.model});e.setOutput(n);break}case"text-classification":{const n=await Y.textClassification({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"text-generation":{const n=await Y.textGeneration({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"text-to-image":{const n=await Y.textToImage({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"token-classification":{const n=await Y.tokenClassification({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"translation":{const n=await Y.translation({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"zero-shot-classification":{const n=await Y.zeroShotClassification({inputs:e.inputs,model:e.model,parameters:{candidate_labels:[e.inputs]}});e.setOutput(n);break}}}catch(n){n instanceof Error&&e.setOutput(n.message)}e.setLoading(!1)}};return P("button",{className:`bg-yellow-200 py-6 text-center w-full ${e.loading?"cursor-not-allowed opacity-50":""}`,disabled:e.loading,onClick:t,children:e.loading?"Submitting":"Submit"})}return P(ne.Fragment,{})},Hp=e=>ze("div",{className:"w-full",children:[P("p",{className:"text-xl",children:"Output"}),P("img",{className:`w-full ${e.loading?"cursor-wait 
opacity-50":""}`,src:URL.createObjectURL(e.output)})]}),Wp=e=>{const t=(()=>{try{return JSON.stringify(e.output,void 0,2)}catch(n){if(n instanceof Error)return`Error during JSON.stringify: ${n.message}`}})();return ze("div",{className:"w-full",children:[P("p",{className:"text-xl",children:"Output"}),P("pre",{className:`bg-yellow-200 p-6 select-text w-full whitespace-pre-wrap ${e.loading?"cursor-wait opacity-50":""}`,children:t})]})},Qp=e=>e.output&&e.task?["text-to-image"].includes(e.task)?P(Hp,{loading:e.loading,output:e.output}):P(Wp,{loading:e.loading,output:e.output}):P(ne.Fragment,{}),Kp=()=>{const[e,t]=ne.useState(),[n,r]=ne.useState(),[l,i]=ne.useState(),[o,u]=ne.useState(!1),[s,c]=ne.useState();return P("div",{className:"bg-yellow-500 flex flex-col h-full items-center min-h-screen min-w-screen overflow-auto w-full",children:ze("div",{className:"flex flex-col items-center justify-center py-24 space-y-12 w-2/3 lg:w-1/3",children:[P("header",{className:"text-center text-6xl",children:"🤗"}),P(Mp,{setTask:t,task:e}),P(jp,{model:n,setModel:r,task:e}),P($p,{inputs:l,model:n,setInputs:i,task:e}),P(Vp,{inputs:l,loading:o,model:n,setInputs:i,setOutput:c,task:e}),P(Bp,{inputs:l,loading:o,model:n,setLoading:u,setOutput:c,task:e}),P(Qp,{loading:o,output:s,task:e})]})})},Xp=()=>{const e="root",t=document.getElementById(e);if(t){const n=pc(t),r=P(ne.StrictMode,{children:P(Kp,{})});n.render(r)}};Xp(); diff --git a/spaces/milyiyo/reimagine-it/captioning/utils/eval_multi.py b/spaces/milyiyo/reimagine-it/captioning/utils/eval_multi.py deleted file mode 100644 index 83907410b806a50002aa32db289ca86cff72f45d..0000000000000000000000000000000000000000 --- a/spaces/milyiyo/reimagine-it/captioning/utils/eval_multi.py +++ /dev/null @@ -1,218 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import torch -import torch.nn as nn - -import numpy as np -import json -from json import encoder -import random -import 
string -import time -import os -import sys -from . import misc as utils -from eval_utils import getCOCO - -from .div_utils import compute_div_n, compute_global_div_n - -import sys -try: - sys.path.append("coco-caption") - annFile = 'coco-caption/annotations/captions_val2014.json' - from pycocotools.coco import COCO - from pycocoevalcap.eval import COCOEvalCap - from pycocoevalcap.eval_spice import COCOEvalCapSpice - from pycocoevalcap.tokenizer.ptbtokenizer import PTBTokenizer - from pycocoevalcap.bleu.bleu import Bleu - sys.path.append("cider") - from pyciderevalcap.cider.cider import Cider -except: - print('Warning: requirements for eval_multi not satisfied') - - -def eval_allspice(dataset, preds_n, model_id, split): - coco = getCOCO(dataset) - valids = coco.getImgIds() - - capsById = {} - for d in preds_n: - capsById[d['image_id']] = capsById.get(d['image_id'], []) + [d] - - # filter results to only those in MSCOCO validation set (will be about a third) - preds_filt_n = [p for p in preds_n if p['image_id'] in valids] - print('using %d/%d predictions_n' % (len(preds_filt_n), len(preds_n))) - cache_path_n = os.path.join('eval_results/', model_id + '_' + split + '_n.json') - json.dump(preds_filt_n, open(cache_path_n, 'w')) # serialize to temporary json file. Sigh, COCO API... 
- - # Eval AllSPICE - cocoRes_n = coco.loadRes(cache_path_n) - cocoEvalAllSPICE = COCOEvalCapSpice(coco, cocoRes_n) - cocoEvalAllSPICE.params['image_id'] = cocoRes_n.getImgIds() - cocoEvalAllSPICE.evaluate() - - out = {} - for metric, score in cocoEvalAllSPICE.eval.items(): - out['All'+metric] = score - - imgToEvalAllSPICE = cocoEvalAllSPICE.imgToEval - # collect SPICE_sub_score - for k in list(imgToEvalAllSPICE.values())[0]['SPICE'].keys(): - if k != 'All': - out['AllSPICE_'+k] = np.array([v['SPICE'][k]['f'] for v in imgToEvalAllSPICE.values()]) - out['AllSPICE_'+k] = (out['AllSPICE_'+k][out['AllSPICE_'+k]==out['AllSPICE_'+k]]).mean() - for p in preds_filt_n: - image_id, caption = p['image_id'], p['caption'] - imgToEvalAllSPICE[image_id]['caption'] = capsById[image_id] - return {'overall': out, 'imgToEvalAllSPICE': imgToEvalAllSPICE} - -def eval_oracle(dataset, preds_n, model_id, split): - cache_path = os.path.join('eval_results/', model_id + '_' + split + '_n.json') - - coco = getCOCO(dataset) - valids = coco.getImgIds() - - capsById = {} - for d in preds_n: - capsById[d['image_id']] = capsById.get(d['image_id'], []) + [d] - - sample_n = capsById[list(capsById.keys())[0]] - for i in range(len(capsById[list(capsById.keys())[0]])): - preds = [_[i] for _ in capsById.values()] - - json.dump(preds, open(cache_path, 'w')) # serialize to temporary json file. Sigh, COCO API... 
- - cocoRes = coco.loadRes(cache_path) - cocoEval = COCOEvalCap(coco, cocoRes) - cocoEval.params['image_id'] = cocoRes.getImgIds() - cocoEval.evaluate() - - imgToEval = cocoEval.imgToEval - for img_id in capsById.keys(): - tmp = imgToEval[img_id] - for k in tmp['SPICE'].keys(): - if k != 'All': - tmp['SPICE_'+k] = tmp['SPICE'][k]['f'] - if tmp['SPICE_'+k] != tmp['SPICE_'+k]: # nan - tmp['SPICE_'+k] = -100 - tmp['SPICE'] = tmp['SPICE']['All']['f'] - if tmp['SPICE'] != tmp['SPICE']: tmp['SPICE'] = -100 - capsById[img_id][i]['scores'] = imgToEval[img_id] - - out = {'overall': {}, 'ImgToEval': {}} - for img_id in capsById.keys(): - out['ImgToEval'][img_id] = {} - for metric in capsById[img_id][0]['scores'].keys(): - if metric == 'image_id': continue - out['ImgToEval'][img_id]['oracle_'+metric] = max([_['scores'][metric] for _ in capsById[img_id]]) - out['ImgToEval'][img_id]['avg_'+metric] = sum([_['scores'][metric] for _ in capsById[img_id]]) / len(capsById[img_id]) - out['ImgToEval'][img_id]['captions'] = capsById[img_id] - for metric in list(out['ImgToEval'].values())[0].keys(): - if metric == 'captions': - continue - tmp = np.array([_[metric] for _ in out['ImgToEval'].values()]) - tmp = tmp[tmp!=-100] - out['overall'][metric] = tmp.mean() - - return out - -def eval_div_stats(dataset, preds_n, model_id, split): - tokenizer = PTBTokenizer() - - capsById = {} - for i, d in enumerate(preds_n): - d['id'] = i - capsById[d['image_id']] = capsById.get(d['image_id'], []) + [d] - - n_caps_perimg = len(capsById[list(capsById.keys())[0]]) - print(n_caps_perimg) - _capsById = capsById # save the untokenized version - capsById = tokenizer.tokenize(capsById) - - div_1, adiv_1 = compute_div_n(capsById,1) - div_2, adiv_2 = compute_div_n(capsById,2) - - globdiv_1, _= compute_global_div_n(capsById,1) - - print('Diversity Statistics are as follows: \n Div1: %.2f, Div2: %.2f, gDiv1: %d\n'%(div_1,div_2, globdiv_1)) - - # compute mbleu - scorer = Bleu(4) - all_scrs = [] - scrperimg = 
np.zeros((n_caps_perimg, len(capsById))) - - for i in range(n_caps_perimg): - tempRefsById = {} - candsById = {} - for k in capsById: - tempRefsById[k] = capsById[k][:i] + capsById[k][i+1:] - candsById[k] = [capsById[k][i]] - - score, scores = scorer.compute_score(tempRefsById, candsById) - all_scrs.append(score) - scrperimg[i,:] = scores[1] - - all_scrs = np.array(all_scrs) - - out = {} - out['overall'] = {'Div1': div_1, 'Div2': div_2, 'gDiv1': globdiv_1} - for k, score in zip(range(4), all_scrs.mean(axis=0).tolist()): - out['overall'].update({'mBLeu_%d'%(k+1): score}) - imgToEval = {} - for i,imgid in enumerate(capsById.keys()): - imgToEval[imgid] = {'mBleu_2' : scrperimg[:,i].mean()} - imgToEval[imgid]['individuals'] = [] - for j, d in enumerate(_capsById[imgid]): - imgToEval[imgid]['individuals'].append(preds_n[d['id']]) - imgToEval[imgid]['individuals'][-1]['mBleu_2'] = scrperimg[j,i] - out['ImgToEval'] = imgToEval - - print('Mean mutual Bleu scores on this set is:\nmBLeu_1, mBLeu_2, mBLeu_3, mBLeu_4') - print(all_scrs.mean(axis=0)) - - return out - -def eval_self_cider(dataset, preds_n, model_id, split): - cache_path = os.path.join('eval_results/', model_id + '_' + split + '_n.json') - - coco = getCOCO(dataset) - valids = coco.getImgIds() - - # Get Cider_scorer - Cider_scorer = Cider(df='corpus') - - tokenizer = PTBTokenizer() - gts = {} - for imgId in valids: - gts[imgId] = coco.imgToAnns[imgId] - gts = tokenizer.tokenize(gts) - - for imgId in valids: - Cider_scorer.cider_scorer += (None, gts[imgId]) - Cider_scorer.cider_scorer.compute_doc_freq() - Cider_scorer.cider_scorer.ref_len = np.log(float(len(Cider_scorer.cider_scorer.crefs))) - - # Prepare captions - capsById = {} - for d in preds_n: - capsById[d['image_id']] = capsById.get(d['image_id'], []) + [d] - - capsById = tokenizer.tokenize(capsById) - imgIds = list(capsById.keys()) - scores = Cider_scorer.my_self_cider([capsById[_] for _ in imgIds]) - - def get_div(eigvals): - eigvals = np.clip(eigvals, 0, 
None) - return -np.log(np.sqrt(eigvals[-1]) / (np.sqrt(eigvals).sum())) / np.log(len(eigvals)) - sc_scores = [get_div(np.linalg.eigvalsh(_/10)) for _ in scores] - score = np.mean(np.array(sc_scores)) - - imgToEval = {} - for i, image_id in enumerate(imgIds): - imgToEval[image_id] = {'self_cider': sc_scores[i], 'self_cider_mat': scores[i].tolist()} - return {'overall': {'self_cider': score}, 'imgToEval': imgToEval} - - - return score diff --git a/spaces/mpuig/gpt3-email-generator/app.py b/spaces/mpuig/gpt3-email-generator/app.py deleted file mode 100644 index 44411c1379ec49bfb3e0dd61397f33a4a73c6579..0000000000000000000000000000000000000000 --- a/spaces/mpuig/gpt3-email-generator/app.py +++ /dev/null @@ -1,136 +0,0 @@ -from random import choice - -import streamlit as st -import openai - -PROMPT_TEMPLATE = "Write a {tone} email to the customers of a {company_type} offering {offer}" - -VOICE_TONE_OPTIONS = "funny,formal,professional,informal,friendly,humorous," \ - "serious,optimistic,motivating,respectful,assertive," \ - "conversational,urgent".split(",") - -COMPANY_TYPE_OPTIONS = "bank,insurance,telecommunications (telco),retail,transportation".split(",") -# "pharmaceutical,energy,automotive,real estate,technology," \ -# "hospitality,food and beverage,healthcare,manufacturing,construction," \ -# "mining,agriculture,e-commerce,entertainment," \ -# "consulting services,accounting services,legal services".split(",") - -EXAMPLE_OFFERS = { - "bank": [ - "Checking Accounts that allows customers to deposit and withdraw funds, write checks, and make electronic transactions", - "Savings Accounts, where customers can deposit money and earn interest on their savings", - "Certificates of Deposit, a type of savings account where customers deposit money for a fixed term and earn a higher rate of interest", - "Personal Loans, a loan offered to individuals for personal use, such as home improvement, debt consolidation, or medical expenses", - "Home Loans, a loan for the purpose of 
purchasing or refinancing a home", - ], - "insurance": [ - "Auto Insurance: A type of insurance policy that provides coverage for losses related to an individual's car, including liability, collision, and comprehensive coverage", - "Home Insurance: A type of insurance policy that provides coverage for losses related to an individual's home, including protection for the structure, personal belongings, and liability coverage", - "Life Insurance: A type of insurance policy that provides financial protection to an individual's family in the event of their death", - "Health Insurance: A type of insurance policy that provides coverage for medical expenses and treatments, including doctor visits, hospital stays, and prescription drugs", - "Business Insurance: A type of insurance policy that provides coverage for losses related to a business, including liability, property, and workers' compensation coverage", - ], - "telecommunications (telco)": [ - "Postpaid Plan: A postpaid plan provides customers with a monthly bill for services used. The customer typically receives a set amount of data, minutes, and texts for a fixed price, with the option to add extra services for an additional fee", - "Prepaid Plan: A prepaid plan allows customers to pay for services in advance, before they use them. The customer adds credit to their account, which is then deducted for each call, text, or data usage", - "Family Plan: A family plan allows multiple users to share a single account, pooling their data, minutes, and texts. This type of plan is often more cost-effective than individual plans and is popular with families or groups of friends", - "Unlimited Plan: An unlimited plan provides customers with unlimited data, minutes, and texts for a fixed monthly fee. These plans are attractive to customers who use their mobile devices frequently and need a lot of data", - "Roaming Plan: A roaming plan provides customers with the ability to use their mobile devices while traveling abroad. 
The customer pays a fee for each day they use their device, and is provided with a set amount of data, minutes, and texts while they are overseas", - ], - "retail": [ - "Buy one, get one free: Customers can purchase one product and receive a second product of equal or lesser value for free", - "Limited-time discount: A temporary reduction in price for a specific product or product line, designed to encourage customers to make a purchase quickly", - "Bundled offer: A package deal that combines multiple products or services at a discounted price, often as a way to promote complementary products", - "Loyalty program: A reward system that incentivizes customers to continue making purchases by offering points, coupons, or other benefits for their spending", - "Free gift with purchase: Customers receive a complimentary item when they make a purchase, often to promote new products or drive sales of slower-moving inventory.", - ], - "transportation": [ - "Express Delivery Service - This offer would be ideal for customers who need to have their packages delivered quickly and with a guaranteed delivery time. This could be done through the use of priority shipping, courier services, and specialized delivery vehicles", - "Freight Shipping - This offer would target customers who need to transport large quantities of goods over long distances. The company would provide the necessary resources, such as shipping containers, trailers, and trucks, to safely transport the goods from point A to point B", - "Logistics Solutions - This offer would provide customers with a comprehensive set of services for managing their supply chain. This could include warehousing, inventory management, and order fulfillment services, among others", - "Shuttle Services - This offer would target customers who need to transport groups of people from one location to another, such as airport transfers, school trips, and group tours. 
The company would provide the necessary vehicles and drivers to safely transport the passengers", - "Last-Mile Delivery - This offer would be ideal for customers who need to have their packages delivered directly to the end customer. This could be done through the use of delivery vehicles, bicycles, and even drones, depending on the needs of the customer", - ] -} - -openai.api_key = st.secrets["openai-api-key"] - - -def generate_email(prompt: str, max_tokens: int = 256) -> str: - """ - Returns a generated an email using GPT3 with a certain prompt and starting sentence - """ - - completions = openai.Completion.create( - model="text-davinci-003", - prompt=prompt, - temperature=0.7, - max_tokens=max_tokens, - top_p=1, - frequency_penalty=0, - presence_penalty=0 - ) - message = completions.choices[0].text - return message - - -def company_type_changed(): - company_type = st.session_state['company_type'] - st.session_state['offer'] = choice(EXAMPLE_OFFERS.get(company_type)) - - -def main(): - st.title("Email Generator") - st.text("by Marc Puig") - - st.sidebar.markdown("### :arrow_right: Parameters") - - email_tone = st.sidebar.selectbox( - label="Email voice tone", - options=(sorted(VOICE_TONE_OPTIONS)) - ), - - email_company_type = st.sidebar.selectbox( - label="Company type", - key="company_type", - options=(sorted(COMPANY_TYPE_OPTIONS)), - on_change=company_type_changed, - ) - - if 'offer' not in st.session_state: - st.session_state['offer'] = choice(EXAMPLE_OFFERS.get(email_company_type)) - - email_offer = st.sidebar.text_area( - label="Offer description", - key="offer", - value=st.session_state['offer'], - height=200, - ) - - email_include_emojis = st.sidebar.checkbox('Include emojis 🤩') - - prompt_input = None - - if email_tone and email_company_type and email_offer: - prompt_input = PROMPT_TEMPLATE.format(tone=email_tone, company_type=email_company_type, offer=email_offer) - if email_include_emojis: - prompt_input = prompt_input + ", including emojis" - - 
max_tokens_input = st.slider( - label="How many tokens do you want your email to be? ", - help="A token is roughly 4 characters; a typical email is 100-500 characters", - min_value=64, - max_value=400, - value=200 - ) - - with st.form(key="form"): - if st.form_submit_button(label='Generate email', disabled=prompt_input is None or len(prompt_input) == 0): - with st.spinner("Generating email..."): - output = generate_email(prompt_input, max_tokens=max_tokens_input) - st.markdown("----") - st.markdown(output) - - -if __name__ == "__main__": - main() diff --git a/spaces/mrloler/oai-claude/src/utils.js b/spaces/mrloler/oai-claude/src/utils.js deleted file mode 100644 index df50ddf2150e882f01aebcaa91ed05b60ea97585..0000000000000000000000000000000000000000 --- a/spaces/mrloler/oai-claude/src/utils.js +++ /dev/null @@ -1,191 +0,0 @@ -const FormData = require('form-data'); - -const wait = (duration) => { - return new Promise((resolve) => { - setTimeout(() => { - resolve(); - }, duration); - }); -}; - -function buildPrompt(messages) { - let prompt = "\n\n" + preparePrompt(messages); - return prompt; - //do not escape for now - const escapedPrompt = prompt.replace(/\r?\n|\r/g, '\\n').replace(/"/g, '\\"'); - return escapedPrompt; -}; - -const readBody = (res, json) => new Promise((resolve, reject) => { - let buffer = ''; - - res.on('data', chunk => { - buffer += chunk; - }); - - res.on('end', () => { - try { - if (json) buffer = JSON.parse(buffer); - resolve(buffer); - } catch (e) { - console.error(buffer); - reject(e); - } - }); -}) - -function preparePrompt(messages) { - return messages.filter(m => m.content?.trim()).map(m => { - let author = ''; - switch (m.role) { - case 'user': author = 'Human'; break; - case 'assistant': author = 'Assistant'; break; - case 'system': author = 'System Note'; break; - default: author = m.role; break; - } - - return `${author}: ${m.content.trim()}`; - }).join('\n\n'); -} - -const currentTime = () => { - const date = new Date(); - const year = date.getFullYear(); - 
const month = String(date.getMonth() + 1).padStart(2, '0'); - const day = String(date.getDate()).padStart(2, '0'); - const hours = String(date.getHours()).padStart(2, '0'); - const minutes = String(date.getMinutes()).padStart(2, '0'); - const seconds = String(date.getSeconds()).padStart(2, '0'); - const milliseconds = String(date.getMilliseconds()).padStart(3, '0'); - - return `${year}-${month}-${day} ${hours}:${minutes}:${seconds}.${milliseconds}`; -}; - -const genHeaders = (config) => { - return { - 'Cookie': `d=${config.cookie};`, - 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/112.0', - } -} - -function splitJsonArray(jsonArray, maxLength) { - let result = []; - let currentChunk = []; - let currentLength = 2; // Accounts for the opening and closing square brackets in the JSON array - - const jail = '(OOC: This is just a part of the context, reply only with "OOC: understood")'; - const assistant = "\n\nAssistant: "; - - const addObjectToChunk = (object, chunk) => { - chunk.push(object); - return currentLength + JSON.stringify(object).length + 1; - }; - - const appendTextToContent = (object, text) => { - const newObj = JSON.parse(JSON.stringify(object)); - newObj.content += text; - return newObj; - }; - - for (const obj of jsonArray) { - const objLength = JSON.stringify(obj).length + 1; - - if (currentLength + objLength <= maxLength) { - currentLength = addObjectToChunk(obj, currentChunk); - } else { - const lastObjectInChunk = currentChunk[currentChunk.length - 1]; - if (!lastObjectInChunk) continue; - const lastObjectWithJail = appendTextToContent(lastObjectInChunk, ` ${jail}`); - const lastObjectWithJailLength = JSON.stringify(lastObjectWithJail).length + 1; - - if (currentLength - JSON.stringify(lastObjectInChunk).length - 1 + lastObjectWithJailLength <= maxLength) { - currentChunk[currentChunk.length - 1] = lastObjectWithJail; - } - - result.push(currentChunk); - currentChunk = [obj]; - currentLength = 2 + 
objLength; - } - } - - if (currentChunk.length > 0) { - result.push(currentChunk); - } - - const lastChunk = result[result.length - 1]; - const lastObjectInLastChunk = lastChunk[lastChunk.length - 1]; - const lastObjectWithAssistant = appendTextToContent(lastObjectInLastChunk, assistant); - const lastObjectWithAssistantLength = JSON.stringify(lastObjectWithAssistant).length + 1; - - if (currentLength - JSON.stringify(lastObjectInLastChunk).length - 1 + lastObjectWithAssistantLength <= maxLength) { - lastChunk[lastChunk.length - 1] = lastObjectWithAssistant; - } - - return result; -} - -function convertToUnixTime(date) { - const unixTime = Math.floor(date.getTime() / 1000); - const randomDigit = Math.floor(Math.random() * 10); - return `${unixTime}.xxxxx${randomDigit}`; -} - -function createBaseForm(config) { - const form = new FormData(); - form.append('token', config.token); - form.append('channel', `${config.claudeId}`); - form.append('_x_mode', 'online'); - form.append('_x_sonic', 'true'); - return form; -} - -// Add the utility functions here -// e.g. escapePrompt, readBody, preparePrompt, currentTime, headers, convertToUnixTime, createBaseForm - -const dataToResponse = ( - data, - promptTokens, - completionTokens, - stream = false, - reason = null -) => { - const currDate = new Date(); - const contentData = { content: data, role: 'assistant' }; - const contentName = stream ? 'delta' : 'message'; - - return { - choices: [ - { - [contentName]: !!data ? 
contentData : {}, - finish_reason: reason, - index: 0, - }, - ], - created: currDate.getTime(), - id: `chatcmpl-${(Math.random().toString(36).slice(2))}`, - object: 'chat.completion.chunk', - usage: { - prompt_tokens: promptTokens, - completion_tokens: completionTokens, - total_tokens: promptTokens + completionTokens, - }, - }; -}; - -const stats = { - prompts: [] -} - -module.exports = { - buildPrompt, - readBody, - preparePrompt, - currentTime, - genHeaders, - convertToUnixTime, - createBaseForm, - splitJsonArray, - wait, - dataToResponse, - stats, -}; \ No newline at end of file diff --git a/spaces/mrm8488/speech-to-diffusion/README.md b/spaces/mrm8488/speech-to-diffusion/README.md deleted file mode 100644 index 5b7b0535368d21ce73e410b22ee418d1e0f88048..0000000000000000000000000000000000000000 --- a/spaces/mrm8488/speech-to-diffusion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Speech To Diffusion -emoji: 🌖 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.3 -app_file: app.py -pinned: false -license: wtfpl ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mrrandom123/image_creative_caption_new/app.py b/spaces/mrrandom123/image_creative_caption_new/app.py deleted file mode 100644 index b0ef195f114baf72e102bb0a67a3ad41db94e55a..0000000000000000000000000000000000000000 --- a/spaces/mrrandom123/image_creative_caption_new/app.py +++ /dev/null @@ -1,56 +0,0 @@ -import streamlit as st -import os -import cohere -from PIL import Image -from transformers import BlipProcessor, BlipForConditionalGeneration, AutoTokenizer -import itertools -from nltk.corpus import stopwords -import nltk -import easyocr -import torch -import numpy as np -nltk.download('stopwords') - -COHERE_API_KEY = os.getenv('COHERE_API_KEY') -co_client = cohere.Client(COHERE_API_KEY) - - -processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") -model = 
BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") - -tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning") -reader = easyocr.Reader(['en']) -# set up Streamlit app -st.set_page_config(layout='wide', page_title='Image Hashtag Recommender') - -def generate_caption(image_file): - image = Image.open(image_file).convert('RGB') - inputs = processor(image, return_tensors="pt") - output_ids = model.generate(**inputs) - output_text = processor.decode(output_ids[0], skip_special_tokens=True) - return output_text - -st.title("Image Caption and HashTag Generator") -image_file = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"]) - -def creative_caption(text): - return co_client.generate(prompt=f"Write some trendy, catchy, exciting, innovative, captivating, creative and engaging instagram captions for the following prompt - {text}").generations[0].text - - -def caption_hashtags(text): - return co_client.generate(prompt=f"Write 10 trendy instagram hashtags for the following prompt - {text}").generations[0].text - -if image_file is not None: - try: - caption = generate_caption(image_file) - caption_text = creative_caption(caption) - hashtags = caption_hashtags(caption) - if len(caption) > 0: - st.write(f"Caption : {caption}") - st.write(f"Creative Caption : {caption_text}") - st.write(f"Creative hashtags : {hashtags}") - - else: - st.write("No caption found for this image.") - except Exception as e: - st.write(f"Error: {e}") diff --git a/spaces/mrwenchen/stabilityai-stable-diffusion-2-1/app.py b/spaces/mrwenchen/stabilityai-stable-diffusion-2-1/app.py deleted file mode 100644 index 0160420876923d89f2ab5fccb9f4d13725e29972..0000000000000000000000000000000000000000 --- a/spaces/mrwenchen/stabilityai-stable-diffusion-2-1/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-2-1").launch() \ No newline at end of file diff --git 
a/spaces/mshkdm/VToonify/vtoonify/model/vgg.py b/spaces/mshkdm/VToonify/vtoonify/model/vgg.py deleted file mode 100644 index a1043d5bd8bdd0d1484d2270ae0d33c29495856c..0000000000000000000000000000000000000000 --- a/spaces/mshkdm/VToonify/vtoonify/model/vgg.py +++ /dev/null @@ -1,60 +0,0 @@ -import torch -import torch.nn as nn -import torchvision - -# VGG architecter, used for the perceptual loss using a pretrained VGG network -class VGG19(torch.nn.Module): - def __init__(self, requires_grad=False): - super().__init__() - vgg_pretrained_features = torchvision.models.vgg19(pretrained=True).features - self.slice1 = torch.nn.Sequential() - self.slice2 = torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - self.slice6 = torch.nn.Sequential() - for x in range(2): - self.slice1.add_module(str(x), vgg_pretrained_features[x]) - for x in range(2, 7): - self.slice2.add_module(str(x), vgg_pretrained_features[x]) - for x in range(7, 12): - self.slice3.add_module(str(x), vgg_pretrained_features[x]) - for x in range(12, 21): - self.slice4.add_module(str(x), vgg_pretrained_features[x]) - for x in range(21, 32): - self.slice5.add_module(str(x), vgg_pretrained_features[x]) - for x in range(32, 36): - self.slice6.add_module(str(x), vgg_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - self.pool = nn.AdaptiveAvgPool2d(output_size=1) - - self.mean = torch.tensor([0.485, 0.456, 0.406]).view(1,-1, 1, 1).cuda() * 2 - 1 - self.std = torch.tensor([0.229, 0.224, 0.225]).view(1,-1, 1, 1).cuda() * 2 - - def forward(self, X): # relui_1 - X = (X-self.mean)/self.std - h_relu1 = self.slice1(X) - h_relu2 = self.slice2(h_relu1) - h_relu3 = self.slice3(h_relu2) - h_relu4 = self.slice4(h_relu3) - h_relu5 = self.slice5[:-2](h_relu4) - out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5] - return out - -# Perceptual loss that uses a pretrained VGG network 
-class VGGLoss(nn.Module): - def __init__(self): - super(VGGLoss, self).__init__() - self.vgg = VGG19().cuda() - self.criterion = nn.L1Loss() - self.weights = [1.0 / 32, 1.0 / 16, 1.0 / 8, 1.0 / 4, 1.0] - - def forward(self, x, y): - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - loss = 0 - for i in range(len(x_vgg)): - loss += self.weights[i] * self.criterion(x_vgg[i], y_vgg[i].detach()) - return loss \ No newline at end of file diff --git a/spaces/mshukor/UnIVAL/data/data_utils.py b/spaces/mshukor/UnIVAL/data/data_utils.py deleted file mode 100644 index d45beb1aca2e55b1ca9b2c01ce1a869ad9a2121d..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/data/data_utils.py +++ /dev/null @@ -1,601 +0,0 @@ -# Copyright 2022 The OFA-Sys Team. -# All rights reserved. -# This source code is licensed under the Apache 2.0 license -# found in the LICENSE file in the root directory. - -try: - from collections.abc import Iterable -except ImportError: - from collections import Iterable -import contextlib -import itertools -import logging -import re -import warnings -from typing import Optional, Tuple - -import numpy as np -import torch - -from fairseq.file_io import PathManager -from fairseq import utils -import os - -logger = logging.getLogger(__name__) - - -def infer_language_pair(path): - """Infer language pair from filename: .-.(...).idx""" - src, dst = None, None - for filename in PathManager.ls(path): - parts = filename.split(".") - if len(parts) >= 3 and len(parts[1].split("-")) == 2: - return parts[1].split("-") - return src, dst - - -def collate_tokens( - values, - pad_idx, - eos_idx=None, - left_pad=False, - move_eos_to_beginning=False, - pad_to_length=None, - pad_to_multiple=1, - pad_to_bsz=None, -): - """Convert a list of 1d tensors into a padded 2d tensor.""" - size = max(v.size(0) for v in values) - size = size if pad_to_length is None else max(size, pad_to_length) - if pad_to_multiple != 1 and size % pad_to_multiple != 0: - size = int(((size - 0.1) // 
pad_to_multiple + 1) * pad_to_multiple) - - def copy_tensor(src, dst): - assert dst.numel() == src.numel() - if move_eos_to_beginning: - if eos_idx is None: - # if no eos_idx is specified, then use the last token in src - dst[0] = src[-1] - else: - dst[0] = eos_idx - dst[1:] = src[:-1] - else: - dst.copy_(src) - - if values[0].dim() == 1: - res = values[0].new(len(values), size).fill_(pad_idx) - elif values[0].dim() == 2: - assert move_eos_to_beginning is False - res = values[0].new(len(values), size, values[0].size(1)).fill_(pad_idx) - else: - raise NotImplementedError - - for i, v in enumerate(values): - copy_tensor(v, res[i][size - len(v) :] if left_pad else res[i][: len(v)]) - return res - - -def load_indexed_dataset( - path, dictionary=None, dataset_impl=None, combine=False, default="cached" -): - """A helper function for loading indexed datasets. - - Args: - path (str): path to indexed dataset (e.g., 'data-bin/train') - dictionary (~fairseq.data.Dictionary): data dictionary - dataset_impl (str, optional): which dataset implementation to use. If - not provided, it will be inferred automatically. For legacy indexed - data we use the 'cached' implementation by default. - combine (bool, optional): automatically load and combine multiple - datasets. For example, if *path* is 'data-bin/train', then we will - combine 'data-bin/train', 'data-bin/train1', ... and return a - single ConcatDataset instance. 
- """ - import fairseq.data.indexed_dataset as indexed_dataset - from fairseq.data.concat_dataset import ConcatDataset - - datasets = [] - for k in itertools.count(): - path_k = path + (str(k) if k > 0 else "") - try: - path_k = indexed_dataset.get_indexed_dataset_to_local(path_k) - except Exception as e: - if "StorageException: [404] Path not found" in str(e): - logger.warning(f"path_k: {e} not found") - else: - raise e - - dataset_impl_k = dataset_impl - if dataset_impl_k is None: - dataset_impl_k = indexed_dataset.infer_dataset_impl(path_k) - dataset = indexed_dataset.make_dataset( - path_k, - impl=dataset_impl_k or default, - fix_lua_indexing=True, - dictionary=dictionary, - ) - if dataset is None: - break - logger.info("loaded {:,} examples from: {}".format(len(dataset), path_k)) - datasets.append(dataset) - if not combine: - break - if len(datasets) == 0: - return None - elif len(datasets) == 1: - return datasets[0] - else: - return ConcatDataset(datasets) - - -@contextlib.contextmanager -def numpy_seed(seed, *addl_seeds): - """Context manager which seeds the NumPy PRNG with the specified seed and - restores the state afterward""" - if seed is None: - yield - return - if len(addl_seeds) > 0: - seed = int(hash((seed, *addl_seeds)) % 1e6) - state = np.random.get_state() - np.random.seed(seed) - try: - yield - finally: - np.random.set_state(state) - - -def collect_filtered(function, iterable, filtered): - """ - Similar to :func:`filter` but collects filtered elements in ``filtered``. 
- - Args: - function (callable): function that returns ``False`` for elements that - should be filtered - iterable (iterable): iterable to filter - filtered (list): list to store filtered elements - """ - for el in iterable: - if function(el): - yield el - else: - filtered.append(el) - - -def _filter_by_size_dynamic(indices, size_fn, max_positions, raise_exception=False): - def compare_leq(a, b): - return a <= b if not isinstance(a, tuple) else max(a) <= b - - def check_size(idx): - if isinstance(max_positions, float) or isinstance(max_positions, int): - return size_fn(idx) <= max_positions - elif isinstance(max_positions, dict): - idx_size = size_fn(idx) - assert isinstance(idx_size, dict) - intersect_keys = set(max_positions.keys()) & set(idx_size.keys()) - return all( - all( - a is None or b is None or a <= b - for a, b in zip(idx_size[key], max_positions[key]) - ) - for key in intersect_keys - ) - else: - # For MultiCorpusSampledDataset, will generalize it later - if not isinstance(size_fn(idx), Iterable): - return all(size_fn(idx) <= b for b in max_positions) - return all( - a is None or b is None or a <= b - for a, b in zip(size_fn(idx), max_positions) - ) - - ignored = [] - itr = collect_filtered(check_size, indices, ignored) - indices = np.fromiter(itr, dtype=np.int64, count=-1) - return indices, ignored - - -def filter_by_size(indices, dataset, max_positions, raise_exception=False): - """ - [deprecated] Filter indices based on their size. - Use `FairseqDataset::filter_indices_by_size` instead. - - Args: - indices (List[int]): ordered list of dataset indices - dataset (FairseqDataset): fairseq dataset instance - max_positions (tuple): filter elements larger than this size. - Comparisons are done component-wise. - raise_exception (bool, optional): if ``True``, raise an exception if - any elements are filtered (default: False). - """ - warnings.warn( - "data_utils.filter_by_size is deprecated. 
" - "Use `FairseqDataset::filter_indices_by_size` instead.", - stacklevel=2, - ) - if isinstance(max_positions, float) or isinstance(max_positions, int): - if hasattr(dataset, "sizes") and isinstance(dataset.sizes, np.ndarray): - ignored = indices[dataset.sizes[indices] > max_positions].tolist() - indices = indices[dataset.sizes[indices] <= max_positions] - elif ( - hasattr(dataset, "sizes") - and isinstance(dataset.sizes, list) - and len(dataset.sizes) == 1 - ): - ignored = indices[dataset.sizes[0][indices] > max_positions].tolist() - indices = indices[dataset.sizes[0][indices] <= max_positions] - else: - indices, ignored = _filter_by_size_dynamic( - indices, dataset.size, max_positions - ) - else: - indices, ignored = _filter_by_size_dynamic(indices, dataset.size, max_positions) - - if len(ignored) > 0 and raise_exception: - raise Exception( - ( - "Size of sample #{} is invalid (={}) since max_positions={}, " - "skip this example with --skip-invalid-size-inputs-valid-test" - ).format(ignored[0], dataset.size(ignored[0]), max_positions) - ) - if len(ignored) > 0: - logger.warning( - ( - "{} samples have invalid sizes and will be skipped, " - "max_positions={}, first few sample ids={}" - ).format(len(ignored), max_positions, ignored[:10]) - ) - return indices - - -def filter_paired_dataset_indices_by_size(src_sizes, tgt_sizes, indices, max_sizes): - """Filter a list of sample indices. Remove those that are longer - than specified in max_sizes. 
- - Args: - indices (np.array): original array of sample indices - max_sizes (int or list[int] or tuple[int]): max sample size, - can be defined separately for src and tgt (then list or tuple) - - Returns: - np.array: filtered sample array - list: list of removed indices - """ - if max_sizes is None: - return indices, [] - if type(max_sizes) in (int, float): - max_src_size, max_tgt_size = max_sizes, max_sizes - else: - max_src_size, max_tgt_size = max_sizes - if tgt_sizes is None: - ignored = indices[src_sizes[indices] > max_src_size] - else: - ignored = indices[ - (src_sizes[indices] > max_src_size) | (tgt_sizes[indices] > max_tgt_size) - ] - if len(ignored) > 0: - if tgt_sizes is None: - indices = indices[src_sizes[indices] <= max_src_size] - else: - indices = indices[ - (src_sizes[indices] <= max_src_size) - & (tgt_sizes[indices] <= max_tgt_size) - ] - return indices, ignored.tolist() - - -def batch_by_size( - indices, - num_tokens_fn, - num_tokens_vec=None, - max_tokens=None, - max_sentences=None, - required_batch_size_multiple=1, - fixed_shapes=None, -): - """ - Yield mini-batches of indices bucketed by size. Batches may contain - sequences of different lengths. - - Args: - indices (List[int]): ordered list of dataset indices - num_tokens_fn (callable): function that returns the number of tokens at - a given index - num_tokens_vec (List[int], optional): precomputed vector of the number - of tokens for each index in indices (to enable faster batch generation) - max_tokens (int, optional): max number of tokens in each batch - (default: None). - max_sentences (int, optional): max number of sentences in each - batch (default: None). - required_batch_size_multiple (int, optional): require batch size to - be less than N or a multiple of N (default: 1). - fixed_shapes (List[Tuple[int, int]], optional): if given, batches will - only be created with the given shapes. *max_sentences* and - *required_batch_size_multiple* will be ignored (default: None). 
- """ - try: - from fairseq.data.data_utils_fast import ( - batch_by_size_fn, - batch_by_size_vec, - batch_fixed_shapes_fast, - ) - except ImportError: - raise ImportError( - "Please build Cython components with: " - "`python setup.py build_ext --inplace`" - ) - except ValueError: - raise ValueError( - "Please build (or rebuild) Cython components with `python setup.py build_ext --inplace`." - ) - - # added int() to avoid TypeError: an integer is required - max_tokens = ( - int(max_tokens) if max_tokens is not None else -1 - ) - max_sentences = max_sentences if max_sentences is not None else -1 - bsz_mult = required_batch_size_multiple - - if not isinstance(indices, np.ndarray): - indices = np.fromiter(indices, dtype=np.int64, count=-1) - - if num_tokens_vec is not None and not isinstance(num_tokens_vec, np.ndarray): - num_tokens_vec = np.fromiter(num_tokens_vec, dtype=np.int64, count=-1) - - if fixed_shapes is None: - if num_tokens_vec is None: - return batch_by_size_fn( - indices, - num_tokens_fn, - max_tokens, - max_sentences, - bsz_mult, - ) - else: - return batch_by_size_vec( - indices, - num_tokens_vec, - max_tokens, - max_sentences, - bsz_mult, - ) - - else: - fixed_shapes = np.array(fixed_shapes, dtype=np.int64) - sort_order = np.lexsort( - [ - fixed_shapes[:, 1].argsort(), # length - fixed_shapes[:, 0].argsort(), # bsz - ] - ) - fixed_shapes_sorted = fixed_shapes[sort_order] - return batch_fixed_shapes_fast(indices, num_tokens_fn, fixed_shapes_sorted) - - -def post_process(sentence: str, symbol: str): - if symbol == "sentencepiece": - sentence = sentence.replace(" ", "").replace("\u2581", " ").strip() - elif symbol == "wordpiece": - sentence = sentence.replace(" ", "").replace("_", " ").strip() - elif symbol == "letter": - sentence = sentence.replace(" ", "").replace("|", " ").strip() - elif symbol == "silence": - import re - sentence = sentence.replace("", "") - sentence = re.sub(' +', ' ', sentence).strip() - elif symbol == "_EOW": - sentence = 
sentence.replace(" ", "").replace("_EOW", " ").strip() - elif symbol in {"subword_nmt", "@@ ", "@@"}: - if symbol == "subword_nmt": - symbol = "@@ " - sentence = (sentence + " ").replace(symbol, "").rstrip() - elif symbol == "none": - pass - elif symbol is not None: - raise NotImplementedError(f"Unknown post_process option: {symbol}") - return sentence - - -def compute_mask_indices( - shape: Tuple[int, int], - padding_mask: Optional[torch.Tensor], - mask_prob: float, - mask_length: int, - mask_type: str = "static", - mask_other: float = 0.0, - min_masks: int = 0, - no_overlap: bool = False, - min_space: int = 0, -) -> np.ndarray: - """ - Computes random mask spans for a given shape - - Args: - shape: the the shape for which to compute masks. - should be of size 2 where first element is batch size and 2nd is timesteps - padding_mask: optional padding mask of the same size as shape, which will prevent masking padded elements - mask_prob: probability for each token to be chosen as start of the span to be masked. this will be multiplied by - number of timesteps divided by length of mask span to mask approximately this percentage of all elements. - however due to overlaps, the actual number will be smaller (unless no_overlap is True) - mask_type: how to compute mask lengths - static = fixed size - uniform = sample from uniform distribution [mask_other, mask_length*2] - normal = sample from normal distribution with mean mask_length and stdev mask_other. 
mask is min 1 element - poisson = sample from possion distribution with lambda = mask length - min_masks: minimum number of masked spans - no_overlap: if false, will switch to an alternative recursive algorithm that prevents spans from overlapping - min_space: only used if no_overlap is True, this is how many elements to keep unmasked between spans - """ - - bsz, all_sz = shape - mask = np.full((bsz, all_sz), False) - - all_num_mask = int( - # add a random number for probabilistic rounding - mask_prob * all_sz / float(mask_length) - + np.random.rand() - ) - - all_num_mask = max(min_masks, all_num_mask) - - mask_idcs = [] - for i in range(bsz): - if padding_mask is not None: - sz = all_sz - padding_mask[i].long().sum().item() - num_mask = int( - # add a random number for probabilistic rounding - mask_prob * sz / float(mask_length) - + np.random.rand() - ) - num_mask = max(min_masks, num_mask) - else: - sz = all_sz - num_mask = all_num_mask - - if mask_type == "static": - lengths = np.full(num_mask, mask_length) - elif mask_type == "uniform": - lengths = np.random.randint(mask_other, mask_length * 2 + 1, size=num_mask) - elif mask_type == "normal": - lengths = np.random.normal(mask_length, mask_other, size=num_mask) - lengths = [max(1, int(round(x))) for x in lengths] - elif mask_type == "poisson": - lengths = np.random.poisson(mask_length, size=num_mask) - lengths = [int(round(x)) for x in lengths] - else: - raise Exception("unknown mask selection " + mask_type) - - if sum(lengths) == 0: - lengths[0] = min(mask_length, sz - 1) - - if no_overlap: - mask_idc = [] - - def arrange(s, e, length, keep_length): - span_start = np.random.randint(s, e - length) - mask_idc.extend(span_start + i for i in range(length)) - - new_parts = [] - if span_start - s - min_space >= keep_length: - new_parts.append((s, span_start - min_space + 1)) - if e - span_start - keep_length - min_space > keep_length: - new_parts.append((span_start + length + min_space, e)) - return new_parts - - 
parts = [(0, sz)] - min_length = min(lengths) - for length in sorted(lengths, reverse=True): - lens = np.fromiter( - (e - s if e - s >= length + min_space else 0 for s, e in parts), - np.int, - ) - l_sum = np.sum(lens) - if l_sum == 0: - break - probs = lens / np.sum(lens) - c = np.random.choice(len(parts), p=probs) - s, e = parts.pop(c) - parts.extend(arrange(s, e, length, min_length)) - mask_idc = np.asarray(mask_idc) - else: - min_len = min(lengths) - if sz - min_len <= num_mask: - min_len = sz - num_mask - 1 - - mask_idc = np.random.choice(sz - min_len, num_mask, replace=False) - - mask_idc = np.asarray( - [ - mask_idc[j] + offset - for j in range(len(mask_idc)) - for offset in range(lengths[j]) - ] - ) - - mask_idcs.append(np.unique(mask_idc[mask_idc < sz])) - - min_len = min([len(m) for m in mask_idcs]) - for i, mask_idc in enumerate(mask_idcs): - if len(mask_idc) > min_len: - mask_idc = np.random.choice(mask_idc, min_len, replace=False) - mask[i, mask_idc] = True - - return mask - - -def get_mem_usage(): - try: - import psutil - - mb = 1024 * 1024 - return f"used={psutil.virtual_memory().used / mb}Mb; avail={psutil.virtual_memory().available / mb}Mb" - except ImportError: - return "N/A" - - -# lens: torch.LongTensor -# returns: torch.BoolTensor -def lengths_to_padding_mask(lens): - bsz, max_lens = lens.size(0), torch.max(lens).item() - mask = torch.arange(max_lens).to(lens.device).view(1, max_lens) - mask = mask.expand(bsz, -1) >= lens.view(bsz, 1).expand(-1, max_lens) - return mask - - -# lens: torch.LongTensor -# returns: torch.BoolTensor -def lengths_to_mask(lens): - return ~lengths_to_padding_mask(lens) - - -def get_buckets(sizes, num_buckets): - buckets = np.unique( - np.percentile( - sizes, - np.linspace(0, 100, num_buckets + 1), - interpolation='lower', - )[1:] - ) - return buckets - - -def get_bucketed_sizes(orig_sizes, buckets): - sizes = np.copy(orig_sizes) - assert np.min(sizes) >= 0 - start_val = -1 - for end_val in buckets: - mask = (sizes > 
start_val) & (sizes <= end_val) - sizes[mask] = end_val - start_val = end_val - return sizes - - - -def _find_extra_valid_paths(dataset_path: str) -> set: - paths = utils.split_paths(dataset_path) - all_valid_paths = set() - for sub_dir in paths: - contents = PathManager.ls(sub_dir) - valid_paths = [c for c in contents if re.match("valid*[0-9].*", c) is not None] - all_valid_paths |= {os.path.basename(p) for p in valid_paths} - # Remove .bin, .idx etc - roots = {os.path.splitext(p)[0] for p in all_valid_paths} - return roots - - -def raise_if_valid_subsets_unintentionally_ignored(train_cfg) -> None: - """Raises if there are paths matching 'valid*[0-9].*' which are not combined or ignored.""" - if ( - train_cfg.dataset.ignore_unused_valid_subsets - or train_cfg.dataset.combine_valid_subsets - or train_cfg.dataset.disable_validation - or not hasattr(train_cfg.task, "data") - ): - return - other_paths = _find_extra_valid_paths(train_cfg.task.data) - specified_subsets = train_cfg.dataset.valid_subset.split(",") - ignored_paths = [p for p in other_paths if p not in specified_subsets] - if ignored_paths: - advice = "Set --combine-val to combine them or --ignore-unused-valid-subsets to ignore them." - msg = f"Valid paths {ignored_paths} will be ignored. {advice}" - raise ValueError(msg) diff --git a/spaces/mw00/chess-classification/app.py b/spaces/mw00/chess-classification/app.py deleted file mode 100644 index 1621b7c41ca22d9228a37dddc4aa0e0664411846..0000000000000000000000000000000000000000 --- a/spaces/mw00/chess-classification/app.py +++ /dev/null @@ -1,26 +0,0 @@ -# AUTOGENERATED! DO NOT EDIT! File to edit: chess-deployment.ipynb. 
- -# %% auto 0 -__all__ = ['learn', 'categories', 'image', 'label', 'examples', 'intf', 'classify_image'] - -# %% chess-deployment.ipynb 1 -from fastai.vision.all import * -import gradio as gr - -# %% chess-deployment.ipynb 3 -learn = load_learner("chess-model.pkl") - -# %% chess-deployment.ipynb 5 -categories = ("Bishop", "King", "Knight", "Pawn", "Queen", "Rook") - -def classify_image(img): - pred,pred_idx,probs = learn.predict(img) - return dict(zip(categories, map(float, probs))) - -# %% chess-deployment.ipynb 7 -image = gr.inputs.Image(shape=(192,192)) -label = gr.outputs.Label() -examples = ["bishop.png", "king.jpg", "knight.png", "rook.jpg", "pawn.jpg"] - -intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples, title="Chess Piece Classifier", description="Classify chess pieces into King, Queen, Bishop, Knight, Rook, or Pawn") -intf.launch(inline=False) diff --git a/spaces/mygyasir/Real-Time-Voice-Cloning/utils/__init__.py b/spaces/mygyasir/Real-Time-Voice-Cloning/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/nanglo123/GTSRB-Deployment/create_model.py b/spaces/nanglo123/GTSRB-Deployment/create_model.py deleted file mode 100644 index 5ba000a50ea6313abcd648ee55f433b37096280f..0000000000000000000000000000000000000000 --- a/spaces/nanglo123/GTSRB-Deployment/create_model.py +++ /dev/null @@ -1,17 +0,0 @@ -import torch -import torchvision -from model_class import CNNTraffic -from torch import nn -from torchvision import transforms - -def create_CNN(seed:int=42): - - model = CNNTraffic(input_shape=3,output_shape=43) - - - for param in model.parameters(): - param.requires_grad = False - - - - return model diff --git a/spaces/nateraw/dino-clips/dino/video_generation.py b/spaces/nateraw/dino-clips/dino/video_generation.py deleted file mode 100644 index 82b796961b536b67bd3549a38ae82b42fa2769d6..0000000000000000000000000000000000000000 
--- a/spaces/nateraw/dino-clips/dino/video_generation.py +++ /dev/null @@ -1,388 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import os -import glob -import sys -import argparse -import cv2 - -from tqdm import tqdm -import matplotlib.pyplot as plt -import torch -import torch.nn as nn -import torchvision -from torchvision import transforms as pth_transforms -import numpy as np -from PIL import Image - -import utils -import vision_transformer as vits - - -FOURCC = { - "mp4": cv2.VideoWriter_fourcc(*"MP4V"), - "avi": cv2.VideoWriter_fourcc(*"XVID"), -} -DEVICE = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") - - -class VideoGenerator: - def __init__(self, args): - self.args = args - # self.model = None - # Don't need to load model if you only want a video - if not self.args.video_only: - self.model = self.__load_model() - - def run(self): - if self.args.input_path is None: - print(f"Provided input path {self.args.input_path} is non valid.") - sys.exit(1) - else: - if self.args.video_only: - self._generate_video_from_images( - self.args.input_path, self.args.output_path - ) - else: - # If input path exists - if os.path.exists(self.args.input_path): - frames_folder = os.path.join(self.args.output_path, "frames") - os.makedirs(frames_folder, exist_ok=True) - # If input is a video file - if os.path.isfile(self.args.input_path): - attention_folder = os.path.join( - 
self.args.output_path, "attention" - ) - - os.makedirs(attention_folder, exist_ok=True) - - self._extract_frames_from_video( - self.args.input_path, frames_folder - ) - - self._inference( - frames_folder, - attention_folder, - ) - - self._generate_video_from_images( - attention_folder, self.args.output_path - ) - self._generate_video_from_images( - frames_folder, - self.args.output_path, - file_pattern="reshaped-*.jpg", - out_video_name="original-reshaped" - ) - # If input is a folder of already extracted frames - if os.path.isdir(self.args.input_path): - attention_folder = os.path.join( - self.args.output_path, "attention" - ) - - os.makedirs(attention_folder, exist_ok=True) - - self._inference(self.args.input_path, attention_folder) - - self._generate_video_from_images( - attention_folder, self.args.output_path - ) - self._generate_video_from_images( - frames_folder, - self.args.output_path, - file_pattern="reshaped-*.jpg", - out_video_name="original-reshaped" - ) - # If input path doesn't exists - else: - print(f"Provided input path {self.args.input_path} doesn't exists.") - sys.exit(1) - - def _extract_frames_from_video(self, inp: str, out: str): - vidcap = cv2.VideoCapture(inp) - self.args.fps = vidcap.get(cv2.CAP_PROP_FPS) - - print(f"Video: {inp} ({self.args.fps} fps)") - print(f"Extracting frames to {out}") - - success, image = vidcap.read() - count = 0 - while success: - cv2.imwrite( - os.path.join(out, f"frame-{count:04}.jpg"), - image, - ) - success, image = vidcap.read() - count += 1 - - def _generate_video_from_images(self, inp: str, out: str, file_pattern="attn-*.jpg", out_video_name="video"): - img_array = [] - attention_images_list = sorted(glob.glob(os.path.join(inp, file_pattern))) - - # Get size of the first image - with open(attention_images_list[0], "rb") as f: - img = Image.open(f) - img = img.convert("RGB") - size = (img.width, img.height) - img_array.append(cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)) - - print(f"Generating video {size} 
to {out}") - - for filename in tqdm(attention_images_list[1:]): - with open(filename, "rb") as f: - img = Image.open(f) - img = img.convert("RGB") - img_array.append(cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)) - - out = cv2.VideoWriter( - os.path.join(out, f"{out_video_name}." + self.args.video_format), - FOURCC[self.args.video_format], - self.args.fps, - size, - ) - - for i in range(len(img_array)): - out.write(img_array[i]) - out.release() - print("Done") - - def _inference(self, inp: str, out: str): - print(f"Generating attention images to {out}") - - for img_path in tqdm(sorted(glob.glob(os.path.join(inp, "*.jpg")))): - with open(img_path, "rb") as f: - img_in = Image.open(f) - img_in = img_in.convert("RGB") - - if self.args.resize is not None: - transform = pth_transforms.Compose( - [ - pth_transforms.ToTensor(), - pth_transforms.Resize(self.args.resize), - pth_transforms.Normalize( - (0.485, 0.456, 0.406), (0.229, 0.224, 0.225) - ), - ] - ) - else: - transform = pth_transforms.Compose( - [ - pth_transforms.ToTensor(), - pth_transforms.Normalize( - (0.485, 0.456, 0.406), (0.229, 0.224, 0.225) - ), - ] - ) - - img = transform(img_in) - - # make the image divisible by the patch size - w, h = ( - img.shape[1] - img.shape[1] % self.args.patch_size, - img.shape[2] - img.shape[2] % self.args.patch_size, - ) - img = img[:, :w, :h].unsqueeze(0) - w_featmap = img.shape[-2] // self.args.patch_size - h_featmap = img.shape[-1] // self.args.patch_size - - attentions = self.model.get_last_selfattention(img.to(DEVICE)) - nh = attentions.shape[1] # number of head - - # we keep only the output patch attention - attentions = attentions[0, :, 0, 1:].reshape(nh, -1) - - # we keep only a certain percentage of the mass - val, idx = torch.sort(attentions) - val /= torch.sum(val, dim=1, keepdim=True) - cumval = torch.cumsum(val, dim=1) - th_attn = cumval > (1 - self.args.threshold) - idx2 = torch.argsort(idx) - for head in range(nh): - th_attn[head] = th_attn[head][idx2[head]] - 
th_attn = th_attn.reshape(nh, w_featmap, h_featmap).float() - # interpolate - th_attn = ( - nn.functional.interpolate( - th_attn.unsqueeze(0), - scale_factor=self.args.patch_size, - mode="nearest", - )[0] - .cpu() - .numpy() - ) - - attentions = attentions.reshape(nh, w_featmap, h_featmap) - attentions = ( - nn.functional.interpolate( - attentions.unsqueeze(0), - scale_factor=self.args.patch_size, - mode="nearest", - )[0] - .cpu() - .numpy() - ) - - # save attentions heatmaps - fname = os.path.join(out, "attn-" + os.path.basename(img_path)) - plt.imsave( - fname=fname, - arr=sum( - attentions[i] * 1 / attentions.shape[0] - for i in range(attentions.shape[0]) - ), - cmap="inferno", - format="jpg", - ) - fname = os.path.join(os.path.dirname(out), "frames/reshaped-" + os.path.basename(img_path)) - img_in = img_in.resize((attentions[0].shape[1], attentions[0].shape[0])) - img_in.save(fname) - - def __load_model(self): - # build model - model = vits.__dict__[self.args.arch]( - patch_size=self.args.patch_size, num_classes=0 - ) - for p in model.parameters(): - p.requires_grad = False - model.eval() - model.to(DEVICE) - - if os.path.isfile(self.args.pretrained_weights): - state_dict = torch.load(self.args.pretrained_weights, map_location="cpu") - if ( - self.args.checkpoint_key is not None - and self.args.checkpoint_key in state_dict - ): - print( - f"Take key {self.args.checkpoint_key} in provided checkpoint dict" - ) - state_dict = state_dict[self.args.checkpoint_key] - state_dict = {k.replace("module.", ""): v for k, v in state_dict.items()} - # remove `backbone.` prefix induced by multicrop wrapper - state_dict = {k.replace("backbone.", ""): v for k, v in state_dict.items()} - msg = model.load_state_dict(state_dict, strict=False) - print( - "Pretrained weights found at {} and loaded with msg: {}".format( - self.args.pretrained_weights, msg - ) - ) - else: - print( - "Please use the `--pretrained_weights` argument to indicate the path of the checkpoint to evaluate." 
- ) - url = None - if self.args.arch == "vit_small" and self.args.patch_size == 16: - url = "dino_deitsmall16_pretrain/dino_deitsmall16_pretrain.pth" - elif self.args.arch == "vit_small" and self.args.patch_size == 8: - url = "dino_deitsmall8_300ep_pretrain/dino_deitsmall8_300ep_pretrain.pth" # model used for visualizations in our paper - elif self.args.arch == "vit_base" and self.args.patch_size == 16: - url = "dino_vitbase16_pretrain/dino_vitbase16_pretrain.pth" - elif self.args.arch == "vit_base" and self.args.patch_size == 8: - url = "dino_vitbase8_pretrain/dino_vitbase8_pretrain.pth" - if url is not None: - print( - "Since no pretrained weights have been provided, we load the reference pretrained DINO weights." - ) - state_dict = torch.hub.load_state_dict_from_url( - url="https://dl.fbaipublicfiles.com/dino/" + url - ) - model.load_state_dict(state_dict, strict=True) - else: - print( - "There is no reference weights available for this model => We use random weights." - ) - return model - - -def parse_args(): - parser = argparse.ArgumentParser("Generation self-attention video") - parser.add_argument( - "--arch", - default="vit_small", - type=str, - choices=["vit_tiny", "vit_small", "vit_base"], - help="Architecture (support only ViT atm).", - ) - parser.add_argument( - "--patch_size", default=8, type=int, help="Patch resolution of the self.model." - ) - parser.add_argument( - "--pretrained_weights", - default="", - type=str, - help="Path to pretrained weights to load.", - ) - parser.add_argument( - "--checkpoint_key", - default="teacher", - type=str, - help='Key to use in the checkpoint (example: "teacher")', - ) - parser.add_argument( - "--input_path", - required=True, - type=str, - help="""Path to a video file if you want to extract frames - or to a folder of images already extracted by yourself. 
- or to a folder of attention images.""", - ) - parser.add_argument( - "--output_path", - default="./", - type=str, - help="""Path to store a folder of frames and / or a folder of attention images. - and / or a final video. Default to current directory.""", - ) - parser.add_argument( - "--threshold", - type=float, - default=0.6, - help="""We visualize masks - obtained by thresholding the self-attention maps to keep xx percent of the mass.""", - ) - parser.add_argument( - "--resize", - default=None, - type=int, - nargs="+", - help="""Apply a resize transformation to input image(s). Use if OOM error. - Usage (single or W H): --resize 512, --resize 720 1280""", - ) - parser.add_argument( - "--video_only", - action="store_true", - help="""Use this flag if you only want to generate a video and not all attention images. - If used, --input_path must be set to the folder of attention images. Ex: ./attention/""", - ) - parser.add_argument( - "--fps", - default=30.0, - type=float, - help="FPS of input / output video. Automatically set if you extract frames from a video.", - ) - parser.add_argument( - "--video_format", - default="mp4", - type=str, - choices=["mp4", "avi"], - help="Format of generated video (mp4 or avi).", - ) - - return parser.parse_args() - - -if __name__ == "__main__": - args = parse_args() - - vg = VideoGenerator(args) diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dvdvideosoft Free Studio 503 Serial Keygen REPACK 14.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dvdvideosoft Free Studio 503 Serial Keygen REPACK 14.md deleted file mode 100644 index 4615d1e150584492acefcfee842cbad2a7416a2c..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dvdvideosoft Free Studio 503 Serial Keygen REPACK 14.md +++ /dev/null @@ -1,16 +0,0 @@ -
    -

    Dvdvideosoft Free Studio 503: A Complete Multimedia Package for DVDs

    -

    Dvdvideosoft Free Studio 503 is a software bundle that contains 49 free multimedia applications developed by Dvdvideosoft. These applications are divided into five sections: Downloaders, Uploaders, Converters, Recorders and Editors. You can use them to download and convert video from YouTube to MP4 and MP3, edit video and audio files, record video and audio from Skype, upload video and music to YouTube and Facebook, and more.

    -

    Dvdvideosoft Free Studio 503 Serial Keygen 14


    Download ✯✯✯ https://urlcod.com/2uIbdz



    -

    One of the most popular applications in Dvdvideosoft Free Studio 503 is the YouTube to MP3 Converter, which allows you to download and convert any YouTube video to a high-quality MP3 file. You can also download YouTube playlists, channels, VEVO videos, torrent videos, and premium videos with this application. Another popular application is the Free Video Editor, which lets you cut, rotate, merge, and crop video files without losing quality.
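
    Cutting "without losing quality" generally means remuxing the existing streams instead of re-encoding them. As a rough illustration of that idea (this is not Free Studio's actual code), here is a minimal Python sketch that builds an ffmpeg stream-copy command; the file names are placeholders, and actually running it assumes ffmpeg is installed and on your PATH:

    ```python
    import subprocess

    def build_lossless_cut_cmd(src: str, dst: str, start: float, duration: float):
        # "-c copy" remuxes the existing audio/video streams instead of
        # re-encoding, so the clip keeps the original quality (cut points
        # snap to the nearest keyframe).
        return [
            "ffmpeg", "-ss", str(start), "-i", src,
            "-t", str(duration), "-c", "copy", dst,
        ]

    cmd = build_lossless_cut_cmd("input.mp4", "clip.mp4", 30.0, 10.0)
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment when ffmpeg is available
    ```

    GUI editors like the one described above typically wrap the same kind of command behind a trim dialog.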

    -

    Dvdvideosoft Free Studio 503 is compatible with Windows 11, 10, 8, 7, XP SP3. You can download it for free from the official website of Dvdvideosoft. However, if you want to unlock some premium features and remove the watermark from the edited videos, you will need a serial keygen. A serial keygen is a program that generates valid serial numbers for a software product. You can use these serial numbers to activate the software and enjoy its full functionality.

    -

    There are many websites that claim to offer serial keygens for Dvdvideosoft Free Studio 503. However, most of them are fake or malicious. They may contain viruses, malware, spyware, or adware that can harm your computer or steal your personal information. Some of them may also ask you to complete surveys or pay money to access the serial keygens. These are scams that you should avoid at all costs.

    -

    The only safe and reliable way to get a serial keygen for Dvdvideosoft Free Studio 503 is to purchase it from the official website of Dvdvideosoft. The price is only $10 for a lifetime license that covers all the applications in the bundle. You will also get free updates and technical support from Dvdvideosoft. By buying a serial keygen from Dvdvideosoft, you will not only support the developers of this amazing software but also protect your computer from potential threats.

    -

    -

    If you are looking for a complete multimedia package for DVDs that offers a wide range of free applications for downloading, converting, editing, recording, and uploading video and audio files, you should try Dvdvideosoft Free Studio 503. It is easy to use, fast, and high-quality. And if you want to enjoy its premium features without any limitations or watermarks, you should buy a serial keygen from Dvdvideosoft. It is worth every penny.

    - -

    Dvdvideosoft Free Studio 503 is not only great software for DVDs but also for other devices and platforms. You can use it to convert video and audio files between different formats, or optimize them for iPhone, iPad, iPod, Windows, and Android devices. You can also make screen captures and record videos from the desktop or from Skype. You can even create your own ringtones, GIFs, and slideshows with Dvdvideosoft Free Studio 503.

    -

    Dvdvideosoft Free Studio 503 is also very user-friendly and intuitive. All the applications have a simple and clear interface that guides you through the process step by step. You can also customize the settings according to your needs and preferences. Dvdvideosoft Free Studio 503 also supports multiple languages, including English, French, German, Spanish, Italian, Russian, Chinese, Japanese, and more.

    -

    Dvdvideosoft Free Studio 503 is a software that you will love to use and recommend to your friends and family. It has everything you need to enjoy and share your multimedia files with ease and fun. Whether you want to download and convert YouTube videos to MP3, edit your own videos and audio files, record Skype calls, or upload your creations to YouTube and Facebook, Dvdvideosoft Free Studio 503 can help you do it all. And with a serial keygen from Dvdvideosoft, you can unlock its full potential and get rid of any limitations or watermarks.

    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mary J. Blige Growing Pains Full Album Zip National Cartographe Fixed.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mary J. Blige Growing Pains Full Album Zip National Cartographe Fixed.md deleted file mode 100644 index ab8b18e9723afa84753c81497ad34c47c3221442..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mary J. Blige Growing Pains Full Album Zip National Cartographe Fixed.md +++ /dev/null @@ -1,15 +0,0 @@ - -

    Mary J. Blige's Growing Pains: A Soulful Journey of Self-Discovery

    -

    Mary J. Blige is one of the most influential and successful R&B singers of all time. She has sold over 80 million records worldwide and won nine Grammy Awards. Her eighth studio album, Growing Pains, released in 2007, showcases her versatility and strength as an artist and a woman.

    -

    Mary J. Blige, Growing Pains Full Album Zip national cartographe


    Download ✔✔✔ https://urlcod.com/2uIbqa



    -

    Growing Pains is a soulful journey of self-discovery, where Blige reflects on her personal and professional growth, her struggles and triumphs, and her love and happiness. The album features collaborations with Ludacris, Usher, The-Dream, Ne-Yo, and Eve, among others. The production is diverse and polished, ranging from upbeat dance tracks to smooth ballads.

    -

    The album's lead single, "Just Fine", is a catchy anthem of positivity and empowerment, where Blige declares that she is "not gon' let nothing get in my way". The song was nominated for two Grammy Awards and became a worldwide hit. Other highlights include "Work That", a motivational song about self-confidence and beauty; "Stay Down", a heartfelt pledge of loyalty to her husband; "Roses", a candid confession of her insecurities and fears; and "Work In Progress (Growing Pains)", a humble acknowledgment of her flaws and mistakes.

    -

    Growing Pains received critical acclaim and commercial success, debuting at number two on the Billboard 200 chart and selling over three million copies worldwide. It also won a Grammy Award for Best Contemporary R&B Album. The album is widely regarded as one of Blige's best works, as well as one of the best R&B albums of the 2000s.

    -

    -

    If you are a fan of Mary J. Blige or R&B music in general, you can download the full album zip file from Apple Music or stream it on Qobuz. You can also read more about the album and its songs on AllMusic.

    - -

    One of the themes that Blige explores on Growing Pains is the importance of self-love and self-care. She sings about finding peace and joy within herself, rather than relying on external sources. She also encourages her listeners to do the same, as she believes that everyone deserves happiness and respect. On "Feel Like a Woman", she celebrates her femininity and sensuality, while on "If You Love Me?", she challenges her partner to show his true feelings and commitment.

    -

    Another theme that Blige addresses on Growing Pains is the challenge of overcoming adversity and negativity. She shares her experiences of dealing with criticism, doubt, pain, and betrayal, and how she learned to cope and heal. She also offers hope and inspiration to those who are going through similar situations, as she believes that nothing can stop them from achieving their dreams. On "Hurt Again", she expresses her resilience and courage in the face of heartbreak, while on "Come To Me (Peace)", she prays for harmony and unity in the world.

    -

    Growing Pains is a testament to Blige's artistic maturity and personal growth. It showcases her remarkable vocal skills, emotional depth, and musical diversity. It also reveals her vulnerability and honesty, as she opens up about her life and feelings. Growing Pains is not only an album, but a journey, a lesson, and a gift. It is a reflection of Blige's soul, and a reminder of her greatness.

    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Oggy And The Cockroaches New Episodes In Hindi Download VERIFIED.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Oggy And The Cockroaches New Episodes In Hindi Download VERIFIED.md deleted file mode 100644 index 0f9521ca3a265119521fa15a6af9a9a01c0c098f..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Oggy And The Cockroaches New Episodes In Hindi Download VERIFIED.md +++ /dev/null @@ -1,21 +0,0 @@ - -

    How to Download Oggy and the Cockroaches New Episodes in Hindi for Free

    -

    Oggy and the Cockroaches is a popular animated comedy series that features the adventures of a lazy cat named Oggy and three pesky cockroaches who live in his house. The show has been dubbed in many languages, including Hindi, and has fans all over the world.

    -

    If you are looking for a way to download Oggy and the Cockroaches new episodes in Hindi for free, you have come to the right place. In this article, we will show you how to find and download the latest episodes of this hilarious show using some simple steps.

    -

    oggy and the cockroaches new episodes in hindi download


    Download File ››› https://urlcod.com/2uIc70



    -

    Step 1: Find a Reliable Website

    -

    The first step is to find a reliable website that offers Oggy and the Cockroaches new episodes in Hindi for free download. There are many websites that claim to provide this service, but not all of them are safe or legal. Some of them may contain viruses, malware, or unwanted ads that can harm your device or compromise your privacy.

    -

    One of the websites that we recommend is PureToons.Com. This website has a large collection of Oggy and the Cockroaches episodes in Hindi, as well as other cartoons and anime. The website is easy to use, fast, and secure. You can also watch the episodes online if you prefer.

    -

    Step 2: Choose an Episode

    -

    The next step is to choose an episode that you want to download. You can browse through the different seasons and episodes of Oggy and the Cockroaches on PureToons.Com by clicking on the links provided on the homepage. You can also use the search bar to find a specific episode by typing its name or number.

    -

    For example, if you want to download the latest episode of Oggy and the Cockroaches: Next Generation, which is a spin-off series that features Oggy's son and his friends, you can type "Oggy Next Generation" in the search bar and click on the result that matches your query.

    -

    Step 3: Download the Episode

    -

    The final step is to download the episode that you have chosen. Once you click on an episode link, you will be redirected to a page that contains some information about the episode, such as its title, genre, running time, language, quality, and summary. You will also see a download button at the bottom of the page.

    -

    Click on the download button and wait for a few seconds until a new window opens. This window will show you some options for downloading the episode in different formats and resolutions, such as MP4 or 3GP at 240p, 360p, 480p, 720p, or 1080p. Choose the option that suits your device and internet speed and click on it.

    -

    -

    A new tab will open that will ask you to verify that you are not a robot by completing a captcha. After completing the captcha, click on "click here to continue" and wait for another few seconds until a countdown timer ends. Then click on "get link" and your download will start automatically.

    -

    Conclusion

    -

    Oggy and the Cockroaches is a fun and entertaining show that you can enjoy with your family and friends. By following these simple steps, you can download Oggy and the Cockroaches new episodes in Hindi for free from PureToons.Com without any hassle or risk.

    -

    If you liked this article, please share it with your friends who are also fans of Oggy and the Cockroaches. You can also watch some funny clips of Oggy and the Cockroaches on YouTube or Netflix. Happy watching!

    -
    -
    \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tests/README.md b/spaces/nikitaPDL2023/assignment4/detectron2/tests/README.md deleted file mode 100644 index f560384045ab4f6bc2beabef1170308fca117eb3..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/tests/README.md +++ /dev/null @@ -1,9 +0,0 @@ -## Unit Tests - -To run the unittests, do: -``` -cd detectron2 -python -m unittest discover -v -s ./tests -``` - -There are also end-to-end inference & training tests, in [dev/run_*_tests.sh](../dev). diff --git a/spaces/nomic-ai/Gustavosta_Stable-Diffusion-Prompts/README.md b/spaces/nomic-ai/Gustavosta_Stable-Diffusion-Prompts/README.md deleted file mode 100644 index 654ecd58a88135060f2bdd87f5e5e8183ee3f9e9..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/Gustavosta_Stable-Diffusion-Prompts/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Gustavosta/Stable-Diffusion-Prompts -emoji: 🗺️ -colorFrom: purple -colorTo: red -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nomic-ai/csebuetnlp_xlsum/style.css b/spaces/nomic-ai/csebuetnlp_xlsum/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/csebuetnlp_xlsum/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/nomic-ai/teknium_GPT4-LLM-Cleaned/style.css b/spaces/nomic-ai/teknium_GPT4-LLM-Cleaned/style.css deleted file 
mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/teknium_GPT4-LLM-Cleaned/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/novita-ai/Face-Stylization-Playground/README.md b/spaces/novita-ai/Face-Stylization-Playground/README.md deleted file mode 100644 index e87c50910d44d2d33efcec252401cc182a77b65a..0000000000000000000000000000000000000000 --- a/spaces/novita-ai/Face-Stylization-Playground/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Face-Stylization-Playground -emoji: ⚡️ -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.48.0 -app_file: app.py -license: mit -pinned: false -suggested_hardware: cpu-upgrade -suggested_storage: small -hf_oauth: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/onursavas/MultilingualOCR/Dockerfile b/spaces/onursavas/MultilingualOCR/Dockerfile deleted file mode 100644 index c406ad653cbf4491fd0cb2bded00e8b5faa6f98a..0000000000000000000000000000000000000000 --- a/spaces/onursavas/MultilingualOCR/Dockerfile +++ /dev/null @@ -1,32 +0,0 @@ -# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker -# you will also find guides on how best to write your Dockerfile - -FROM ubuntu:22.04 - -ARG DEBIAN_FRONTEND=noninteractive - -RUN useradd -m -u 1000 user - -RUN apt-get update && apt-get install -y \ - git \ - curl \ - software-properties-common \ - python3.10 \ - python3.10-dev \ - && rm -rf /var/lib/apt/lists/* \ - && apt-get remove -y --purge 
python3-blinker - -RUN apt-get update && apt-get install -y python3-opencv - -WORKDIR /code - -COPY --chown=user ./requirements.txt /code/requirements.txt - -RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10 \ - && python3.10 -m pip install --no-cache-dir -r /code/requirements.txt - -COPY --chown=user . . - -USER user - -CMD ["python3.10", "main.py"] diff --git a/spaces/paochoa/DeOldification/app.py b/spaces/paochoa/DeOldification/app.py deleted file mode 100644 index 2fbb9fb93cd6a73ff6b924e6649dd12225d82a42..0000000000000000000000000000000000000000 --- a/spaces/paochoa/DeOldification/app.py +++ /dev/null @@ -1,18 +0,0 @@ -import gradio as gr -import requests -import streamlit as st - -res = requests.get('https://stackoverflow.com/questions/26000336') - -# Definimos una función que se encarga de llevar a cabo las predicciones -def predict(url): - obj = {'image': url} - header = {'api-key': st.secrets["api_key"]} - res = requests.post('https://api.deepai.org/api/colorizer', data = obj, headers = header) - url_res = res.text.split(",")[1].split(":")[1] + ":" + res.text.split(",")[1].split(":")[2] - res = url_res.split("\"")[1] - return res - -# Creamos la interfaz y la lanzamos. -gr.Interface(fn=predict, inputs=gr.inputs.Textbox(lines=1), outputs=gr.outputs.Textbox(),examples=[], title="DeOldification of B&W images", description="This, with the help of the DeepAI colorization API, will color the image from the URL given. 
The result will be a URL, that is the URL where the colorized image is.").launch(share=False) - diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/session.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/session.py deleted file mode 100644 index 887dc14e796cad0257e5ccfd51ed3a21b7908821..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/session.py +++ /dev/null @@ -1,519 +0,0 @@ -"""PipSession and supporting code, containing all pip-specific -network request configuration and behavior. -""" - -import email.utils -import io -import ipaddress -import json -import logging -import mimetypes -import os -import platform -import shutil -import subprocess -import sys -import urllib.parse -import warnings -from typing import ( - TYPE_CHECKING, - Any, - Dict, - Generator, - List, - Mapping, - Optional, - Sequence, - Tuple, - Union, -) - -from pip._vendor import requests, urllib3 -from pip._vendor.cachecontrol import CacheControlAdapter as _BaseCacheControlAdapter -from pip._vendor.requests.adapters import DEFAULT_POOLBLOCK, BaseAdapter -from pip._vendor.requests.adapters import HTTPAdapter as _BaseHTTPAdapter -from pip._vendor.requests.models import PreparedRequest, Response -from pip._vendor.requests.structures import CaseInsensitiveDict -from pip._vendor.urllib3.connectionpool import ConnectionPool -from pip._vendor.urllib3.exceptions import InsecureRequestWarning - -from pip import __version__ -from pip._internal.metadata import get_default_environment -from pip._internal.models.link import Link -from pip._internal.network.auth import MultiDomainBasicAuth -from pip._internal.network.cache import SafeFileCache - -# Import ssl from compat so the initial import occurs in only one place. 
-from pip._internal.utils.compat import has_tls -from pip._internal.utils.glibc import libc_ver -from pip._internal.utils.misc import build_url_from_netloc, parse_netloc -from pip._internal.utils.urls import url_to_path - -if TYPE_CHECKING: - from ssl import SSLContext - - from pip._vendor.urllib3.poolmanager import PoolManager - - -logger = logging.getLogger(__name__) - -SecureOrigin = Tuple[str, str, Optional[Union[int, str]]] - - -# Ignore warning raised when using --trusted-host. -warnings.filterwarnings("ignore", category=InsecureRequestWarning) - - -SECURE_ORIGINS: List[SecureOrigin] = [ - # protocol, hostname, port - # Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC) - ("https", "*", "*"), - ("*", "localhost", "*"), - ("*", "127.0.0.0/8", "*"), - ("*", "::1/128", "*"), - ("file", "*", None), - # ssh is always secure. - ("ssh", "*", "*"), -] - - -# These are environment variables present when running under various -# CI systems. For each variable, some CI systems that use the variable -# are indicated. The collection was chosen so that for each of a number -# of popular systems, at least one of the environment variables is used. -# This list is used to provide some indication of and lower bound for -# CI traffic to PyPI. Thus, it is okay if the list is not comprehensive. -# For more background, see: https://github.com/pypa/pip/issues/5499 -CI_ENVIRONMENT_VARIABLES = ( - # Azure Pipelines - "BUILD_BUILDID", - # Jenkins - "BUILD_ID", - # AppVeyor, CircleCI, Codeship, Gitlab CI, Shippable, Travis CI - "CI", - # Explicit environment variable. - "PIP_IS_CI", -) - - -def looks_like_ci() -> bool: - """ - Return whether it looks like pip is running under CI. - """ - # We don't use the method of checking for a tty (e.g. using isatty()) - # because some CI systems mimic a tty (e.g. Travis CI). Thus that - # method doesn't provide definitive information in either direction. 
- return any(name in os.environ for name in CI_ENVIRONMENT_VARIABLES) - - -def user_agent() -> str: - """ - Return a string representing the user agent. - """ - data: Dict[str, Any] = { - "installer": {"name": "pip", "version": __version__}, - "python": platform.python_version(), - "implementation": { - "name": platform.python_implementation(), - }, - } - - if data["implementation"]["name"] == "CPython": - data["implementation"]["version"] = platform.python_version() - elif data["implementation"]["name"] == "PyPy": - pypy_version_info = sys.pypy_version_info # type: ignore - if pypy_version_info.releaselevel == "final": - pypy_version_info = pypy_version_info[:3] - data["implementation"]["version"] = ".".join( - [str(x) for x in pypy_version_info] - ) - elif data["implementation"]["name"] == "Jython": - # Complete Guess - data["implementation"]["version"] = platform.python_version() - elif data["implementation"]["name"] == "IronPython": - # Complete Guess - data["implementation"]["version"] = platform.python_version() - - if sys.platform.startswith("linux"): - from pip._vendor import distro - - linux_distribution = distro.name(), distro.version(), distro.codename() - distro_infos: Dict[str, Any] = dict( - filter( - lambda x: x[1], - zip(["name", "version", "id"], linux_distribution), - ) - ) - libc = dict( - filter( - lambda x: x[1], - zip(["lib", "version"], libc_ver()), - ) - ) - if libc: - distro_infos["libc"] = libc - if distro_infos: - data["distro"] = distro_infos - - if sys.platform.startswith("darwin") and platform.mac_ver()[0]: - data["distro"] = {"name": "macOS", "version": platform.mac_ver()[0]} - - if platform.system(): - data.setdefault("system", {})["name"] = platform.system() - - if platform.release(): - data.setdefault("system", {})["release"] = platform.release() - - if platform.machine(): - data["cpu"] = platform.machine() - - if has_tls(): - import _ssl as ssl - - data["openssl_version"] = ssl.OPENSSL_VERSION - - setuptools_dist = 
get_default_environment().get_distribution("setuptools") - if setuptools_dist is not None: - data["setuptools_version"] = str(setuptools_dist.version) - - if shutil.which("rustc") is not None: - # If for any reason `rustc --version` fails, silently ignore it - try: - rustc_output = subprocess.check_output( - ["rustc", "--version"], stderr=subprocess.STDOUT, timeout=0.5 - ) - except Exception: - pass - else: - if rustc_output.startswith(b"rustc "): - # The format of `rustc --version` is: - # `b'rustc 1.52.1 (9bc8c42bb 2021-05-09)\n'` - # We extract just the middle (1.52.1) part - data["rustc_version"] = rustc_output.split(b" ")[1].decode() - - # Use None rather than False so as not to give the impression that - # pip knows it is not being run under CI. Rather, it is a null or - # inconclusive result. Also, we include some value rather than no - # value to make it easier to know that the check has been run. - data["ci"] = True if looks_like_ci() else None - - user_data = os.environ.get("PIP_USER_AGENT_USER_DATA") - if user_data is not None: - data["user_data"] = user_data - - return "{data[installer][name]}/{data[installer][version]} {json}".format( - data=data, - json=json.dumps(data, separators=(",", ":"), sort_keys=True), - ) - - -class LocalFSAdapter(BaseAdapter): - def send( - self, - request: PreparedRequest, - stream: bool = False, - timeout: Optional[Union[float, Tuple[float, float]]] = None, - verify: Union[bool, str] = True, - cert: Optional[Union[str, Tuple[str, str]]] = None, - proxies: Optional[Mapping[str, str]] = None, - ) -> Response: - pathname = url_to_path(request.url) - - resp = Response() - resp.status_code = 200 - resp.url = request.url - - try: - stats = os.stat(pathname) - except OSError as exc: - # format the exception raised as a io.BytesIO object, - # to return a better error message: - resp.status_code = 404 - resp.reason = type(exc).__name__ - resp.raw = io.BytesIO(f"{resp.reason}: {exc}".encode("utf8")) - else: - modified = 
email.utils.formatdate(stats.st_mtime, usegmt=True) - content_type = mimetypes.guess_type(pathname)[0] or "text/plain" - resp.headers = CaseInsensitiveDict( - { - "Content-Type": content_type, - "Content-Length": stats.st_size, - "Last-Modified": modified, - } - ) - - resp.raw = open(pathname, "rb") - resp.close = resp.raw.close - - return resp - - def close(self) -> None: - pass - - -class _SSLContextAdapterMixin: - """Mixin to add the ``ssl_context`` constructor argument to HTTP adapters. - - The additional argument is forwarded directly to the pool manager. This allows us - to dynamically decide what SSL store to use at runtime, which is used to implement - the optional ``truststore`` backend. - """ - - def __init__( - self, - *, - ssl_context: Optional["SSLContext"] = None, - **kwargs: Any, - ) -> None: - self._ssl_context = ssl_context - super().__init__(**kwargs) - - def init_poolmanager( - self, - connections: int, - maxsize: int, - block: bool = DEFAULT_POOLBLOCK, - **pool_kwargs: Any, - ) -> "PoolManager": - if self._ssl_context is not None: - pool_kwargs.setdefault("ssl_context", self._ssl_context) - return super().init_poolmanager( # type: ignore[misc] - connections=connections, - maxsize=maxsize, - block=block, - **pool_kwargs, - ) - - -class HTTPAdapter(_SSLContextAdapterMixin, _BaseHTTPAdapter): - pass - - -class CacheControlAdapter(_SSLContextAdapterMixin, _BaseCacheControlAdapter): - pass - - -class InsecureHTTPAdapter(HTTPAdapter): - def cert_verify( - self, - conn: ConnectionPool, - url: str, - verify: Union[bool, str], - cert: Optional[Union[str, Tuple[str, str]]], - ) -> None: - super().cert_verify(conn=conn, url=url, verify=False, cert=cert) - - -class InsecureCacheControlAdapter(CacheControlAdapter): - def cert_verify( - self, - conn: ConnectionPool, - url: str, - verify: Union[bool, str], - cert: Optional[Union[str, Tuple[str, str]]], - ) -> None: - super().cert_verify(conn=conn, url=url, verify=False, cert=cert) - - -class 
PipSession(requests.Session): - timeout: Optional[int] = None - - def __init__( - self, - *args: Any, - retries: int = 0, - cache: Optional[str] = None, - trusted_hosts: Sequence[str] = (), - index_urls: Optional[List[str]] = None, - ssl_context: Optional["SSLContext"] = None, - **kwargs: Any, - ) -> None: - """ - :param trusted_hosts: Domains not to emit warnings for when not using - HTTPS. - """ - super().__init__(*args, **kwargs) - - # Namespace the attribute with "pip_" just in case to prevent - # possible conflicts with the base class. - self.pip_trusted_origins: List[Tuple[str, Optional[int]]] = [] - - # Attach our User Agent to the request - self.headers["User-Agent"] = user_agent() - - # Attach our Authentication handler to the session - self.auth = MultiDomainBasicAuth(index_urls=index_urls) - - # Create our urllib3.Retry instance which will allow us to customize - # how we handle retries. - retries = urllib3.Retry( - # Set the total number of retries that a particular request can - # have. - total=retries, - # A 503 error from PyPI typically means that the Fastly -> Origin - # connection got interrupted in some way. A 503 error in general - # is typically considered a transient error so we'll go ahead and - # retry it. - # A 500 may indicate transient error in Amazon S3 - # A 520 or 527 - may indicate transient error in CloudFlare - status_forcelist=[500, 503, 520, 527], - # Add a small amount of back off between failed requests in - # order to prevent hammering the service. - backoff_factor=0.25, - ) # type: ignore - - # Our Insecure HTTPAdapter disables HTTPS validation. It does not - # support caching so we'll use it for all http:// URLs. - # If caching is disabled, we will also use it for - # https:// hosts that we've marked as ignoring - # TLS errors for (trusted-hosts). - insecure_adapter = InsecureHTTPAdapter(max_retries=retries) - - # We want to _only_ cache responses on securely fetched origins or when - # the host is specified as trusted. 
We do this because - # we can't validate the response of an insecurely/untrusted fetched - # origin, and we don't want someone to be able to poison the cache and - # require manual eviction from the cache to fix it. - if cache: - secure_adapter = CacheControlAdapter( - cache=SafeFileCache(cache), - max_retries=retries, - ssl_context=ssl_context, - ) - self._trusted_host_adapter = InsecureCacheControlAdapter( - cache=SafeFileCache(cache), - max_retries=retries, - ) - else: - secure_adapter = HTTPAdapter(max_retries=retries, ssl_context=ssl_context) - self._trusted_host_adapter = insecure_adapter - - self.mount("https://", secure_adapter) - self.mount("http://", insecure_adapter) - - # Enable file:// urls - self.mount("file://", LocalFSAdapter()) - - for host in trusted_hosts: - self.add_trusted_host(host, suppress_logging=True) - - def update_index_urls(self, new_index_urls: List[str]) -> None: - """ - :param new_index_urls: New index urls to update the authentication - handler with. - """ - self.auth.index_urls = new_index_urls - - def add_trusted_host( - self, host: str, source: Optional[str] = None, suppress_logging: bool = False - ) -> None: - """ - :param host: It is okay to provide a host that has previously been - added. - :param source: An optional source string, for logging where the host - string came from. 
- """ - if not suppress_logging: - msg = f"adding trusted host: {host!r}" - if source is not None: - msg += f" (from {source})" - logger.info(msg) - - parsed_host, parsed_port = parse_netloc(host) - if parsed_host is None: - raise ValueError(f"Trusted host URL must include a host part: {host!r}") - if (parsed_host, parsed_port) not in self.pip_trusted_origins: - self.pip_trusted_origins.append((parsed_host, parsed_port)) - - self.mount( - build_url_from_netloc(host, scheme="http") + "/", self._trusted_host_adapter - ) - self.mount(build_url_from_netloc(host) + "/", self._trusted_host_adapter) - if not parsed_port: - self.mount( - build_url_from_netloc(host, scheme="http") + ":", - self._trusted_host_adapter, - ) - # Mount wildcard ports for the same host. - self.mount(build_url_from_netloc(host) + ":", self._trusted_host_adapter) - - def iter_secure_origins(self) -> Generator[SecureOrigin, None, None]: - yield from SECURE_ORIGINS - for host, port in self.pip_trusted_origins: - yield ("*", host, "*" if port is None else port) - - def is_secure_origin(self, location: Link) -> bool: - # Determine if this url used a secure transport mechanism - parsed = urllib.parse.urlparse(str(location)) - origin_protocol, origin_host, origin_port = ( - parsed.scheme, - parsed.hostname, - parsed.port, - ) - - # The protocol to use to see if the protocol matches. - # Don't count the repository type as part of the protocol: in - # cases such as "git+ssh", only use "ssh". (I.e., Only verify against - # the last scheme.) - origin_protocol = origin_protocol.rsplit("+", 1)[-1] - - # Determine if our origin is a secure origin by looking through our - # hardcoded list of secure origins, as well as any additional ones - # configured on this PackageFinder instance. 
- for secure_origin in self.iter_secure_origins(): - secure_protocol, secure_host, secure_port = secure_origin - if origin_protocol != secure_protocol and secure_protocol != "*": - continue - - try: - addr = ipaddress.ip_address(origin_host or "") - network = ipaddress.ip_network(secure_host) - except ValueError: - # We don't have both a valid address or a valid network, so - # we'll check this origin against hostnames. - if ( - origin_host - and origin_host.lower() != secure_host.lower() - and secure_host != "*" - ): - continue - else: - # We have a valid address and network, so see if the address - # is contained within the network. - if addr not in network: - continue - - # Check to see if the port matches. - if ( - origin_port != secure_port - and secure_port != "*" - and secure_port is not None - ): - continue - - # If we've gotten here, then this origin matches the current - # secure origin and we should return True - return True - - # If we've gotten to this point, then the origin isn't secure and we - # will not accept it as a valid location to search. We will however - # log a warning that we are ignoring it. - logger.warning( - "The repository located at %s is not a trusted or secure host and " - "is being ignored. 
If this repository is available via HTTPS we " - "recommend you use HTTPS instead, otherwise you may silence " - "this warning and allow it anyway with '--trusted-host %s'.", - origin_host, - origin_host, - ) - - return False - - def request(self, method: str, url: str, *args: Any, **kwargs: Any) -> Response: - # Allow setting a default timeout on a session - kwargs.setdefault("timeout", self.timeout) - # Allow setting a default proxies on a session - kwargs.setdefault("proxies", self.proxies) - - # Dispatch the actual request - return super().request(method, url, *args, **kwargs) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/protocol.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/protocol.py deleted file mode 100644 index 12ab23713a70dda46edd300bd975b02bfb2be031..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/protocol.py +++ /dev/null @@ -1,42 +0,0 @@ -from typing import Any, cast, Set, TYPE_CHECKING -from inspect import isclass - -if TYPE_CHECKING: - from pip._vendor.rich.console import RenderableType - -_GIBBERISH = """aihwerij235234ljsdnp34ksodfipwoe234234jlskjdf""" - - -def is_renderable(check_object: Any) -> bool: - """Check if an object may be rendered by Rich.""" - return ( - isinstance(check_object, str) - or hasattr(check_object, "__rich__") - or hasattr(check_object, "__rich_console__") - ) - - -def rich_cast(renderable: object) -> "RenderableType": - """Cast an object to a renderable by calling __rich__ if present. - - Args: - renderable (object): A potentially renderable object - - Returns: - object: The result of recursively calling __rich__. 
- """ - from pip._vendor.rich.console import RenderableType - - rich_visited_set: Set[type] = set() # Prevent potential infinite loop - while hasattr(renderable, "__rich__") and not isclass(renderable): - # Detect object which claim to have all the attributes - if hasattr(renderable, _GIBBERISH): - return repr(renderable) - cast_method = getattr(renderable, "__rich__") - renderable = cast_method() - renderable_type = type(renderable) - if renderable_type in rich_visited_set: - break - rich_visited_set.add(renderable_type) - - return cast(RenderableType, renderable) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/wait.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/wait.py deleted file mode 100644 index f9349c028360d541c56962d6a09bd9c2a00e3a37..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/wait.py +++ /dev/null @@ -1,228 +0,0 @@ -# Copyright 2016–2021 Julien Danjou -# Copyright 2016 Joshua Harlow -# Copyright 2013-2014 Ray Holder -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import abc -import random -import typing - -from pip._vendor.tenacity import _utils - -if typing.TYPE_CHECKING: - from pip._vendor.tenacity import RetryCallState - - -class wait_base(abc.ABC): - """Abstract base class for wait strategies.""" - - @abc.abstractmethod - def __call__(self, retry_state: "RetryCallState") -> float: - pass - - def __add__(self, other: "wait_base") -> "wait_combine": - return wait_combine(self, other) - - def __radd__(self, other: "wait_base") -> typing.Union["wait_combine", "wait_base"]: - # make it possible to use multiple waits with the built-in sum function - if other == 0: # type: ignore[comparison-overlap] - return self - return self.__add__(other) - - -WaitBaseT = typing.Union[wait_base, typing.Callable[["RetryCallState"], typing.Union[float, int]]] - - -class wait_fixed(wait_base): - """Wait strategy that waits a fixed amount of time between each retry.""" - - def __init__(self, wait: _utils.time_unit_type) -> None: - self.wait_fixed = _utils.to_seconds(wait) - - def __call__(self, retry_state: "RetryCallState") -> float: - return self.wait_fixed - - -class wait_none(wait_fixed): - """Wait strategy that doesn't wait at all before retrying.""" - - def __init__(self) -> None: - super().__init__(0) - - -class wait_random(wait_base): - """Wait strategy that waits a random amount of time between min/max.""" - - def __init__(self, min: _utils.time_unit_type = 0, max: _utils.time_unit_type = 1) -> None: # noqa - self.wait_random_min = _utils.to_seconds(min) - self.wait_random_max = _utils.to_seconds(max) - - def __call__(self, retry_state: "RetryCallState") -> float: - return self.wait_random_min + (random.random() * (self.wait_random_max - self.wait_random_min)) - - -class wait_combine(wait_base): - """Combine several waiting strategies.""" - - def __init__(self, *strategies: wait_base) -> None: - self.wait_funcs = strategies - - def __call__(self, retry_state: "RetryCallState") -> float: - return sum(x(retry_state=retry_state) for x 
in self.wait_funcs) - - -class wait_chain(wait_base): - """Chain two or more waiting strategies. - - If all strategies are exhausted, the very last strategy is used - thereafter. - - For example:: - - @retry(wait=wait_chain(*[wait_fixed(1) for i in range(3)] + - [wait_fixed(2) for j in range(5)] + - [wait_fixed(5) for k in range(4))) - def wait_chained(): - print("Wait 1s for 3 attempts, 2s for 5 attempts and 5s - thereafter.") - """ - - def __init__(self, *strategies: wait_base) -> None: - self.strategies = strategies - - def __call__(self, retry_state: "RetryCallState") -> float: - wait_func_no = min(max(retry_state.attempt_number, 1), len(self.strategies)) - wait_func = self.strategies[wait_func_no - 1] - return wait_func(retry_state=retry_state) - - -class wait_incrementing(wait_base): - """Wait an incremental amount of time after each attempt. - - Starting at a starting value and incrementing by a value for each attempt - (and restricting the upper limit to some maximum value). - """ - - def __init__( - self, - start: _utils.time_unit_type = 0, - increment: _utils.time_unit_type = 100, - max: _utils.time_unit_type = _utils.MAX_WAIT, # noqa - ) -> None: - self.start = _utils.to_seconds(start) - self.increment = _utils.to_seconds(increment) - self.max = _utils.to_seconds(max) - - def __call__(self, retry_state: "RetryCallState") -> float: - result = self.start + (self.increment * (retry_state.attempt_number - 1)) - return max(0, min(result, self.max)) - - -class wait_exponential(wait_base): - """Wait strategy that applies exponential backoff. - - It allows for a customized multiplier and an ability to restrict the - upper and lower limits to some maximum and minimum value. - - The intervals are fixed (i.e. there is no jitter), so this strategy is - suitable for balancing retries against latency when a required resource is - unavailable for an unknown duration, but *not* suitable for resolving - contention between multiple processes for a shared resource. 
Use - wait_random_exponential for the latter case. - """ - - def __init__( - self, - multiplier: typing.Union[int, float] = 1, - max: _utils.time_unit_type = _utils.MAX_WAIT, # noqa - exp_base: typing.Union[int, float] = 2, - min: _utils.time_unit_type = 0, # noqa - ) -> None: - self.multiplier = multiplier - self.min = _utils.to_seconds(min) - self.max = _utils.to_seconds(max) - self.exp_base = exp_base - - def __call__(self, retry_state: "RetryCallState") -> float: - try: - exp = self.exp_base ** (retry_state.attempt_number - 1) - result = self.multiplier * exp - except OverflowError: - return self.max - return max(max(0, self.min), min(result, self.max)) - - -class wait_random_exponential(wait_exponential): - """Random wait with exponentially widening window. - - An exponential backoff strategy used to mediate contention between multiple - uncoordinated processes for a shared resource in distributed systems. This - is the sense in which "exponential backoff" is meant in e.g. Ethernet - networking, and corresponds to the "Full Jitter" algorithm described in - this blog post: - - https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/ - - Each retry occurs at a random time in a geometrically expanding interval. - It allows for a custom multiplier and an ability to restrict the upper - limit of the random interval to some maximum value. - - Example:: - - wait_random_exponential(multiplier=0.5, # initial window 0.5s - max=60) # max 60s timeout - - When waiting for an unavailable resource to become available again, as - opposed to trying to resolve contention for a shared resource, the - wait_exponential strategy (which uses a fixed interval) may be preferable. - - """ - - def __call__(self, retry_state: "RetryCallState") -> float: - high = super().__call__(retry_state=retry_state) - return random.uniform(0, high) - - -class wait_exponential_jitter(wait_base): - """Wait strategy that applies exponential backoff and jitter. 
- - It allows for a customized initial wait, maximum wait and jitter. - - This implements the strategy described here: - https://cloud.google.com/storage/docs/retry-strategy - - The wait time is min(initial * 2**n + random.uniform(0, jitter), maximum) - where n is the retry count. - """ - - def __init__( - self, - initial: float = 1, - max: float = _utils.MAX_WAIT, # noqa - exp_base: float = 2, - jitter: float = 1, - ) -> None: - self.initial = initial - self.max = max - self.exp_base = exp_base - self.jitter = jitter - - def __call__(self, retry_state: "RetryCallState") -> float: - jitter = random.uniform(0, self.jitter) - try: - exp = self.exp_base ** (retry_state.attempt_number - 1) - result = self.initial * exp + jitter - except OverflowError: - result = self.max - return max(0, min(result, self.max)) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/__init__.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/__init__.py deleted file mode 100644 index 1a188c35cb6a82cfb7dfb6d8a813fed35bed0cc4..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -import sys -import importlib - -__version__, _, _ = sys.version.partition(' ') - - -try: - # Allow Debian and pkgsrc (only) to customize system - # behavior. Ref pypa/distutils#2 and pypa/distutils#16. - # This hook is deprecated and no other environments - # should use it. 
- importlib.import_module('_distutils_system_mod') -except ImportError: - pass diff --git a/spaces/power2/JoJoGan-powerhow2/e4e/utils/model_utils.py b/spaces/power2/JoJoGan-powerhow2/e4e/utils/model_utils.py deleted file mode 100644 index e51e95578f72b3218d6d832e3b604193cb68c1d7..0000000000000000000000000000000000000000 --- a/spaces/power2/JoJoGan-powerhow2/e4e/utils/model_utils.py +++ /dev/null @@ -1,35 +0,0 @@ -import torch -import argparse -from models.psp import pSp -from models.encoders.psp_encoders import Encoder4Editing - - -def setup_model(checkpoint_path, device='cuda'): - ckpt = torch.load(checkpoint_path, map_location='cpu') - opts = ckpt['opts'] - - opts['checkpoint_path'] = checkpoint_path - opts['device'] = device - opts = argparse.Namespace(**opts) - - net = pSp(opts) - net.eval() - net = net.to(device) - return net, opts - - -def load_e4e_standalone(checkpoint_path, device='cuda'): - ckpt = torch.load(checkpoint_path, map_location='cpu') - opts = argparse.Namespace(**ckpt['opts']) - e4e = Encoder4Editing(50, 'ir_se', opts) - e4e_dict = {k.replace('encoder.', ''): v for k, v in ckpt['state_dict'].items() if k.startswith('encoder.')} - e4e.load_state_dict(e4e_dict) - e4e.eval() - e4e = e4e.to(device) - latent_avg = ckpt['latent_avg'].to(device) - - def add_latent_avg(model, inputs, outputs): - return outputs + latent_avg.repeat(outputs.shape[0], 1, 1) - - e4e.register_forward_hook(add_latent_avg) - return e4e diff --git a/spaces/prerna9811/Chord/portaudio/examples/paex_pink.c b/spaces/prerna9811/Chord/portaudio/examples/paex_pink.c deleted file mode 100644 index 519f9797bf5ceafe370aeb1d028f6edd75468f97..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/examples/paex_pink.c +++ /dev/null @@ -1,280 +0,0 @@ -/** @file paex_pink.c - @ingroup examples_src - @brief Generate Pink Noise using Gardner method. - - Optimization suggested by James McCartney uses a tree - to select which random value to replace. -
    -    x x x x x x x x x x x x x x x x
    -    x   x   x   x   x   x   x   x
    -    x       x       x       x
    -     x               x
    -       x
    -
    - Tree is generated by counting trailing zeros in an increasing index. - When the index is zero, no random number is selected. - - @author Phil Burk http://www.softsynth.com -*/ -/* - * $Id$ - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. 
- */ - -#include <stdio.h> -#include <math.h> -#include "portaudio.h" - -#define PINK_MAX_RANDOM_ROWS (30) -#define PINK_RANDOM_BITS (24) -#define PINK_RANDOM_SHIFT ((sizeof(long)*8)-PINK_RANDOM_BITS) - -typedef struct -{ - long pink_Rows[PINK_MAX_RANDOM_ROWS]; - long pink_RunningSum; /* Used to optimize summing of generators. */ - int pink_Index; /* Incremented each sample. */ - int pink_IndexMask; /* Index wrapped by ANDing with this mask. */ - float pink_Scalar; /* Used to scale within range of -1.0 to +1.0 */ -} -PinkNoise; - -/* Prototypes */ -static unsigned long GenerateRandomNumber( void ); -void InitializePinkNoise( PinkNoise *pink, int numRows ); -float GeneratePinkNoise( PinkNoise *pink ); - -/************************************************************/ -/* Calculate pseudo-random 32 bit number based on linear congruential method. */ -static unsigned long GenerateRandomNumber( void ) -{ - /* Change this seed for different random sequences. */ - static unsigned long randSeed = 22222; - randSeed = (randSeed * 196314165) + 907633515; - return randSeed; -} - -/************************************************************/ -/* Setup PinkNoise structure for N rows of generators. */ -void InitializePinkNoise( PinkNoise *pink, int numRows ) -{ - int i; - long pmax; - pink->pink_Index = 0; - pink->pink_IndexMask = (1<<numRows) - 1; - /* Calculate maximum possible signed random value. Extra 1 for white noise always added. */ - pmax = (numRows + 1) * (1<<(PINK_RANDOM_BITS-1)); - pink->pink_Scalar = 1.0f / pmax; - /* Initialize rows. */ - for( i=0; i<numRows; i++ ) pink->pink_Rows[i] = 0; - pink->pink_RunningSum = 0; -} - -#define PINK_MEASURE -#ifdef PINK_MEASURE -float pinkMax = -999.0; -float pinkMin = 999.0; -#endif - -/* Generate Pink noise values between -1.0 and +1.0 */ -float GeneratePinkNoise( PinkNoise *pink ) -{ - long newRandom; - long sum; - float output; - /* Increment and mask index. */ - pink->pink_Index = (pink->pink_Index + 1) & pink->pink_IndexMask; - /* If index is zero, don't update any random values. */ - if( pink->pink_Index != 0 ) - { - /* Determine how many trailing zeros in PinkIndex. */ - /* This algorithm will hang if n==0 so test first.
*/ - int numZeros = 0; - int n = pink->pink_Index; - while( (n & 1) == 0 ) - { - n = n >> 1; - numZeros++; - } - /* Replace the indexed ROWS random value. - * Subtract and add back to RunningSum instead of adding all the random - * values together. Only one changes each time. - */ - pink->pink_RunningSum -= pink->pink_Rows[numZeros]; - newRandom = ((long)GenerateRandomNumber()) >> PINK_RANDOM_SHIFT; - pink->pink_RunningSum += newRandom; - pink->pink_Rows[numZeros] = newRandom; - } - - /* Add extra white noise value. */ - newRandom = ((long)GenerateRandomNumber()) >> PINK_RANDOM_SHIFT; - sum = pink->pink_RunningSum + newRandom; - /* Scale to range of -1.0 to 0.9999. */ - output = pink->pink_Scalar * sum; -#ifdef PINK_MEASURE - /* Check Min/Max */ - if( output > pinkMax ) pinkMax = output; - else if( output < pinkMin ) pinkMin = output; -#endif - return output; -} - -/*******************************************************************/ -#define PINK_TEST -#ifdef PINK_TEST - -/* Context for callback routine. */ -typedef struct -{ - PinkNoise leftPink; - PinkNoise rightPink; - unsigned int sampsToGo; -} -paTestData; - -/* This routine will be called by the PortAudio engine when audio is needed. -** It may be called at interrupt level on some machines so don't do anything -** that could mess up the system like calling malloc() or free(). -*/ -static int patestCallback(const void* inputBuffer, - void* outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void* userData) -{ - int finished; - int i; - int numFrames; - paTestData *data = (paTestData*)userData; - float *out = (float*)outputBuffer; - (void) inputBuffer; /* Prevent "unused variable" warnings. */ - - /* Are we almost at the end?
*/ - if( data->sampsToGo < framesPerBuffer ) - { - numFrames = data->sampsToGo; - finished = 1; - } - else - { - numFrames = framesPerBuffer; - finished = 0; - } - for( i=0; i<numFrames; i++ ) - { - *out++ = GeneratePinkNoise( &data->leftPink ); - *out++ = GeneratePinkNoise( &data->rightPink ); - } - data->sampsToGo -= numFrames; - return finished; -} - -/*******************************************************************/ -int main(void); -int main(void) -{ - PaStream* stream; - PaError err; - paTestData data; - PaStreamParameters outputParameters; - int totalSamps; - static const double SR = 44100.0; - static const int FPB = 2048; /* Frames per buffer: 46 ms buffers. */ - - /* Initialize two pink noise signals with different numbers of rows. */ - InitializePinkNoise( &data.leftPink, 12 ); - InitializePinkNoise( &data.rightPink, 16 ); - - /* Look at a few values. */ - { - int i; - float pink; - for( i=0; i<20; i++ ) - { - pink = GeneratePinkNoise( &data.leftPink ); - printf("Pink = %f\n", pink ); - } - } - - data.sampsToGo = totalSamps = (int)(60.0 * SR); /* Play a whole minute. */ - err = Pa_Initialize(); - if( err != paNoError ) goto error; - - /* Open a stereo PortAudio stream so we can hear the result. */ - outputParameters.device = Pa_GetDefaultOutputDevice(); /* Take the default output device. */ - if (outputParameters.device == paNoDevice) { - fprintf(stderr,"Error: No default output device.\n"); - goto error; - } - outputParameters.channelCount = 2; /* Stereo output, most likely supported. */ - outputParameters.hostApiSpecificStreamInfo = NULL; - outputParameters.sampleFormat = paFloat32; /* 32 bit floating point output. */ - outputParameters.suggestedLatency = - Pa_GetDeviceInfo(outputParameters.device)->defaultLowOutputLatency; - err = Pa_OpenStream(&stream, - NULL, /* No input. */ - &outputParameters, - SR, /* Sample rate. */ - FPB, /* Frames per buffer.
*/ - paClipOff, /* we won't output out of range samples so don't bother clipping them */ - patestCallback, - &data); - if( err != paNoError ) goto error; - - err = Pa_StartStream( stream ); - if( err != paNoError ) goto error; - - printf("Stereo pink noise for one minute...\n"); - - while( ( err = Pa_IsStreamActive( stream ) ) == 1 ) Pa_Sleep(100); - if( err < 0 ) goto error; - - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto error; -#ifdef PINK_MEASURE - printf("Pink min = %f, max = %f\n", pinkMin, pinkMax ); -#endif - Pa_Terminate(); - return 0; -error: - Pa_Terminate(); - fprintf( stderr, "An error occurred while using the portaudio stream\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - return 0; -} -#endif /* PINK_TEST */ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/boundsPen.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/boundsPen.py deleted file mode 100644 index d833cc89b90b38937aa0e21c26bc7e7e84f5ee7d..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/boundsPen.py +++ /dev/null @@ -1,100 +0,0 @@ -from fontTools.misc.arrayTools import updateBounds, pointInRect, unionRect -from fontTools.misc.bezierTools import calcCubicBounds, calcQuadraticBounds -from fontTools.pens.basePen import BasePen - - -__all__ = ["BoundsPen", "ControlBoundsPen"] - - -class ControlBoundsPen(BasePen): - - """Pen to calculate the "control bounds" of a shape. This is the - bounding box of all control points, so may be larger than the - actual bounding box if there are curves that don't have points - on their extremes. - - When the shape has been drawn, the bounds are available as the - ``bounds`` attribute of the pen object. It's a 4-tuple:: - - (xMin, yMin, xMax, yMax). - - If ``ignoreSinglePoints`` is True, single points are ignored. 
- """ - - def __init__(self, glyphSet, ignoreSinglePoints=False): - BasePen.__init__(self, glyphSet) - self.ignoreSinglePoints = ignoreSinglePoints - self.init() - - def init(self): - self.bounds = None - self._start = None - - def _moveTo(self, pt): - self._start = pt - if not self.ignoreSinglePoints: - self._addMoveTo() - - def _addMoveTo(self): - if self._start is None: - return - bounds = self.bounds - if bounds: - self.bounds = updateBounds(bounds, self._start) - else: - x, y = self._start - self.bounds = (x, y, x, y) - self._start = None - - def _lineTo(self, pt): - self._addMoveTo() - self.bounds = updateBounds(self.bounds, pt) - - def _curveToOne(self, bcp1, bcp2, pt): - self._addMoveTo() - bounds = self.bounds - bounds = updateBounds(bounds, bcp1) - bounds = updateBounds(bounds, bcp2) - bounds = updateBounds(bounds, pt) - self.bounds = bounds - - def _qCurveToOne(self, bcp, pt): - self._addMoveTo() - bounds = self.bounds - bounds = updateBounds(bounds, bcp) - bounds = updateBounds(bounds, pt) - self.bounds = bounds - - -class BoundsPen(ControlBoundsPen): - - """Pen to calculate the bounds of a shape. It calculates the - correct bounds even when the shape contains curves that don't - have points on their extremes. This is somewhat slower to compute - than the "control bounds". - - When the shape has been drawn, the bounds are available as the - ``bounds`` attribute of the pen object. 
It's a 4-tuple:: - - (xMin, yMin, xMax, yMax) - """ - - def _curveToOne(self, bcp1, bcp2, pt): - self._addMoveTo() - bounds = self.bounds - bounds = updateBounds(bounds, pt) - if not pointInRect(bcp1, bounds) or not pointInRect(bcp2, bounds): - bounds = unionRect( - bounds, calcCubicBounds(self._getCurrentPoint(), bcp1, bcp2, pt) - ) - self.bounds = bounds - - def _qCurveToOne(self, bcp, pt): - self._addMoveTo() - bounds = self.bounds - bounds = updateBounds(bounds, pt) - if not pointInRect(bcp, bounds): - bounds = unionRect( - bounds, calcQuadraticBounds(self._getCurrentPoint(), bcp, pt) - ) - self.bounds = bounds diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ufoLib/plistlib.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ufoLib/plistlib.py deleted file mode 100644 index 1f52f20a2b4836e39d3e292496928185dfe08534..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ufoLib/plistlib.py +++ /dev/null @@ -1,46 +0,0 @@ -"""DEPRECATED - This module is kept here only as a backward compatibility shim -for the old ufoLib.plistlib module, which was moved to fontTools.misc.plistlib. -Please use the latter instead. -""" -from fontTools.misc.plistlib import dump, dumps, load, loads -from fontTools.misc.textTools import tobytes - -# The following functions were part of the old py2-like ufoLib.plistlib API. -# They are kept only for backward compatibility.
-from fontTools.ufoLib.utils import deprecated - - -@deprecated("Use 'fontTools.misc.plistlib.load' instead") -def readPlist(path_or_file): - did_open = False - if isinstance(path_or_file, str): - path_or_file = open(path_or_file, "rb") - did_open = True - try: - return load(path_or_file, use_builtin_types=False) - finally: - if did_open: - path_or_file.close() - - -@deprecated("Use 'fontTools.misc.plistlib.dump' instead") -def writePlist(value, path_or_file): - did_open = False - if isinstance(path_or_file, str): - path_or_file = open(path_or_file, "wb") - did_open = True - try: - dump(value, path_or_file, use_builtin_types=False) - finally: - if did_open: - path_or_file.close() - - -@deprecated("Use 'fontTools.misc.plistlib.loads' instead") -def readPlistFromString(data): - return loads(tobytes(data, encoding="utf-8"), use_builtin_types=False) - - -@deprecated("Use 'fontTools.misc.plistlib.dumps' instead") -def writePlistToString(value): - return dumps(value, use_builtin_types=False) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-7b3f6002.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-7b3f6002.js deleted file mode 100644 index 28bf8ee9260046aeff578641ef6556f7f273aaab..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-7b3f6002.js +++ /dev/null @@ -1,13 +0,0 @@ -import{_ as Se}from"./index-0526d562.js";import{f as gn,B as af}from"./Button-89057c03.js";import{C as hf,a as sa}from"./Copy-1b5c0932.js";import{S as cf}from"./Index-37584f50.js";import{D as ff}from"./Download-696bd40c.js";import{B as uf}from"./BlockLabel-e3b0d1c3.js";import{E as df}from"./Empty-937365d8.js";import pf from"./Example-e03fb3b4.js";const{SvelteComponent:mf,append:gf,attr:ni,detach:yf,init:bf,insert:wf,noop:Jn,safe_not_equal:kf,svg_element:Zr}=window.__gradio__svelte__internal;function 
vf(n){let e,t;return{c(){e=Zr("svg"),t=Zr("path"),ni(t,"fill","currentColor"),ni(t,"d","m31 16l-7 7l-1.41-1.41L28.17 16l-5.58-5.59L24 9l7 7zM1 16l7-7l1.41 1.41L3.83 16l5.58 5.59L8 23l-7-7zm11.42 9.484L17.64 6l1.932.517L14.352 26z"),ni(e,"width","100%"),ni(e,"height","100%"),ni(e,"viewBox","0 0 32 32")},m(i,s){wf(i,e,s),gf(e,t)},p:Jn,i:Jn,o:Jn,d(i){i&&yf(e)}}}let ra=class extends mf{constructor(e){super(),bf(this,e,null,vf,kf,{})}};class _{constructor(){}lineAt(e){if(e<0||e>this.length)throw new RangeError(`Invalid position ${e} in document of length ${this.length}`);return this.lineInner(e,!1,1,0)}line(e){if(e<1||e>this.lines)throw new RangeError(`Invalid line number ${e} in ${this.lines}-line document`);return this.lineInner(e,!0,1,0)}replace(e,t,i){let s=[];return this.decompose(0,e,s,2),i.length&&i.decompose(0,i.length,s,3),this.decompose(t,this.length,s,1),He.from(s,this.length-(t-e)+i.length)}append(e){return this.replace(this.length,this.length,e)}slice(e,t=this.length){let i=[];return this.decompose(e,t,i,0),He.from(i,t-e)}eq(e){if(e==this)return!0;if(e.length!=this.length||e.lines!=this.lines)return!1;let t=this.scanIdentical(e,1),i=this.length-this.scanIdentical(e,-1),s=new yi(this),r=new yi(e);for(let o=t,l=t;;){if(s.next(o),r.next(o),o=0,s.lineBreak!=r.lineBreak||s.done!=r.done||s.value!=r.value)return!1;if(l+=s.value.length,s.done||l>=i)return!0}}iter(e=1){return new yi(this,e)}iterRange(e,t=this.length){return new oa(this,e,t)}iterLines(e,t){let i;if(e==null)i=this.iter();else{t==null&&(t=this.lines+1);let s=this.line(e).from;i=this.iterRange(s,Math.max(s,t==this.lines+1?this.length:t<=1?0:this.line(t-1).to))}return new la(i)}toString(){return this.sliceString(0)}toJSON(){let e=[];return this.flatten(e),e}static of(e){if(e.length==0)throw new RangeError("A document must have at least one line");return e.length==1&&!e[0]?_.empty:e.length<=32?new Q(e):He.from(Q.split(e,[]))}}class Q extends _{constructor(e,t=xf(e)){super(),this.text=e,this.length=t}get 
lines(){return this.text.length}get children(){return null}lineInner(e,t,i,s){for(let r=0;;r++){let o=this.text[r],l=s+o.length;if((t?i:l)>=e)return new Sf(s,l,i,o);s=l+1,i++}}decompose(e,t,i,s){let r=e<=0&&t>=this.length?this:new Q(Qr(this.text,e,t),Math.min(t,this.length)-Math.max(0,e));if(s&1){let o=i.pop(),l=on(r.text,o.text.slice(),0,r.length);if(l.length<=32)i.push(new Q(l,o.length+r.length));else{let a=l.length>>1;i.push(new Q(l.slice(0,a)),new Q(l.slice(a)))}}else i.push(r)}replace(e,t,i){if(!(i instanceof Q))return super.replace(e,t,i);let s=on(this.text,on(i.text,Qr(this.text,0,e)),t),r=this.length+i.length-(t-e);return s.length<=32?new Q(s,r):He.from(Q.split(s,[]),r)}sliceString(e,t=this.length,i=` -`){let s="";for(let r=0,o=0;r<=t&&oe&&o&&(s+=i),er&&(s+=l.slice(Math.max(0,e-r),t-r)),r=a+1}return s}flatten(e){for(let t of this.text)e.push(t)}scanIdentical(){return 0}static split(e,t){let i=[],s=-1;for(let r of e)i.push(r),s+=r.length+1,i.length==32&&(t.push(new Q(i,s)),i=[],s=-1);return s>-1&&t.push(new Q(i,s)),t}}class He extends _{constructor(e,t){super(),this.children=e,this.length=t,this.lines=0;for(let i of e)this.lines+=i.lines}lineInner(e,t,i,s){for(let r=0;;r++){let o=this.children[r],l=s+o.length,a=i+o.lines-1;if((t?a:l)>=e)return o.lineInner(e,t,i,s);s=l+1,i=a+1}}decompose(e,t,i,s){for(let r=0,o=0;o<=t&&r=o){let h=s&((o<=e?1:0)|(a>=t?2:0));o>=e&&a<=t&&!h?i.push(l):l.decompose(e-o,t-o,i,h)}o=a+1}}replace(e,t,i){if(i.lines=r&&t<=l){let a=o.replace(e-r,t-r,i),h=this.lines-o.lines+a.lines;if(a.lines>5-1&&a.lines>h>>5+1){let c=this.children.slice();return c[s]=a,new He(c,this.length-(t-e)+i.length)}return super.replace(r,l,a)}r=l+1}return super.replace(e,t,i)}sliceString(e,t=this.length,i=` -`){let s="";for(let r=0,o=0;re&&r&&(s+=i),eo&&(s+=l.sliceString(e-o,t-o,i)),o=a+1}return s}flatten(e){for(let t of this.children)t.flatten(e)}scanIdentical(e,t){if(!(e instanceof He))return 0;let 
i=0,[s,r,o,l]=t>0?[0,0,this.children.length,e.children.length]:[this.children.length-1,e.children.length-1,-1,-1];for(;;s+=t,r+=t){if(s==o||r==l)return i;let a=this.children[s],h=e.children[r];if(a!=h)return i+a.scanIdentical(h,t);i+=a.length+1}}static from(e,t=e.reduce((i,s)=>i+s.length+1,-1)){let i=0;for(let d of e)i+=d.lines;if(i<32){let d=[];for(let p of e)p.flatten(d);return new Q(d,t)}let s=Math.max(32,i>>5),r=s<<1,o=s>>1,l=[],a=0,h=-1,c=[];function f(d){let p;if(d.lines>r&&d instanceof He)for(let g of d.children)f(g);else d.lines>o&&(a>o||!a)?(u(),l.push(d)):d instanceof Q&&a&&(p=c[c.length-1])instanceof Q&&d.lines+p.lines<=32?(a+=d.lines,h+=d.length+1,c[c.length-1]=new Q(p.text.concat(d.text),p.length+1+d.length)):(a+d.lines>s&&u(),a+=d.lines,h+=d.length+1,c.push(d))}function u(){a!=0&&(l.push(c.length==1?c[0]:He.from(c,h)),h=-1,a=c.length=0)}for(let d of e)f(d);return u(),l.length==1?l[0]:new He(l,t)}}_.empty=new Q([""],0);function xf(n){let e=-1;for(let t of n)e+=t.length+1;return e}function on(n,e,t=0,i=1e9){for(let s=0,r=0,o=!0;r=t&&(a>i&&(l=l.slice(0,i-s)),s0?1:(e instanceof Q?e.text.length:e.children.length)<<1]}nextInner(e,t){for(this.done=this.lineBreak=!1;;){let i=this.nodes.length-1,s=this.nodes[i],r=this.offsets[i],o=r>>1,l=s instanceof Q?s.text.length:s.children.length;if(o==(t>0?l:0)){if(i==0)return this.done=!0,this.value="",this;t>0&&this.offsets[i-1]++,this.nodes.pop(),this.offsets.pop()}else if((r&1)==(t>0?0:1)){if(this.offsets[i]+=t,e==0)return this.lineBreak=!0,this.value=` -`,this;e--}else if(s instanceof Q){let a=s.text[o+(t<0?-1:0)];if(this.offsets[i]+=t,a.length>Math.max(0,e))return this.value=e==0?a:t>0?a.slice(e):a.slice(0,a.length-e),this;e-=a.length}else{let a=s.children[o+(t<0?-1:0)];e>a.length?(e-=a.length,this.offsets[i]+=t):(t<0&&this.offsets[i]--,this.nodes.push(a),this.offsets.push(t>0?1:(a instanceof Q?a.text.length:a.children.length)<<1))}}}next(e=0){return 
e<0&&(this.nextInner(-e,-this.dir),e=this.value.length),this.nextInner(e,this.dir)}}class oa{constructor(e,t,i){this.value="",this.done=!1,this.cursor=new yi(e,t>i?-1:1),this.pos=t>i?e.length:0,this.from=Math.min(t,i),this.to=Math.max(t,i)}nextInner(e,t){if(t<0?this.pos<=this.from:this.pos>=this.to)return this.value="",this.done=!0,this;e+=Math.max(0,t<0?this.pos-this.to:this.from-this.pos);let i=t<0?this.pos-this.from:this.to-this.pos;e>i&&(e=i),i-=e;let{value:s}=this.cursor.next(e);return this.pos+=(s.length+e)*t,this.value=s.length<=i?s:t<0?s.slice(s.length-i):s.slice(0,i),this.done=!this.value,this}next(e=0){return e<0?e=Math.max(e,this.from-this.pos):e>0&&(e=Math.min(e,this.to-this.pos)),this.nextInner(e,this.cursor.dir)}get lineBreak(){return this.cursor.lineBreak&&this.value!=""}}class la{constructor(e){this.inner=e,this.afterBreak=!0,this.value="",this.done=!1}next(e=0){let{done:t,lineBreak:i,value:s}=this.inner.next(e);return t?(this.done=!0,this.value=""):i?this.afterBreak?this.value="":(this.afterBreak=!0,this.next()):(this.value=s,this.afterBreak=!1),this}get lineBreak(){return!1}}typeof Symbol<"u"&&(_.prototype[Symbol.iterator]=function(){return this.iter()},yi.prototype[Symbol.iterator]=oa.prototype[Symbol.iterator]=la.prototype[Symbol.iterator]=function(){return this});class Sf{constructor(e,t,i,s){this.from=e,this.to=t,this.number=i,this.text=s}get length(){return this.to-this.from}}let 
qt="lc,34,7n,7,7b,19,,,,2,,2,,,20,b,1c,l,g,,2t,7,2,6,2,2,,4,z,,u,r,2j,b,1m,9,9,,o,4,,9,,3,,5,17,3,3b,f,,w,1j,,,,4,8,4,,3,7,a,2,t,,1m,,,,2,4,8,,9,,a,2,q,,2,2,1l,,4,2,4,2,2,3,3,,u,2,3,,b,2,1l,,4,5,,2,4,,k,2,m,6,,,1m,,,2,,4,8,,7,3,a,2,u,,1n,,,,c,,9,,14,,3,,1l,3,5,3,,4,7,2,b,2,t,,1m,,2,,2,,3,,5,2,7,2,b,2,s,2,1l,2,,,2,4,8,,9,,a,2,t,,20,,4,,2,3,,,8,,29,,2,7,c,8,2q,,2,9,b,6,22,2,r,,,,,,1j,e,,5,,2,5,b,,10,9,,2u,4,,6,,2,2,2,p,2,4,3,g,4,d,,2,2,6,,f,,jj,3,qa,3,t,3,t,2,u,2,1s,2,,7,8,,2,b,9,,19,3,3b,2,y,,3a,3,4,2,9,,6,3,63,2,2,,1m,,,7,,,,,2,8,6,a,2,,1c,h,1r,4,1c,7,,,5,,14,9,c,2,w,4,2,2,,3,1k,,,2,3,,,3,1m,8,2,2,48,3,,d,,7,4,,6,,3,2,5i,1m,,5,ek,,5f,x,2da,3,3x,,2o,w,fe,6,2x,2,n9w,4,,a,w,2,28,2,7k,,3,,4,,p,2,5,,47,2,q,i,d,,12,8,p,b,1a,3,1c,,2,4,2,2,13,,1v,6,2,2,2,2,c,,8,,1b,,1f,,,3,2,2,5,2,,,16,2,8,,6m,,2,,4,,fn4,,kh,g,g,g,a6,2,gt,,6a,,45,5,1ae,3,,2,5,4,14,3,4,,4l,2,fx,4,ar,2,49,b,4w,,1i,f,1k,3,1d,4,2,2,1x,3,10,5,,8,1q,,c,2,1g,9,a,4,2,,2n,3,2,,,2,6,,4g,,3,8,l,2,1l,2,,,,,m,,e,7,3,5,5f,8,2,3,,,n,,29,,2,6,,,2,,,2,,2,6j,,2,4,6,2,,2,r,2,2d,8,2,,,2,2y,,,,2,6,,,2t,3,2,4,,5,77,9,,2,6t,,a,2,,,4,,40,4,2,2,4,,w,a,14,6,2,4,8,,9,6,2,3,1a,d,,2,ba,7,,6,,,2a,m,2,7,,2,,2,3e,6,3,,,2,,7,,,20,2,3,,,,9n,2,f0b,5,1n,7,t4,,1r,4,29,,f5k,2,43q,,,3,4,5,8,8,2,7,u,4,44,3,1iz,1j,4,1e,8,,e,,m,5,,f,11s,7,,h,2,7,,2,,5,79,7,c5,4,15s,7,31,7,240,5,gx7k,2o,3k,6o".split(",").map(n=>n?parseInt(n,36):1);for(let n=1;nn)return qt[e-1]<=n;return!1}function eo(n){return n>=127462&&n<=127487}const to=8205;function ve(n,e,t=!0,i=!0){return(t?aa:Af)(n,e,i)}function aa(n,e,t){if(e==n.length)return e;e&&ha(n.charCodeAt(e))&&ca(n.charCodeAt(e-1))&&e--;let i=ce(n,e);for(e+=Ce(i);e=0&&eo(ce(n,o));)r++,o-=2;if(r%2==0)break;e+=2}else break}return e}function Af(n,e,t){for(;e>0;){let i=aa(n,e-2,t);if(i=56320&&n<57344}function ca(n){return n>=55296&&n<56320}function ce(n,e){let t=n.charCodeAt(e);if(!ca(t)||e+1==n.length)return t;let i=n.charCodeAt(e+1);return ha(i)?(t-55296<<10)+(i-56320)+65536:t}function fa(n){return 
n<=65535?String.fromCharCode(n):(n-=65536,String.fromCharCode((n>>10)+55296,(n&1023)+56320))}function Ce(n){return n<65536?1:2}const Ds=/\r\n?|\n/;var le=function(n){return n[n.Simple=0]="Simple",n[n.TrackDel=1]="TrackDel",n[n.TrackBefore=2]="TrackBefore",n[n.TrackAfter=3]="TrackAfter",n}(le||(le={}));class je{constructor(e){this.sections=e}get length(){let e=0;for(let t=0;te)return r+(e-s);r+=l}else{if(i!=le.Simple&&h>=e&&(i==le.TrackDel&&se||i==le.TrackBefore&&se))return null;if(h>e||h==e&&t<0&&!l)return e==s||t<0?r:r+a;r+=a}s=h}if(e>s)throw new RangeError(`Position ${e} is out of range for changeset of length ${s}`);return r}touchesRange(e,t=e){for(let i=0,s=0;i=0&&s<=t&&l>=e)return st?"cover":!0;s=l}return!1}toString(){let e="";for(let t=0;t=0?":"+s:"")}return e}toJSON(){return this.sections}static fromJSON(e){if(!Array.isArray(e)||e.length%2||e.some(t=>typeof t!="number"))throw new RangeError("Invalid JSON representation of ChangeDesc");return new je(e)}static create(e){return new je(e)}}class te extends je{constructor(e,t){super(e),this.inserted=t}apply(e){if(this.length!=e.length)throw new RangeError("Applying change set to a document with the wrong length");return Ts(this,(t,i,s,r,o)=>e=e.replace(s,s+(i-t),o),!1),e}mapDesc(e,t=!1){return Os(this,e,t,!0)}invert(e){let t=this.sections.slice(),i=[];for(let s=0,r=0;s=0){t[s]=l,t[s+1]=o;let a=s>>1;for(;i.length0&&nt(i,t,r.text),r.forward(c),l+=c}let h=e[o++];for(;l>1].toJSON()))}return e}static of(e,t,i){let s=[],r=[],o=0,l=null;function a(c=!1){if(!c&&!s.length)return;ou||f<0||u>t)throw new RangeError(`Invalid change range ${f} to ${u} (in doc of length ${t})`);let p=d?typeof d=="string"?_.of(d.split(i||Ds)):d:_.empty,g=p.length;if(f==u&&g==0)return;fo&&he(s,f-o,-1),he(s,u-f,g),nt(r,s,p),o=u}}return h(e),a(!l),l}static empty(e){return new te(e?[e,-1]:[],[])}static fromJSON(e){if(!Array.isArray(e))throw new RangeError("Invalid JSON representation of ChangeSet");let t=[],i=[];for(let s=0;sl&&typeof 
o!="string"))throw new RangeError("Invalid JSON representation of ChangeSet");if(r.length==1)t.push(r[0],0);else{for(;i.length=0&&t<=0&&t==n[s+1]?n[s]+=e:e==0&&n[s]==0?n[s+1]+=t:i?(n[s]+=e,n[s+1]+=t):n.push(e,t)}function nt(n,e,t){if(t.length==0)return;let i=e.length-2>>1;if(i>1])),!(t||o==n.sections.length||n.sections[o+1]<0);)l=n.sections[o++],a=n.sections[o++];e(s,h,r,c,f),s=h,r=c}}}function Os(n,e,t,i=!1){let s=[],r=i?[]:null,o=new xi(n),l=new xi(e);for(let a=-1;;)if(o.ins==-1&&l.ins==-1){let h=Math.min(o.len,l.len);he(s,h,-1),o.forward(h),l.forward(h)}else if(l.ins>=0&&(o.ins<0||a==o.i||o.off==0&&(l.len=0&&a=0){let h=0,c=o.len;for(;c;)if(l.ins==-1){let f=Math.min(c,l.len);h+=f,c-=f,l.forward(f)}else if(l.ins==0&&l.lena||o.ins>=0&&o.len>a)&&(l||i.length>h),r.forward2(a),o.forward(a)}}}}class xi{constructor(e){this.set=e,this.i=0,this.next()}next(){let{sections:e}=this.set;this.i>1;return t>=e.length?_.empty:e[t]}textBit(e){let{inserted:t}=this.set,i=this.i-2>>1;return i>=t.length&&!e?_.empty:t[i].slice(this.off,e==null?void 0:this.off+e)}forward(e){e==this.len?this.next():(this.len-=e,this.off+=e)}forward2(e){this.ins==-1?this.forward(e):e==this.ins?this.next():(this.ins-=e,this.off+=e)}}class bt{constructor(e,t,i){this.from=e,this.to=t,this.flags=i}get anchor(){return this.flags&16?this.to:this.from}get head(){return this.flags&16?this.from:this.to}get empty(){return this.from==this.to}get assoc(){return this.flags&4?-1:this.flags&8?1:0}get bidiLevel(){let e=this.flags&3;return e==3?null:e}get goalColumn(){let e=this.flags>>5;return e==33554431?void 0:e}map(e,t=-1){let i,s;return this.empty?i=s=e.mapPos(this.from,t):(i=e.mapPos(this.from,1),s=e.mapPos(this.to,-1)),i==this.from&&s==this.to?this:new bt(i,s,this.flags)}extend(e,t=e){if(e<=this.anchor&&t>=this.anchor)return w.range(e,t);let i=Math.abs(e-this.anchor)>Math.abs(t-this.anchor)?e:t;return w.range(this.anchor,i)}eq(e){return 
this.anchor==e.anchor&&this.head==e.head}toJSON(){return{anchor:this.anchor,head:this.head}}static fromJSON(e){if(!e||typeof e.anchor!="number"||typeof e.head!="number")throw new RangeError("Invalid JSON representation for SelectionRange");return w.range(e.anchor,e.head)}static create(e,t,i){return new bt(e,t,i)}}class w{constructor(e,t){this.ranges=e,this.mainIndex=t}map(e,t=-1){return e.empty?this:w.create(this.ranges.map(i=>i.map(e,t)),this.mainIndex)}eq(e){if(this.ranges.length!=e.ranges.length||this.mainIndex!=e.mainIndex)return!1;for(let t=0;te.toJSON()),main:this.mainIndex}}static fromJSON(e){if(!e||!Array.isArray(e.ranges)||typeof e.main!="number"||e.main>=e.ranges.length)throw new RangeError("Invalid JSON representation for EditorSelection");return new w(e.ranges.map(t=>bt.fromJSON(t)),e.main)}static single(e,t=e){return new w([w.range(e,t)],0)}static create(e,t=0){if(e.length==0)throw new RangeError("A selection needs at least one range");for(let i=0,s=0;se?4:0))}static normalized(e,t=0){let i=e[t];e.sort((s,r)=>s.from-r.from),t=e.indexOf(i);for(let s=1;sr.head?w.range(a,l):w.range(l,a))}}return new w(e,t)}}function da(n,e){for(let t of n.ranges)if(t.to>e)throw new RangeError("Selection points outside of document")}let kr=0;class D{constructor(e,t,i,s,r){this.combine=e,this.compareInput=t,this.compare=i,this.isStatic=s,this.id=kr++,this.default=e([]),this.extensions=typeof r=="function"?r(this):r}static define(e={}){return new D(e.combine||(t=>t),e.compareInput||((t,i)=>t===i),e.compare||(e.combine?(t,i)=>t===i:vr),!!e.static,e.enables)}of(e){return new ln([],this,0,e)}compute(e,t){if(this.isStatic)throw new Error("Can't compute a static facet");return new ln(e,this,1,t)}computeN(e,t){if(this.isStatic)throw new Error("Can't compute a static facet");return new ln(e,this,2,t)}from(e,t){return t||(t=i=>i),this.compute([e],i=>t(i.field(e)))}}function vr(n,e){return n==e||n.length==e.length&&n.every((t,i)=>t===e[i])}class 
ln{constructor(e,t,i,s){this.dependencies=e,this.facet=t,this.type=i,this.value=s,this.id=kr++}dynamicSlot(e){var t;let i=this.value,s=this.facet.compareInput,r=this.id,o=e[r]>>1,l=this.type==2,a=!1,h=!1,c=[];for(let f of this.dependencies)f=="doc"?a=!0:f=="selection"?h=!0:((t=e[f.id])!==null&&t!==void 0?t:1)&1||c.push(e[f.id]);return{create(f){return f.values[o]=i(f),1},update(f,u){if(a&&u.docChanged||h&&(u.docChanged||u.selection)||Bs(f,c)){let d=i(f);if(l?!io(d,f.values[o],s):!s(d,f.values[o]))return f.values[o]=d,1}return 0},reconfigure:(f,u)=>{let d=i(f),p=u.config.address[r];if(p!=null){let g=bn(u,p);if(this.dependencies.every(y=>y instanceof D?u.facet(y)===f.facet(y):y instanceof be?u.field(y,!1)==f.field(y,!1):!0)||(l?io(d,g,s):s(d,g)))return f.values[o]=g,0}return f.values[o]=d,1}}}}function io(n,e,t){if(n.length!=e.length)return!1;for(let i=0;in[a.id]),s=t.map(a=>a.type),r=i.filter(a=>!(a&1)),o=n[e.id]>>1;function l(a){let h=[];for(let c=0;ci===s),e);return e.provide&&(t.provides=e.provide(t)),t}create(e){let t=e.facet(no).find(i=>i.field==this);return(t?.create||this.createF)(e)}slot(e){let t=e[this.id]>>1;return{create:i=>(i.values[t]=this.create(i),1),update:(i,s)=>{let r=i.values[t],o=this.updateF(r,s);return this.compareF(r,o)?0:(i.values[t]=o,1)},reconfigure:(i,s)=>s.config.address[this.id]!=null?(i.values[t]=s.field(this),0):(i.values[t]=this.create(i),1)}}init(e){return[this,no.of({field:this,create:e})]}get extension(){return this}}const gt={lowest:4,low:3,default:2,high:1,highest:0};function si(n){return e=>new pa(e,n)}const Ri={highest:si(gt.highest),high:si(gt.high),default:si(gt.default),low:si(gt.low),lowest:si(gt.lowest)};class pa{constructor(e,t){this.inner=e,this.prec=t}}class Ln{of(e){return new Ps(this,e)}reconfigure(e){return Ln.reconfigure.of({compartment:this,extension:e})}get(e){return e.config.compartments.get(this)}}class Ps{constructor(e,t){this.compartment=e,this.inner=t}}class 
yn{constructor(e,t,i,s,r,o){for(this.base=e,this.compartments=t,this.dynamicSlots=i,this.address=s,this.staticValues=r,this.facets=o,this.statusTemplate=[];this.statusTemplate.length>1]}static resolve(e,t,i){let s=[],r=Object.create(null),o=new Map;for(let u of Df(e,t,o))u instanceof be?s.push(u):(r[u.facet.id]||(r[u.facet.id]=[])).push(u);let l=Object.create(null),a=[],h=[];for(let u of s)l[u.id]=h.length<<1,h.push(d=>u.slot(d));let c=i?.config.facets;for(let u in r){let d=r[u],p=d[0].facet,g=c&&c[u]||[];if(d.every(y=>y.type==0))if(l[p.id]=a.length<<1|1,vr(g,d))a.push(i.facet(p));else{let y=p.combine(d.map(b=>b.value));a.push(i&&p.compare(y,i.facet(p))?i.facet(p):y)}else{for(let y of d)y.type==0?(l[y.id]=a.length<<1|1,a.push(y.value)):(l[y.id]=h.length<<1,h.push(b=>y.dynamicSlot(b)));l[p.id]=h.length<<1,h.push(y=>Mf(y,p,d))}}let f=h.map(u=>u(l));return new yn(e,o,f,l,a,r)}}function Df(n,e,t){let i=[[],[],[],[],[]],s=new Map;function r(o,l){let a=s.get(o);if(a!=null){if(a<=l)return;let h=i[a].indexOf(o);h>-1&&i[a].splice(h,1),o instanceof Ps&&t.delete(o.compartment)}if(s.set(o,l),Array.isArray(o))for(let h of o)r(h,l);else if(o instanceof Ps){if(t.has(o.compartment))throw new RangeError("Duplicate use of compartment in extensions");let h=e.get(o.compartment)||o.inner;t.set(o.compartment,h),r(h,l)}else if(o instanceof pa)r(o.inner,o.prec);else if(o instanceof be)i[l].push(o),o.provides&&r(o.provides,l);else if(o instanceof ln)i[l].push(o),o.facet.extensions&&r(o.facet.extensions,gt.default);else{let h=o.extension;if(!h)throw new Error(`Unrecognized extension value in extension set (${o}). 
This sometimes happens because multiple instances of @codemirror/state are loaded, breaking instanceof checks.`);r(h,l)}}return r(n,gt.default),i.reduce((o,l)=>o.concat(l))}function bi(n,e){if(e&1)return 2;let t=e>>1,i=n.status[t];if(i==4)throw new Error("Cyclic dependency between fields and/or facets");if(i&2)return i;n.status[t]=4;let s=n.computeSlot(n,n.config.dynamicSlots[t]);return n.status[t]=2|s}function bn(n,e){return e&1?n.config.staticValues[e>>1]:n.values[e>>1]}const ma=D.define(),ga=D.define({combine:n=>n.some(e=>e),static:!0}),ya=D.define({combine:n=>n.length?n[0]:void 0,static:!0}),ba=D.define(),wa=D.define(),ka=D.define(),va=D.define({combine:n=>n.length?n[0]:!1});class Et{constructor(e,t){this.type=e,this.value=t}static define(){return new Tf}}class Tf{of(e){return new Et(this,e)}}class Of{constructor(e){this.map=e}of(e){return new R(this,e)}}class R{constructor(e,t){this.type=e,this.value=t}map(e){let t=this.type.map(this.value,e);return t===void 0?void 0:t==this.value?this:new R(this.type,t)}is(e){return this.type==e}static define(e={}){return new Of(e.map||(t=>t))}static mapEffects(e,t){if(!e.length)return e;let i=[];for(let s of e){let r=s.map(t);r&&i.push(r)}return i}}R.reconfigure=R.define();R.appendConfig=R.define();class ie{constructor(e,t,i,s,r,o){this.startState=e,this.changes=t,this.selection=i,this.effects=s,this.annotations=r,this.scrollIntoView=o,this._doc=null,this._state=null,i&&da(i,t.newLength),r.some(l=>l.type==ie.time)||(this.annotations=r.concat(ie.time.of(Date.now())))}static create(e,t,i,s,r,o){return new ie(e,t,i,s,r,o)}get newDoc(){return this._doc||(this._doc=this.changes.apply(this.startState.doc))}get newSelection(){return this.selection||this.startState.selection.map(this.changes)}get state(){return this._state||this.startState.applyTransaction(this),this._state}annotation(e){for(let t of this.annotations)if(t.type==e)return t.value}get docChanged(){return!this.changes.empty}get reconfigured(){return 
this.startState.config!=this.state.config}isUserEvent(e){let t=this.annotation(ie.userEvent);return!!(t&&(t==e||t.length>e.length&&t.slice(0,e.length)==e&&t[e.length]=="."))}}ie.time=Et.define();ie.userEvent=Et.define();ie.addToHistory=Et.define();ie.remote=Et.define();function Bf(n,e){let t=[];for(let i=0,s=0;;){let r,o;if(i=n[i]))r=n[i++],o=n[i++];else if(s=0;s--){let r=i[s](n);r instanceof ie?n=r:Array.isArray(r)&&r.length==1&&r[0]instanceof ie?n=r[0]:n=Sa(e,jt(r),!1)}return n}function Ef(n){let e=n.startState,t=e.facet(ka),i=n;for(let s=t.length-1;s>=0;s--){let r=t[s](n);r&&Object.keys(r).length&&(i=xa(i,Es(e,r,n.changes.newLength),!0))}return i==n?n:ie.create(e,n.changes,n.selection,i.effects,i.annotations,i.scrollIntoView)}const Rf=[];function jt(n){return n==null?Rf:Array.isArray(n)?n:[n]}var Ae=function(n){return n[n.Word=0]="Word",n[n.Space=1]="Space",n[n.Other=2]="Other",n}(Ae||(Ae={}));const Lf=/[\u00df\u0587\u0590-\u05f4\u0600-\u06ff\u3040-\u309f\u30a0-\u30ff\u3400-\u4db5\u4e00-\u9fcc\uac00-\ud7af]/;let Rs;try{Rs=new RegExp("[\\p{Alphabetic}\\p{Number}_]","u")}catch{}function If(n){if(Rs)return Rs.test(n);for(let e=0;e"€"&&(t.toUpperCase()!=t.toLowerCase()||Lf.test(t)))return!0}return!1}function Nf(n){return e=>{if(!/\S/.test(e))return Ae.Space;if(If(e))return Ae.Word;for(let t=0;t-1)return Ae.Word;return Ae.Other}}class N{constructor(e,t,i,s,r,o){this.config=e,this.doc=t,this.selection=i,this.values=s,this.status=e.statusTemplate.slice(),this.computeSlot=r,o&&(o._state=this);for(let l=0;ls.set(a,l)),t=null),s.set(o.value.compartment,o.value.extension)):o.is(R.reconfigure)?(t=null,i=o.value):o.is(R.appendConfig)&&(t=null,i=jt(i).concat(o.value));let r;t?r=e.startState.values.slice():(t=yn.resolve(i,s,this),r=new N(t,this.doc,this.selection,t.dynamicSlots.map(()=>null),(l,a)=>a.reconfigure(l,this),null).values),new N(t,e.newDoc,e.newSelection,r,(o,l)=>l.update(o,e),e)}replaceSelection(e){return typeof 
e=="string"&&(e=this.toText(e)),this.changeByRange(t=>({changes:{from:t.from,to:t.to,insert:e},range:w.cursor(t.from+e.length)}))}changeByRange(e){let t=this.selection,i=e(t.ranges[0]),s=this.changes(i.changes),r=[i.range],o=jt(i.effects);for(let l=1;lo.spec.fromJSON(l,a)))}}return N.create({doc:e.doc,selection:w.fromJSON(e.selection),extensions:t.extensions?s.concat([t.extensions]):s})}static create(e={}){let t=yn.resolve(e.extensions||[],new Map),i=e.doc instanceof _?e.doc:_.of((e.doc||"").split(t.staticFacet(N.lineSeparator)||Ds)),s=e.selection?e.selection instanceof w?e.selection:w.single(e.selection.anchor,e.selection.head):w.single(0);return da(s,i.length),t.staticFacet(ga)||(s=s.asSingle()),new N(t,i,s,t.dynamicSlots.map(()=>null),(r,o)=>o.create(r),null)}get tabSize(){return this.facet(N.tabSize)}get lineBreak(){return this.facet(N.lineSeparator)||` -`}get readOnly(){return this.facet(va)}phrase(e,...t){for(let i of this.facet(N.phrases))if(Object.prototype.hasOwnProperty.call(i,e)){e=i[e];break}return t.length&&(e=e.replace(/\$(\$|\d*)/g,(i,s)=>{if(s=="$")return"$";let r=+(s||1);return!r||r>t.length?i:t[r-1]})),e}languageDataAt(e,t,i=-1){let s=[];for(let r of this.facet(ma))for(let o of r(this,t,i))Object.prototype.hasOwnProperty.call(o,e)&&s.push(o[e]);return s}charCategorizer(e){return Nf(this.languageDataAt("wordChars",e).join(""))}wordAt(e){let{text:t,from:i,length:s}=this.doc.lineAt(e),r=this.charCategorizer(e),o=e-i,l=e-i;for(;o>0;){let a=ve(t,o,!1);if(r(t.slice(a,o))!=Ae.Word)break;o=a}for(;ln.length?n[0]:4});N.lineSeparator=ya;N.readOnly=va;N.phrases=D.define({compare(n,e){let t=Object.keys(n),i=Object.keys(e);return t.length==i.length&&t.every(s=>n[s]==e[s])}});N.languageData=ma;N.changeFilter=ba;N.transactionFilter=wa;N.transactionExtender=ka;Ln.reconfigure=R.define();function Rt(n,e,t={}){let i={};for(let s of n)for(let r of Object.keys(s)){let o=s[r],l=i[r];if(l===void 0)i[r]=o;else if(!(l===o||o===void 
0))if(Object.hasOwnProperty.call(t,r))i[r]=t[r](l,o);else throw new Error("Config merge conflict for field "+r)}for(let s in e)i[s]===void 0&&(i[s]=e[s]);return i}class St{eq(e){return this==e}range(e,t=e){return Ls.create(e,t,this)}}St.prototype.startSide=St.prototype.endSide=0;St.prototype.point=!1;St.prototype.mapMode=le.TrackDel;let Ls=class Ca{constructor(e,t,i){this.from=e,this.to=t,this.value=i}static create(e,t,i){return new Ca(e,t,i)}};function Is(n,e){return n.from-e.from||n.value.startSide-e.value.startSide}class xr{constructor(e,t,i,s){this.from=e,this.to=t,this.value=i,this.maxPoint=s}get length(){return this.to[this.to.length-1]}findIndex(e,t,i,s=0){let r=i?this.to:this.from;for(let o=s,l=r.length;;){if(o==l)return o;let a=o+l>>1,h=r[a]-e||(i?this.value[a].endSide:this.value[a].startSide)-t;if(a==o)return h>=0?o:l;h>=0?l=a:o=a+1}}between(e,t,i,s){for(let r=this.findIndex(t,-1e9,!0),o=this.findIndex(i,1e9,!1,r);rd||u==d&&h.startSide>0&&h.endSide<=0)continue;(d-u||h.endSide-h.startSide)<0||(o<0&&(o=u),h.point&&(l=Math.max(l,d-u)),i.push(h),s.push(u-o),r.push(d-o))}return{mapped:i.length?new xr(s,r,i,l):null,pos:o}}}class F{constructor(e,t,i,s){this.chunkPos=e,this.chunk=t,this.nextLayer=i,this.maxPoint=s}static create(e,t,i,s){return new F(e,t,i,s)}get length(){let e=this.chunk.length-1;return e<0?0:Math.max(this.chunkEnd(e),this.nextLayer.length)}get size(){if(this.isEmpty)return 0;let e=this.nextLayer.size;for(let t of this.chunk)e+=t.value.length;return e}chunkEnd(e){return this.chunkPos[e]+this.chunk[e].length}update(e){let{add:t=[],sort:i=!1,filterFrom:s=0,filterTo:r=this.length}=e,o=e.filter;if(t.length==0&&!o)return this;if(i&&(t=t.slice().sort(Is)),this.isEmpty)return t.length?F.of(t):this;let l=new Aa(this,null,-1).goto(0),a=0,h=[],c=new Ct;for(;l.value||a=0){let f=t[a++];c.addInner(f.from,f.to,f.value)||h.push(f)}else 
l.rangeIndex==1&&l.chunkIndexthis.chunkEnd(l.chunkIndex)||rl.to||r=r&&e<=r+o.length&&o.between(r,e-r,t-r,i)===!1)return}this.nextLayer.between(e,t,i)}}iter(e=0){return Si.from([this]).goto(e)}get isEmpty(){return this.nextLayer==this}static iter(e,t=0){return Si.from(e).goto(t)}static compare(e,t,i,s,r=-1){let o=e.filter(f=>f.maxPoint>0||!f.isEmpty&&f.maxPoint>=r),l=t.filter(f=>f.maxPoint>0||!f.isEmpty&&f.maxPoint>=r),a=so(o,l,i),h=new ri(o,a,r),c=new ri(l,a,r);i.iterGaps((f,u,d)=>ro(h,f,c,u,d,s)),i.empty&&i.length==0&&ro(h,0,c,0,0,s)}static eq(e,t,i=0,s){s==null&&(s=1e9);let r=e.filter(c=>!c.isEmpty&&t.indexOf(c)<0),o=t.filter(c=>!c.isEmpty&&e.indexOf(c)<0);if(r.length!=o.length)return!1;if(!r.length)return!0;let l=so(r,o),a=new ri(r,l,0).goto(i),h=new ri(o,l,0).goto(i);for(;;){if(a.to!=h.to||!Ns(a.active,h.active)||a.point&&(!h.point||!a.point.eq(h.point)))return!1;if(a.to>s)return!0;a.next(),h.next()}}static spans(e,t,i,s,r=-1){let o=new ri(e,null,r).goto(t),l=t,a=o.openStart;for(;;){let h=Math.min(o.to,i);if(o.point?(s.point(l,h,o.point,o.activeForPoint(o.to),a,o.pointRank),a=o.openEnd(h)+(o.to>h?1:0)):h>l&&(s.span(l,h,o.active,a),a=o.openEnd(h)),o.to>i)break;l=o.to,o.next()}return a}static of(e,t=!1){let i=new Ct;for(let s of e instanceof Ls?[e]:t?_f(e):e)i.add(s.from,s.to,s.value);return i.finish()}}F.empty=new F([],[],null,-1);function _f(n){if(n.length>1)for(let e=n[0],t=1;t0)return n.slice().sort(Is);e=i}return n}F.empty.nextLayer=F.empty;class Ct{constructor(){this.chunks=[],this.chunkPos=[],this.chunkStart=-1,this.last=null,this.lastFrom=-1e9,this.lastTo=-1e9,this.from=[],this.to=[],this.value=[],this.maxPoint=-1,this.setMaxPoint=-1,this.nextLayer=null}finishChunk(e){this.chunks.push(new 
xr(this.from,this.to,this.value,this.maxPoint)),this.chunkPos.push(this.chunkStart),this.chunkStart=-1,this.setMaxPoint=Math.max(this.setMaxPoint,this.maxPoint),this.maxPoint=-1,e&&(this.from=[],this.to=[],this.value=[])}add(e,t,i){this.addInner(e,t,i)||(this.nextLayer||(this.nextLayer=new Ct)).add(e,t,i)}addInner(e,t,i){let s=e-this.lastTo||i.startSide-this.last.endSide;if(s<=0&&(e-this.lastFrom||i.startSide-this.last.startSide)<0)throw new Error("Ranges must be added sorted by `from` position and `startSide`");return s<0?!1:(this.from.length==250&&this.finishChunk(!0),this.chunkStart<0&&(this.chunkStart=e),this.from.push(e-this.chunkStart),this.to.push(t-this.chunkStart),this.last=i,this.lastFrom=e,this.lastTo=t,this.value.push(i),i.point&&(this.maxPoint=Math.max(this.maxPoint,t-e)),!0)}addChunk(e,t){if((e-this.lastTo||t.value[0].startSide-this.last.endSide)<0)return!1;this.from.length&&this.finishChunk(!0),this.setMaxPoint=Math.max(this.setMaxPoint,t.maxPoint),this.chunks.push(t),this.chunkPos.push(e);let i=t.value.length-1;return this.last=t.value[i],this.lastFrom=t.from[i]+e,this.lastTo=t.to[i]+e,!0}finish(){return this.finishInner(F.empty)}finishInner(e){if(this.from.length&&this.finishChunk(!1),this.chunks.length==0)return e;let t=F.create(this.chunkPos,this.chunks,this.nextLayer?this.nextLayer.finishInner(e):e,this.setMaxPoint);return this.from=null,t}}function so(n,e,t){let i=new Map;for(let r of n)for(let o=0;o=this.minPoint)break}}setRangeIndex(e){if(e==this.layer.chunk[this.chunkIndex].value.length){if(this.chunkIndex++,this.skip)for(;this.chunkIndex=i&&s.push(new Aa(o,t,i,r));return s.length==1?s[0]:new Si(s)}get startSide(){return this.value?this.value.startSide:0}goto(e,t=-1e9){for(let i of this.heap)i.goto(e,t);for(let i=this.heap.length>>1;i>=0;i--)Yn(this.heap,i);return this.next(),this}forward(e,t){for(let i of this.heap)i.forward(e,t);for(let 
i=this.heap.length>>1;i>=0;i--)Yn(this.heap,i);(this.to-e||this.value.endSide-t)<0&&this.next()}next(){if(this.heap.length==0)this.from=this.to=1e9,this.value=null,this.rank=-1;else{let e=this.heap[0];this.from=e.from,this.to=e.to,this.value=e.value,this.rank=e.rank,e.value&&e.next(),Yn(this.heap,0)}}}function Yn(n,e){for(let t=n[e];;){let i=(e<<1)+1;if(i>=n.length)break;let s=n[i];if(i+1=0&&(s=n[i+1],i++),t.compare(s)<0)break;n[i]=t,n[e]=s,e=i}}class ri{constructor(e,t,i){this.minPoint=i,this.active=[],this.activeTo=[],this.activeRank=[],this.minActive=-1,this.point=null,this.pointFrom=0,this.pointRank=0,this.to=-1e9,this.endSide=0,this.openStart=-1,this.cursor=Si.from(e,t,i)}goto(e,t=-1e9){return this.cursor.goto(e,t),this.active.length=this.activeTo.length=this.activeRank.length=0,this.minActive=-1,this.to=e,this.endSide=t,this.openStart=-1,this.next(),this}forward(e,t){for(;this.minActive>-1&&(this.activeTo[this.minActive]-e||this.active[this.minActive].endSide-t)<0;)this.removeActive(this.minActive);this.cursor.forward(e,t)}removeActive(e){Hi(this.active,e),Hi(this.activeTo,e),Hi(this.activeRank,e),this.minActive=oo(this.active,this.activeTo)}addActive(e){let t=0,{value:i,to:s,rank:r}=this.cursor;for(;t-1&&(this.activeTo[r]-this.cursor.from||this.active[r].endSide-this.cursor.startSide)<0){if(this.activeTo[r]>e){this.to=this.activeTo[r],this.endSide=this.active[r].endSide;break}this.removeActive(r),i&&Hi(i,r)}else if(this.cursor.value)if(this.cursor.from>e){this.to=this.cursor.from,this.endSide=this.cursor.startSide;break}else{let o=this.cursor.value;if(!o.point)this.addActive(i),this.cursor.frome&&s++,this.cursor.next();else if(t&&this.cursor.to==this.to&&this.cursor.from=0&&!(this.activeRank[i]e||this.activeTo[i]==e&&this.active[i].endSide>=this.point.endSide)&&t.push(this.active[i]);return t.reverse()}openEnd(e){let t=0;for(let i=this.activeTo.length-1;i>=0&&this.activeTo[i]>e;i--)t++;return t}}function ro(n,e,t,i,s,r){n.goto(e),t.goto(i);let 
o=i+s,l=i,a=i-e;for(;;){let h=n.to+a-t.to||n.endSide-t.endSide,c=h<0?n.to+a:t.to,f=Math.min(c,o);if(n.point||t.point?n.point&&t.point&&(n.point==t.point||n.point.eq(t.point))&&Ns(n.activeForPoint(n.to+a),t.activeForPoint(t.to))||r.comparePoint(l,f,n.point,t.point):f>l&&!Ns(n.active,t.active)&&r.compareRange(l,f,n.active,t.active),c>o)break;l=c,h<=0&&n.next(),h>=0&&t.next()}}function Ns(n,e){if(n.length!=e.length)return!1;for(let t=0;t=e;i--)n[i+1]=n[i];n[e]=t}function oo(n,e){let t=-1,i=1e9;for(let s=0;s=e)return s;if(s==n.length)break;r+=n.charCodeAt(s)==9?t-r%t:1,s=ve(n,s)}return i===!0?-1:n.length}const Vs="ͼ",lo=typeof Symbol>"u"?"__"+Vs:Symbol.for(Vs),Fs=typeof Symbol>"u"?"__styleSet"+Math.floor(Math.random()*1e8):Symbol("styleSet"),ao=typeof globalThis<"u"?globalThis:typeof window<"u"?window:{};class lt{constructor(e,t){this.rules=[];let{finish:i}=t||{};function s(o){return/^@/.test(o)?[o]:o.split(/,\s*/)}function r(o,l,a,h){let c=[],f=/^@(\w+)\b/.exec(o[0]),u=f&&f[1]=="keyframes";if(f&&l==null)return a.push(o[0]+";");for(let d in l){let p=l[d];if(/&/.test(d))r(d.split(/,\s*/).map(g=>o.map(y=>g.replace(/&/,y))).reduce((g,y)=>g.concat(y)),p,a);else if(p&&typeof p=="object"){if(!f)throw new RangeError("The value of a property ("+d+") should be a primitive value.");r(s(d),p,c,u)}else p!=null&&c.push(d.replace(/_.*/,"").replace(/[A-Z]/g,g=>"-"+g.toLowerCase())+": "+p+";")}(c.length||u)&&a.push((i&&!f&&!h?o.map(i):o).join(", ")+" {"+c.join(" ")+"}")}for(let o in e)r(s(o),e[o],this.rules)}getRules(){return this.rules.join(` -`)}static newName(){let e=ao[lo]||1;return ao[lo]=e+1,Vs+e.toString(36)}static mount(e,t){(e[Fs]||new Vf(e)).mount(Array.isArray(t)?t:[t])}}let zi=null;class Vf{constructor(e){if(!e.head&&e.adoptedStyleSheets&&typeof CSSStyleSheet<"u"){if(zi)return e.adoptedStyleSheets=[zi.sheet].concat(e.adoptedStyleSheets),e[Fs]=zi;this.sheet=new 
CSSStyleSheet,e.adoptedStyleSheets=[this.sheet].concat(e.adoptedStyleSheets),zi=this}else{this.styleTag=(e.ownerDocument||e).createElement("style");let t=e.head||e;t.insertBefore(this.styleTag,t.firstChild)}this.modules=[],e[Fs]=this}mount(e){let t=this.sheet,i=0,s=0;for(let r=0;r-1&&(this.modules.splice(l,1),s--,l=-1),l==-1){if(this.modules.splice(s++,0,o),t)for(let a=0;a",191:"?",192:"~",219:"{",220:"|",221:"}",222:'"'},ho=typeof navigator<"u"&&/Chrome\/(\d+)/.exec(navigator.userAgent),Ff=typeof navigator<"u"&&/Mac/.test(navigator.platform),Hf=typeof navigator<"u"&&/MSIE \d|Trident\/(?:[7-9]|\d{2,})\..*rv:(\d+)/.exec(navigator.userAgent),Wf=Ff||ho&&+ho[1]<57;for(var oe=0;oe<10;oe++)at[48+oe]=at[96+oe]=String(oe);for(var oe=1;oe<=24;oe++)at[oe+111]="F"+oe;for(var oe=65;oe<=90;oe++)at[oe]=String.fromCharCode(oe+32),Ci[oe]=String.fromCharCode(oe);for(var Xn in at)Ci.hasOwnProperty(Xn)||(Ci[Xn]=at[Xn]);function zf(n){var e=Wf&&(n.ctrlKey||n.altKey||n.metaKey)||Hf&&n.shiftKey&&n.key&&n.key.length==1||n.key=="Unidentified",t=!e&&n.key||(n.shiftKey?Ci:at)[n.keyCode]||n.key||"Unidentified";return t=="Esc"&&(t="Escape"),t=="Del"&&(t="Delete"),t=="Left"&&(t="ArrowLeft"),t=="Up"&&(t="ArrowUp"),t=="Right"&&(t="ArrowRight"),t=="Down"&&(t="ArrowDown"),t}function wn(n){let e;return n.nodeType==11?e=n.getSelection?n:n.ownerDocument:e=n,e.getSelection()}function Ut(n,e){return e?n==e||n.contains(e.nodeType!=1?e.parentNode:e):!1}function qf(n){let e=n.activeElement;for(;e&&e.shadowRoot;)e=e.shadowRoot.activeElement;return e}function an(n,e){if(!e.anchorNode)return!1;try{return Ut(n,e.anchorNode)}catch{return!1}}function Ai(n){return n.nodeType==3?Gt(n,0,n.nodeValue.length).getClientRects():n.nodeType==1?n.getClientRects():[]}function kn(n,e,t,i){return t?co(n,e,t,i,-1)||co(n,e,t,i,1):!1}function vn(n){for(var e=0;;e++)if(n=n.previousSibling,!n)return e}function co(n,e,t,i,s){for(;;){if(n==t&&e==i)return!0;if(e==(s<0?0:Mi(n))){if(n.nodeName=="DIV")return!1;let 
r=n.parentNode;if(!r||r.nodeType!=1)return!1;e=vn(n)+(s<0?0:1),n=r}else if(n.nodeType==1){if(n=n.childNodes[e+(s<0?-1:0)],n.nodeType==1&&n.contentEditable=="false")return!1;e=s<0?Mi(n):0}else return!1}}function Mi(n){return n.nodeType==3?n.nodeValue.length:n.childNodes.length}const Ma={left:0,right:0,top:0,bottom:0};function Sr(n,e){let t=e?n.left:n.right;return{left:t,right:t,top:n.top,bottom:n.bottom}}function jf(n){return{left:0,right:n.innerWidth,top:0,bottom:n.innerHeight}}function Kf(n,e,t,i,s,r,o,l){let a=n.ownerDocument,h=a.defaultView||window;for(let c=n;c;)if(c.nodeType==1){let f,u=c==a.body;if(u)f=jf(h);else{if(c.scrollHeight<=c.clientHeight&&c.scrollWidth<=c.clientWidth){c=c.assignedSlot||c.parentNode;continue}let g=c.getBoundingClientRect();f={left:g.left,right:g.left+c.clientWidth,top:g.top,bottom:g.top+c.clientHeight}}let d=0,p=0;if(s=="nearest")e.top0&&e.bottom>f.bottom+p&&(p=e.bottom-f.bottom+p+o)):e.bottom>f.bottom&&(p=e.bottom-f.bottom+o,t<0&&e.top-p0&&e.right>f.right+d&&(d=e.right-f.right+d+r)):e.right>f.right&&(d=e.right-f.right+r,t<0&&e.leftt)return f.domBoundsAround(e,t,h);if(u>=e&&s==-1&&(s=a,r=h),h>t&&f.dom.parentNode==this.dom){o=a,l=c;break}c=u,h=u+f.breakAfter}return{from:r,to:l<0?i+this.length:l,startDOM:(s?this.children[s-1].dom.nextSibling:null)||this.dom.firstChild,endDOM:o=0?this.children[o].dom:null}}markDirty(e=!1){this.dirty|=2,this.markParentsDirty(e)}markParentsDirty(e){for(let t=this.parent;t;t=t.parent){if(e&&(t.dirty|=2),t.dirty&1)return;t.dirty|=1,e=!1}}setParent(e){this.parent!=e&&(this.parent=e,this.dirty&&this.markParentsDirty(!0))}setDOM(e){this.dom&&(this.dom.cmView=null),this.dom=e,e.cmView=this}get rootView(){for(let e=this;;){let t=e.parent;if(!t)return e;e=t}}replaceChildren(e,t,i=Cr){this.markDirty();for(let s=e;sthis.pos||e==this.pos&&(t>0||this.i==0||this.children[this.i-1].breakAfter))return this.off=e-this.pos,this;let i=this.children[--this.i];this.pos-=i.length+i.breakAfter}}}function 
Ba(n,e,t,i,s,r,o,l,a){let{children:h}=n,c=h.length?h[e]:null,f=r.length?r[r.length-1]:null,u=f?f.breakAfter:o;if(!(e==i&&c&&!o&&!u&&r.length<2&&c.merge(t,s,r.length?f:null,t==0,l,a))){if(i0&&(!o&&r.length&&c.merge(t,c.length,r[0],!1,l,0)?c.breakAfter=r.shift().breakAfter:(t2);var A={mac:go||/Mac/.test(ke.platform),windows:/Win/.test(ke.platform),linux:/Linux|X11/.test(ke.platform),ie:In,ie_version:Ea?Hs.documentMode||6:zs?+zs[1]:Ws?+Ws[1]:0,gecko:po,gecko_version:po?+(/Firefox\/(\d+)/.exec(ke.userAgent)||[0,0])[1]:0,chrome:!!Zn,chrome_version:Zn?+Zn[1]:0,ios:go,android:/Android\b/.test(ke.userAgent),webkit:mo,safari:Ra,webkit_version:mo?+(/\bAppleWebKit\/(\d+)/.exec(navigator.userAgent)||[0,0])[1]:0,tabSize:Hs.documentElement.style.tabSize!=null?"tab-size":"-moz-tab-size"};const Jf=256;class ht extends q{constructor(e){super(),this.text=e}get length(){return this.text.length}createDOM(e){this.setDOM(e||document.createTextNode(this.text))}sync(e){this.dom||this.createDOM(),this.dom.nodeValue!=this.text&&(e&&e.node==this.dom&&(e.written=!0),this.dom.nodeValue=this.text)}reuseDOM(e){e.nodeType==3&&this.createDOM(e)}merge(e,t,i){return i&&(!(i instanceof ht)||this.length-(t-e)+i.length>Jf)?!1:(this.text=this.text.slice(0,e)+(i?i.text:"")+this.text.slice(t),this.markDirty(),!0)}split(e){let t=new ht(this.text.slice(e));return this.text=this.text.slice(0,e),this.markDirty(),t}localPosFromDOM(e,t){return e==this.dom?t:t?this.text.length:0}domAtPos(e){return new fe(this.dom,e)}domBoundsAround(e,t,i){return{from:i,to:i+this.length,startDOM:this.dom,endDOM:this.dom.nextSibling}}coordsAt(e,t){return qs(this.dom,e,t)}}class $e extends q{constructor(e,t=[],i=0){super(),this.mark=e,this.children=t,this.length=i;for(let s of t)s.setParent(this)}setAttrs(e){if(Ta(e),this.mark.class&&(e.className=this.mark.class),this.mark.attrs)for(let t in this.mark.attrs)e.setAttribute(t,this.mark.attrs[t]);return 
e}reuseDOM(e){e.nodeName==this.mark.tagName.toUpperCase()&&(this.setDOM(e),this.dirty|=6)}sync(e){this.dom?this.dirty&4&&this.setAttrs(this.dom):this.setDOM(this.setAttrs(document.createElement(this.mark.tagName))),super.sync(e)}merge(e,t,i,s,r,o){return i&&(!(i instanceof $e&&i.mark.eq(this.mark))||e&&r<=0||te&&t.push(i=e&&(s=r),i=a,r++}let o=this.length-e;return this.length=e,s>-1&&(this.children.length=s,this.markDirty()),new $e(this.mark,t,o)}domAtPos(e){return Na(this,e)}coordsAt(e,t){return Va(this,e,t)}}function qs(n,e,t){let i=n.nodeValue.length;e>i&&(e=i);let s=e,r=e,o=0;e==0&&t<0||e==i&&t>=0?A.chrome||A.gecko||(e?(s--,o=1):r=0)?0:l.length-1];return A.safari&&!o&&a.width==0&&(a=Array.prototype.find.call(l,h=>h.width)||a),o?Sr(a,o<0):a||null}class st extends q{constructor(e,t,i){super(),this.widget=e,this.length=t,this.side=i,this.prevWidget=null}static create(e,t,i){return new(e.customView||st)(e,t,i)}split(e){let t=st.create(this.widget,this.length-e,this.side);return this.length-=e,t}sync(){(!this.dom||!this.widget.updateDOM(this.dom))&&(this.dom&&this.prevWidget&&this.prevWidget.destroy(this.dom),this.prevWidget=null,this.setDOM(this.widget.toDOM(this.editorView)),this.dom.contentEditable="false")}getSide(){return this.side}merge(e,t,i,s,r,o){return i&&(!(i instanceof st)||!this.widget.compare(i.widget)||e>0&&r<=0||t0?i.length-1:0;s=i[r],!(e>0?r==0:r==i.length-1||s.top0?-1:1);return this.length?s:Sr(s,this.side>0)}get isEditable(){return!1}destroy(){super.destroy(),this.dom&&this.widget.destroy(this.dom)}}class La extends st{domAtPos(e){let{topView:t,text:i}=this.widget;return t?js(e,0,t,i,(s,r)=>s.domAtPos(r),s=>new fe(i,Math.min(s,i.nodeValue.length))):new fe(i,Math.min(e,i.nodeValue.length))}sync(){this.setDOM(this.widget.toDOM())}localPosFromDOM(e,t){let{topView:i,text:s}=this.widget;return i?Ia(e,t,i,s):Math.min(t,this.length)}ignoreMutation(){return!1}get overrideDOMText(){return null}coordsAt(e,t){let{topView:i,text:s}=this.widget;return 
i?js(e,t,i,s,(r,o,l)=>r.coordsAt(o,l),(r,o)=>qs(s,r,o)):qs(s,e,t)}destroy(){var e;super.destroy(),(e=this.widget.topView)===null||e===void 0||e.destroy()}get isEditable(){return!0}canReuseDOM(){return!0}}function js(n,e,t,i,s,r){if(t instanceof $e){for(let o=t.dom.firstChild;o;o=o.nextSibling){let l=q.get(o);if(!l)return r(n,e);let a=Ut(o,i),h=l.length+(a?i.nodeValue.length:0);if(n0?-1:1);return i&&i.topt.top?{left:t.left,right:t.right,top:i.top,bottom:i.bottom}:t}get overrideDOMText(){return _.empty}}ht.prototype.children=st.prototype.children=Jt.prototype.children=Cr;function Yf(n,e){let t=n.parent,i=t?t.children.indexOf(n):-1;for(;t&&i>=0;)if(e<0?i>0:ir&&e0;r--){let o=i[r-1];if(o.dom.parentNode==t)return o.domAtPos(o.length)}for(let r=s;r0&&e instanceof $e&&s.length&&(i=s[s.length-1])instanceof $e&&i.mark.eq(e.mark)?_a(i,e.children[0],t-1):(s.push(e),e.setParent(n)),n.length+=e.length}function Va(n,e,t){let i=null,s=-1,r=null,o=-1;function l(h,c){for(let f=0,u=0;f=c&&(d.children.length?l(d,c-u):!r&&(p>c||u==p&&d.getSide()>0)?(r=d,o=c-u):(u0?3e8:-4e8:t>0?1e8:-1e8,new At(e,t,t,i,e.widget||null,!1)}static replace(e){let t=!!e.block,i,s;if(e.isBlockGap)i=-5e8,s=4e8;else{let{start:r,end:o}=Fa(e,t);i=(r?t?-3e8:-1:5e8)-1,s=(o?t?2e8:1:-6e8)+1}return new At(e,i,s,t,e.widget||null,!0)}static line(e){return new Ii(e)}static set(e,t=!1){return F.of(e,t)}hasHeight(){return this.widget?this.widget.estimatedHeight>-1:!1}}E.none=F.empty;class Nn extends E{constructor(e){let{start:t,end:i}=Fa(e);super(t?-1:5e8,i?1:-6e8,null,e),this.tagName=e.tagName||"span",this.class=e.class||"",this.attrs=e.attributes||null}eq(e){return this==e||e instanceof Nn&&this.tagName==e.tagName&&this.class==e.class&&Ar(this.attrs,e.attrs)}range(e,t=e){if(e>=t)throw new RangeError("Mark decorations may not be empty");return super.range(e,t)}}Nn.prototype.point=!1;class Ii extends E{constructor(e){super(-2e8,-2e8,null,e)}eq(e){return e instanceof 
Ii&&Ar(this.spec.attributes,e.spec.attributes)}range(e,t=e){if(t!=e)throw new RangeError("Line decoration ranges must be zero-length");return super.range(e,t)}}Ii.prototype.mapMode=le.TrackBefore;Ii.prototype.point=!0;class At extends E{constructor(e,t,i,s,r,o){super(t,i,r,e),this.block=s,this.isReplace=o,this.mapMode=s?t<=0?le.TrackBefore:le.TrackAfter:le.TrackDel}get type(){return this.startSide=5}eq(e){return e instanceof At&&Zf(this.widget,e.widget)&&this.block==e.block&&this.startSide==e.startSide&&this.endSide==e.endSide}range(e,t=e){if(this.isReplace&&(e>t||e==t&&this.startSide>0&&this.endSide<=0))throw new RangeError("Invalid range for replacement decoration");if(!this.isReplace&&t!=e)throw new RangeError("Widget decorations can only have zero-length ranges");return super.range(e,t)}}At.prototype.point=!0;function Fa(n,e=!1){let{inclusiveStart:t,inclusiveEnd:i}=n;return t==null&&(t=n.inclusive),i==null&&(i=n.inclusive),{start:t??e,end:i??e}}function Zf(n,e){return n==e||!!(n&&e&&n.compare(e))}function Us(n,e,t,i=0){let s=t.length-1;s>=0&&t[s]+i>=n?t[s]=Math.max(t[s],e):t.push(n,e)}class pe extends q{constructor(){super(...arguments),this.children=[],this.length=0,this.prevAttrs=void 0,this.attrs=null,this.breakAfter=0}merge(e,t,i,s,r,o){if(i){if(!(i instanceof pe))return!1;this.dom||i.transferDOM(this)}return s&&this.setDeco(i?i.attrs:null),Pa(this,e,t,i?i.children:[],r,o),!0}split(e){let t=new pe;if(t.breakAfter=this.breakAfter,this.length==0)return t;let{i,off:s}=this.childPos(e);s&&(t.append(this.children[i].split(s),0),this.children[i].merge(s,this.children[i].length,null,!1,0,0),i++);for(let r=i;r0&&this.children[i-1].length==0;)this.children[--i].destroy();return this.children.length=i,this.markDirty(),this.length=e,t}transferDOM(e){this.dom&&(this.markDirty(),e.setDOM(this.dom),e.prevAttrs=this.prevAttrs===void 0?this.attrs:this.prevAttrs,this.prevAttrs=void 
0,this.dom=null)}setDeco(e){Ar(this.attrs,e)||(this.dom&&(this.prevAttrs=this.attrs,this.markDirty()),this.attrs=e)}append(e,t){_a(this,e,t)}addLineDeco(e){let t=e.spec.attributes,i=e.spec.class;t&&(this.attrs=Ks(t,this.attrs||{})),i&&(this.attrs=Ks({class:i},this.attrs||{}))}domAtPos(e){return Na(this,e)}reuseDOM(e){e.nodeName=="DIV"&&(this.setDOM(e),this.dirty|=6)}sync(e){var t;this.dom?this.dirty&4&&(Ta(this.dom),this.dom.className="cm-line",this.prevAttrs=this.attrs?null:void 0):(this.setDOM(document.createElement("div")),this.dom.className="cm-line",this.prevAttrs=this.attrs?null:void 0),this.prevAttrs!==void 0&&($s(this.dom,this.prevAttrs,this.attrs),this.dom.classList.add("cm-line"),this.prevAttrs=void 0),super.sync(e);let i=this.dom.lastChild;for(;i&&q.get(i)instanceof $e;)i=i.lastChild;if(!i||!this.length||i.nodeName!="BR"&&((t=q.get(i))===null||t===void 0?void 0:t.isEditable)==!1&&(!A.ios||!this.children.some(s=>s instanceof ht))){let s=document.createElement("BR");s.cmIgnore=!0,this.dom.appendChild(s)}}measureTextSize(){if(this.children.length==0||this.length>20)return null;let e=0;for(let t of this.children){if(!(t instanceof ht)||/[^ -~]/.test(t.text))return null;let i=Ai(t.dom);if(i.length!=1)return null;e+=i[0].width}return e?{lineHeight:this.dom.getBoundingClientRect().height,charWidth:e/this.length}:null}coordsAt(e,t){return Va(this,e,t)}become(e){return!1}get type(){return H.Text}static find(e,t){for(let i=0,s=0;i=t){if(r instanceof pe)return r;if(o>t)break}s=o+r.breakAfter}return null}}class xt extends q{constructor(e,t,i){super(),this.widget=e,this.length=t,this.type=i,this.breakAfter=0,this.prevWidget=null}merge(e,t,i,s,r,o){return i&&(!(i instanceof xt)||!this.widget.compare(i.widget)||e>0&&r<=0||t0;){if(this.textOff==this.text.length){let{value:r,lineBreak:o,done:l}=this.cursor.next(this.skip);if(this.skip=0,l)throw new Error("Ran out of text content when drawing inline 
views");if(o){this.posCovered()||this.getLine(),this.content.length?this.content[this.content.length-1].breakAfter=1:this.breakAtStart=1,this.flushBuffer([]),this.curLine=null,e--;continue}else this.text=r,this.textOff=0}let s=Math.min(this.text.length-this.textOff,e,512);this.flushBuffer(t.slice(0,i)),this.getLine().append(qi(new ht(this.text.slice(this.textOff,this.textOff+s)),t),i),this.atCursorPos=!0,this.textOff+=s,e-=s,i=0}}span(e,t,i,s){this.buildText(t-e,i,s),this.pos=t,this.openStart<0&&(this.openStart=s)}point(e,t,i,s,r,o){if(this.disallowBlockEffectsFor[o]&&i instanceof At){if(i.block)throw new RangeError("Block decorations may not be specified via plugins");if(t>this.doc.lineAt(this.pos).to)throw new RangeError("Decorations that replace line breaks may not be specified via plugins")}let l=t-e;if(i instanceof At)if(i.block){let{type:a}=i;a==H.WidgetAfter&&!this.posCovered()&&this.getLine(),this.addBlockWidget(new xt(i.widget||new yo("div"),l,a))}else{let a=st.create(i.widget||new yo("span"),l,l?0:i.startSide),h=this.atCursorPos&&!a.isEditable&&r<=s.length&&(e0),c=!a.isEditable&&(en.some(e=>e)}),$a=D.define({combine:n=>n.some(e=>e)});class xn{constructor(e,t="nearest",i="nearest",s=5,r=5){this.range=e,this.y=t,this.x=i,this.yMargin=s,this.xMargin=r}map(e){return e.empty?this:new xn(this.range.map(e),this.y,this.x,this.yMargin,this.xMargin)}}const bo=R.define({map:(n,e)=>n.map(e)});function Ee(n,e,t){let i=n.facet(qa);i.length?i[0](e):window.onerror?window.onerror(String(e),t,void 0,void 0,e):t?console.error(t+":",e):console.error(e)}const _n=D.define({combine:n=>n.length?n[0]:!0});let Qf=0;const fi=D.define();class ue{constructor(e,t,i,s){this.id=e,this.create=t,this.domEventHandlers=i,this.extension=s(this)}static define(e,t){const{eventHandlers:i,provide:s,decorations:r}=t||{};return new ue(Qf++,e,i,o=>{let l=[fi.of(o)];return r&&l.push(Di.of(a=>{let h=a.plugin(o);return h?r(h):E.none})),s&&l.push(s(o)),l})}static fromClass(e,t){return ue.define(i=>new 
e(i),t)}}class Qn{constructor(e){this.spec=e,this.mustUpdate=null,this.value=null}update(e){if(this.value){if(this.mustUpdate){let t=this.mustUpdate;if(this.mustUpdate=null,this.value.update)try{this.value.update(t)}catch(i){if(Ee(t.state,i,"CodeMirror plugin crashed"),this.value.destroy)try{this.value.destroy()}catch{}this.deactivate()}}}else if(this.spec)try{this.value=this.spec.create(e)}catch(t){Ee(e.state,t,"CodeMirror plugin crashed"),this.deactivate()}return this}destroy(e){var t;if(!((t=this.value)===null||t===void 0)&&t.destroy)try{this.value.destroy()}catch(i){Ee(e.state,i,"CodeMirror plugin crashed")}}deactivate(){this.spec=this.value=null}}const Ua=D.define(),Ga=D.define(),Di=D.define(),Ja=D.define(),Ya=D.define(),ui=D.define();class Ke{constructor(e,t,i,s){this.fromA=e,this.toA=t,this.fromB=i,this.toB=s}join(e){return new Ke(Math.min(this.fromA,e.fromA),Math.max(this.toA,e.toA),Math.min(this.fromB,e.fromB),Math.max(this.toB,e.toB))}addToSet(e){let t=e.length,i=this;for(;t>0;t--){let s=e[t-1];if(!(s.fromA>i.toA)){if(s.toAc)break;r+=2}if(!a)return i;new Ke(a.fromA,a.toA,a.fromB,a.toB).addToSet(i),o=a.toA,l=a.toB}}}class Sn{constructor(e,t,i){this.view=e,this.state=t,this.transactions=i,this.flags=0,this.startState=e.state,this.changes=te.empty(this.startState.doc.length);for(let o of i)this.changes=this.changes.compose(o.changes);let s=[];this.changes.iterChangedRanges((o,l,a,h)=>s.push(new Ke(o,l,a,h))),this.changedRanges=s;let r=e.hasFocus;r!=e.inputState.notifiedFocused&&(e.inputState.notifiedFocused=r,this.flags|=1)}static create(e,t,i){return new Sn(e,t,i)}get viewportChanged(){return(this.flags&4)>0}get heightChanged(){return(this.flags&2)>0}get geometryChanged(){return this.docChanged||(this.flags&10)>0}get focusChanged(){return(this.flags&1)>0}get docChanged(){return!this.changes.empty}get selectionSet(){return this.transactions.some(e=>e.selection)}get empty(){return this.flags==0&&this.transactions.length==0}}var Y=function(n){return 
n[n.LTR=0]="LTR",n[n.RTL=1]="RTL",n}(Y||(Y={}));const Js=Y.LTR,eu=Y.RTL;function Xa(n){let e=[];for(let t=0;t=t){if(l.level==i)return o;(r<0||(s!=0?s<0?l.fromt:e[r].level>l.level))&&(r=o)}}if(r<0)throw new RangeError("Index out of range");return r}}const J=[];function ru(n,e){let t=n.length,i=e==Js?1:2,s=e==Js?2:1;if(!n||i==1&&!su.test(n))return Za(t);for(let o=0,l=i,a=i;o=0;u-=3)if(Le[u+1]==-c){let d=Le[u+2],p=d&2?i:d&4?d&1?s:i:0;p&&(J[o]=J[Le[u]]=p),l=u;break}}else{if(Le.length==189)break;Le[l++]=o,Le[l++]=h,Le[l++]=a}else if((f=J[o])==2||f==1){let u=f==i;a=u?0:1;for(let d=l-3;d>=0;d-=3){let p=Le[d+2];if(p&2)break;if(u)Le[d+2]|=2;else{if(p&4)break;Le[d+2]|=4}}}for(let o=0;ol;){let c=h,f=J[--h]!=2;for(;h>l&&f==(J[h-1]!=2);)h--;r.push(new $t(h,c,f?2:1))}else r.push(new $t(l,o,0))}else for(let o=0;o1)for(let a of this.points)a.node==e&&a.pos>this.text.length&&(a.pos-=o-1);i=r+o}}readNode(e){if(e.cmIgnore)return;let t=q.get(e),i=t&&t.overrideDOMText;if(i!=null){this.findPointInside(e,i.length);for(let s=i.iter();!s.next().done;)s.lineBreak?this.lineBreak():this.append(s.value)}else e.nodeType==3?this.readTextNode(e):e.nodeName=="BR"?e.nextSibling&&this.lineBreak():e.nodeType==1&&this.readRange(e.firstChild,null)}findPointBefore(e,t){for(let i of this.points)i.node==e&&e.childNodes[i.offset]==t&&(i.pos=this.text.length)}findPointInside(e,t){for(let i of this.points)(e.nodeType==3?i.node==e:e.contains(i.node))&&(i.pos=this.text.length+Math.min(t,i.offset))}}function wo(n){return n.nodeType==1&&/^(DIV|P|LI|UL|OL|BLOCKQUOTE|DD|DT|H\d|SECTION|PRE)$/.test(n.nodeName)}class ko{constructor(e,t){this.node=e,this.offset=t,this.pos=-1}}class vo extends q{constructor(e){super(),this.view=e,this.compositionDeco=E.none,this.decorations=[],this.dynamicDecorationMap=[],this.minWidth=0,this.minWidthFrom=0,this.minWidthTo=0,this.impreciseAnchor=null,this.impreciseHead=null,this.forceSelection=!1,this.lastUpdate=Date.now(),this.setDOM(e.contentDOM),this.children=[new 
pe],this.children[0].setParent(this),this.updateDeco(),this.updateInner([new Ke(0,0,0,e.state.doc.length)],0)}get editorView(){return this.view}get length(){return this.view.state.doc.length}update(e){let t=e.changedRanges;this.minWidth>0&&t.length&&(t.every(({fromA:o,toA:l})=>lthis.minWidthTo)?(this.minWidthFrom=e.changes.mapPos(this.minWidthFrom,1),this.minWidthTo=e.changes.mapPos(this.minWidthTo,1)):this.minWidth=this.minWidthFrom=this.minWidthTo=0),this.view.inputState.composing<0?this.compositionDeco=E.none:(e.transactions.length||this.dirty)&&(this.compositionDeco=au(this.view,e.changes)),(A.ie||A.chrome)&&!this.compositionDeco.size&&e&&e.state.doc.lines!=e.startState.doc.lines&&(this.forceSelection=!0);let i=this.decorations,s=this.updateDeco(),r=uu(i,s,e.changes);return t=Ke.extendWithRanges(t,r),this.dirty==0&&t.length==0?!1:(this.updateInner(t,e.startState.doc.length),e.transactions.length&&(this.lastUpdate=Date.now()),!0)}updateInner(e,t){this.view.viewState.mustMeasureContent=!0,this.updateChildren(e,t);let{observer:i}=this.view;i.ignore(()=>{this.dom.style.height=this.view.viewState.contentHeight+"px",this.dom.style.flexBasis=this.minWidth?this.minWidth+"px":"";let r=A.chrome||A.ios?{node:i.selectionRange.focusNode,written:!1}:void 0;this.sync(r),this.dirty=0,r&&(r.written||i.selectionRange.focusNode!=r.node)&&(this.forceSelection=!0),this.dom.style.height=""});let s=[];if(this.view.viewport.from||this.view.viewport.to=0?e[s]:null;if(!r)break;let{fromA:o,toA:l,fromB:a,toB:h}=r,{content:c,breakAtStart:f,openStart:u,openEnd:d}=Mr.build(this.view.state.doc,a,h,this.decorations,this.dynamicDecorationMap),{i:p,off:g}=i.findPos(l,1),{i:y,off:b}=i.findPos(o,-1);Ba(this,y,b,p,g,c,f,u,d)}}updateSelection(e=!1,t=!1){if((e||!this.view.observer.selectionRange.focusNode)&&this.view.observer.readSelectionRange(),!(t||this.mayControlSelection()))return;let i=this.forceSelection;this.forceSelection=!1;let 
s=this.view.state.selection.main,r=this.domAtPos(s.anchor),o=s.empty?r:this.domAtPos(s.head);if(A.gecko&&s.empty&&lu(r)){let a=document.createTextNode("");this.view.observer.ignore(()=>r.node.insertBefore(a,r.node.childNodes[r.offset]||null)),r=o=new fe(a,0),i=!0}let l=this.view.observer.selectionRange;(i||!l.focusNode||!kn(r.node,r.offset,l.anchorNode,l.anchorOffset)||!kn(o.node,o.offset,l.focusNode,l.focusOffset))&&(this.view.observer.ignore(()=>{A.android&&A.chrome&&this.dom.contains(l.focusNode)&&du(l.focusNode,this.dom)&&(this.dom.blur(),this.dom.focus({preventScroll:!0}));let a=wn(this.view.root);if(a)if(s.empty){if(A.gecko){let h=cu(r.node,r.offset);if(h&&h!=3){let c=ih(r.node,r.offset,h==1?1:-1);c&&(r=new fe(c,h==1?0:c.nodeValue.length))}}a.collapse(r.node,r.offset),s.bidiLevel!=null&&l.cursorBidiLevel!=null&&(l.cursorBidiLevel=s.bidiLevel)}else if(a.extend){a.collapse(r.node,r.offset);try{a.extend(o.node,o.offset)}catch{}}else{let h=document.createRange();s.anchor>s.head&&([r,o]=[o,r]),h.setEnd(o.node,o.offset),h.setStart(r.node,r.offset),a.removeAllRanges(),a.addRange(h)}}),this.view.observer.setSelectionRange(r,o)),this.impreciseAnchor=r.precise?null:new fe(l.anchorNode,l.anchorOffset),this.impreciseHead=o.precise?null:new fe(l.focusNode,l.focusOffset)}enforceCursorAssoc(){if(this.compositionDeco.size)return;let{view:e}=this,t=e.state.selection.main,i=wn(e.root),{anchorNode:s,anchorOffset:r}=e.observer.selectionRange;if(!i||!t.empty||!t.assoc||!i.modify)return;let o=pe.find(this,t.head);if(!o)return;let l=o.posAtStart;if(t.head==l||t.head==l+o.length)return;let a=this.coordsAt(t.head,-1),h=this.coordsAt(t.head,1);if(!a||!h||a.bottom>h.top)return;let c=this.domAtPos(t.head+t.assoc);i.collapse(c.node,c.offset),i.modify("move",t.assoc<0?"forward":"backward","lineboundary"),e.observer.readSelectionRange();let f=e.observer.selectionRange;e.docView.posFromDOM(f.anchorNode,f.anchorOffset)!=t.from&&i.collapse(s,r)}mayControlSelection(){let 
e=this.view.root.activeElement;return e==this.dom||an(this.dom,this.view.observer.selectionRange)&&!(e&&this.dom.contains(e))}nearest(e){for(let t=e;t;){let i=q.get(t);if(i&&i.rootView==this)return i;t=t.parentNode}return null}posFromDOM(e,t){let i=this.nearest(e);if(!i)throw new RangeError("Trying to find position for a DOM position outside of the document");return i.localPosFromDOM(e,t)+i.posAtStart}domAtPos(e){let{i:t,off:i}=this.childCursor().findPos(e,-1);for(;to||e==o&&r.type!=H.WidgetBefore&&r.type!=H.WidgetAfter&&(!s||t==2||this.children[s-1].breakAfter||this.children[s-1].type==H.WidgetBefore&&t>-2))return r.coordsAt(e-o,t);i=o}}measureVisibleLineHeights(e){let t=[],{from:i,to:s}=e,r=this.view.contentDOM.clientWidth,o=r>Math.max(this.view.scrollDOM.clientWidth,this.minWidth)+1,l=-1,a=this.view.textDirection==Y.LTR;for(let h=0,c=0;cs)break;if(h>=i){let d=f.dom.getBoundingClientRect();if(t.push(d.height),o){let p=f.dom.lastChild,g=p?Ai(p):[];if(g.length){let y=g[g.length-1],b=a?y.right-d.left:d.right-y.left;b>l&&(l=b,this.minWidth=r,this.minWidthFrom=h,this.minWidthTo=u)}}}h=u+f.breakAfter}return t}textDirectionAt(e){let{i:t}=this.childPos(e,1);return getComputedStyle(this.children[t].dom).direction=="rtl"?Y.RTL:Y.LTR}measureTextSize(){for(let s of this.children)if(s instanceof pe){let r=s.measureTextSize();if(r)return r}let e=document.createElement("div"),t,i;return e.className="cm-line",e.style.width="99999px",e.textContent="abc def ghi jkl mno pqr stu",this.view.observer.ignore(()=>{this.dom.appendChild(e);let s=Ai(e.firstChild)[0];t=e.getBoundingClientRect().height,i=s?s.width/27:7,e.remove()}),{lineHeight:t,charWidth:i}}childCursor(e=this.length){let t=this.children.length;return t&&(e-=this.children[--t].length),new Oa(this.children,e,t)}computeBlockGapDeco(){let e=[],t=this.view.viewState;for(let i=0,s=0;;s++){let r=s==t.viewports.length?null:t.viewports[s],o=r?r.from-1:this.length;if(o>i){let 
l=t.lineBlockAt(o).bottom-t.lineBlockAt(i).top;e.push(E.replace({widget:new xo(l),block:!0,inclusive:!0,isBlockGap:!0}).range(i,o))}if(!r)break;i=r.to+1}return E.set(e)}updateDeco(){let e=this.view.state.facet(Di).map((t,i)=>(this.dynamicDecorationMap[i]=typeof t=="function")?t(this.view):t);for(let t=e.length;tt.anchor?-1:1),s;if(!i)return;!t.empty&&(s=this.coordsAt(t.anchor,t.anchor>t.head?-1:1))&&(i={left:Math.min(i.left,s.left),top:Math.min(i.top,s.top),right:Math.max(i.right,s.right),bottom:Math.max(i.bottom,s.bottom)});let r=0,o=0,l=0,a=0;for(let c of this.view.state.facet(Ya).map(f=>f(this.view)))if(c){let{left:f,right:u,top:d,bottom:p}=c;f!=null&&(r=Math.max(r,f)),u!=null&&(o=Math.max(o,u)),d!=null&&(l=Math.max(l,d)),p!=null&&(a=Math.max(a,p))}let h={left:i.left-r,top:i.top-l,right:i.right+o,bottom:i.bottom+a};Kf(this.view.scrollDOM,h,t.head0&&t<=0)n=n.childNodes[e-1],e=Mi(n);else if(n.nodeType==1&&e=0)n=n.childNodes[e],e=0;else return null}}function cu(n,e){return n.nodeType!=1?0:(e&&n.childNodes[e-1].contentEditable=="false"?1:0)|(e0;){let h=ve(s.text,o,!1);if(i(s.text.slice(h,o))!=a)break;o=h}for(;ln?e.left-n:Math.max(0,n-e.right)}function gu(n,e){return e.top>n?e.top-n:Math.max(0,n-e.bottom)}function es(n,e){return n.tope.top+1}function So(n,e){return en.bottom?{top:n.top,left:n.left,right:n.right,bottom:e}:n}function Xs(n,e,t){let i,s,r,o,l=!1,a,h,c,f;for(let p=n.firstChild;p;p=p.nextSibling){let g=Ai(p);for(let y=0;yS||o==S&&r>v)&&(i=p,s=b,r=v,o=S,l=!v||(v>0?y0)),v==0?t>b.bottom&&(!c||c.bottomb.top)&&(h=p,f=b):c&&es(c,b)?c=Co(c,b.bottom):f&&es(f,b)&&(f=So(f,b.top))}}if(c&&c.bottom>=t?(i=a,s=c):f&&f.top<=t&&(i=h,s=f),!i)return{node:n,offset:0};let u=Math.max(s.left,Math.min(s.right,e));if(i.nodeType==3)return Ao(i,u,t);if(l&&i.contentEditable!="false")return Xs(i,u,t);let d=Array.prototype.indexOf.call(n.childNodes,i)+(e>=(s.left+s.right)/2?1:0);return{node:n,offset:d}}function Ao(n,e,t){let i=n.nodeValue.length,s=-1,r=1e9,o=0;for(let 
l=0;lt?c.top-t:t-c.bottom)-1;if(c.left-1<=e&&c.right+1>=e&&f=(c.left+c.right)/2,d=u;if((A.chrome||A.gecko)&&Gt(n,l).getBoundingClientRect().left==c.right&&(d=!u),f<=0)return{node:n,offset:l+(d?1:0)};s=l+(d?1:0),r=f}}}return{node:n,offset:s>-1?s:o>0?n.nodeValue.length:0}}function nh(n,{x:e,y:t},i,s=-1){var r;let o=n.contentDOM.getBoundingClientRect(),l=o.top+n.viewState.paddingTop,a,{docHeight:h}=n.viewState,c=t-l;if(c<0)return 0;if(c>h)return n.state.doc.length;for(let b=n.defaultLineHeight/2,v=!1;a=n.elementAtHeight(c),a.type!=H.Text;)for(;c=s>0?a.bottom+b:a.top-b,!(c>=0&&c<=h);){if(v)return i?null:0;v=!0,s=-s}t=l+c;let f=a.from;if(fn.viewport.to)return n.viewport.to==n.state.doc.length?n.state.doc.length:i?null:Mo(n,o,a,e,t);let u=n.dom.ownerDocument,d=n.root.elementFromPoint?n.root:u,p=d.elementFromPoint(e,t);p&&!n.contentDOM.contains(p)&&(p=null),p||(e=Math.max(o.left+1,Math.min(o.right-1,e)),p=d.elementFromPoint(e,t),p&&!n.contentDOM.contains(p)&&(p=null));let g,y=-1;if(p&&((r=n.docView.nearest(p))===null||r===void 0?void 0:r.isEditable)!=!1){if(u.caretPositionFromPoint){let b=u.caretPositionFromPoint(e,t);b&&({offsetNode:g,offset:y}=b)}else if(u.caretRangeFromPoint){let b=u.caretRangeFromPoint(e,t);b&&({startContainer:g,startOffset:y}=b,(!n.contentDOM.contains(g)||A.safari&&yu(g,y,e)||A.chrome&&bu(g,y,e))&&(g=void 0))}}if(!g||!n.docView.dom.contains(g)){let b=pe.find(n.docView,f);if(!b)return c>a.top+a.height/2?a.to:a.from;({node:g,offset:y}=Xs(b.dom,e,t))}return n.docView.posFromDOM(g,y)}function Mo(n,e,t,i,s){let r=Math.round((i-e.left)*n.defaultCharacterWidth);if(n.lineWrapping&&t.height>n.defaultLineHeight*1.5){let l=Math.floor((s-t.top)/n.defaultLineHeight);r+=l*n.viewState.heightOracle.lineLength}let o=n.state.sliceDoc(t.from,t.to);return t.from+_s(o,r,n.state.tabSize)}function yu(n,e,t){let i;if(n.nodeType!=3||e!=(i=n.nodeValue.length))return!1;for(let s=n.nextSibling;s;s=s.nextSibling)if(s.nodeType!=1||s.nodeName!="BR")return!1;return 
Gt(n,i-1,i).getBoundingClientRect().left>t}function bu(n,e,t){if(e!=0)return!1;for(let s=n;;){let r=s.parentNode;if(!r||r.nodeType!=1||r.firstChild!=s)return!1;if(r.classList.contains("cm-line"))break;s=r}let i=n.nodeType==1?n.getBoundingClientRect():Gt(n,0,Math.max(n.nodeValue.length,1)).getBoundingClientRect();return t-i.left>5}function wu(n,e,t,i){let s=n.state.doc.lineAt(e.head),r=!i||!n.lineWrapping?null:n.coordsAtPos(e.assoc<0&&e.head>s.from?e.head-1:e.head);if(r){let a=n.dom.getBoundingClientRect(),h=n.textDirectionAt(s.from),c=n.posAtCoords({x:t==(h==Y.LTR)?a.right-1:a.left+1,y:(r.top+r.bottom)/2});if(c!=null)return w.cursor(c,t?-1:1)}let o=pe.find(n.docView,e.head),l=o?t?o.posAtEnd:o.posAtStart:t?s.to:s.from;return w.cursor(l,t?-1:1)}function Do(n,e,t,i){let s=n.state.doc.lineAt(e.head),r=n.bidiSpans(s),o=n.textDirectionAt(s.from);for(let l=e,a=null;;){let h=ou(s,r,o,l,t),c=Qa;if(!h){if(s.number==(t?n.state.doc.lines:1))return l;c=` -`,s=n.state.doc.line(s.number+(t?1:-1)),r=n.bidiSpans(s),h=w.cursor(t?s.from:s.to)}if(a){if(!a(c))return l}else{if(!i)return h;a=i(c)}l=h}}function ku(n,e,t){let i=n.state.charCategorizer(e),s=i(t);return r=>{let o=i(r);return s==Ae.Space&&(s=o),s==o}}function vu(n,e,t,i){let s=e.head,r=t?1:-1;if(s==(t?n.state.doc.length:0))return w.cursor(s,e.assoc);let o=e.goalColumn,l,a=n.contentDOM.getBoundingClientRect(),h=n.coordsAtPos(s),c=n.documentTop;if(h)o==null&&(o=h.left-a.left),l=r<0?h.top:h.bottom;else{let d=n.viewState.lineBlockAt(s);o==null&&(o=Math.min(a.right-a.left,n.defaultCharacterWidth*(s-d.from))),l=(r<0?d.top:d.bottom)+c}let f=a.left+o,u=i??n.defaultLineHeight>>1;for(let d=0;;d+=10){let p=l+(u+d)*r,g=nh(n,{x:f,y:p},!1,r);if(pa.bottom||(r<0?gs))return w.cursor(g,e.assoc,void 0,o)}}function ts(n,e,t){let i=n.state.facet(Ja).map(s=>s(n));for(;;){let s=!1;for(let r of i)r.between(t.from-1,t.from+1,(o,l,a)=>{t.from>o&&t.fromt.from?w.cursor(o,1):w.cursor(l,-1),s=!0)});if(!s)return t}}class 
xu{constructor(e){this.lastKeyCode=0,this.lastKeyTime=0,this.lastTouchTime=0,this.lastFocusTime=0,this.lastScrollTop=0,this.lastScrollLeft=0,this.chromeScrollHack=-1,this.pendingIOSKey=void 0,this.lastSelectionOrigin=null,this.lastSelectionTime=0,this.lastEscPress=0,this.lastContextMenu=0,this.scrollHandlers=[],this.registeredEvents=[],this.customHandlers=[],this.composing=-1,this.compositionFirstChange=null,this.compositionEndedAt=0,this.mouseSelection=null;for(let t in ne){let i=ne[t];e.contentDOM.addEventListener(t,s=>{!To(e,s)||this.ignoreDuringComposition(s)||t=="keydown"&&this.keydown(e,s)||(this.mustFlushObserver(s)&&e.observer.forceFlush(),this.runCustomHandlers(t,e,s)?s.preventDefault():i(e,s))},Zs[t]),this.registeredEvents.push(t)}A.chrome&&A.chrome_version==102&&e.scrollDOM.addEventListener("wheel",()=>{this.chromeScrollHack<0?e.contentDOM.style.pointerEvents="none":window.clearTimeout(this.chromeScrollHack),this.chromeScrollHack=setTimeout(()=>{this.chromeScrollHack=-1,e.contentDOM.style.pointerEvents=""},100)},{passive:!0}),this.notifiedFocused=e.hasFocus,A.safari&&e.contentDOM.addEventListener("input",()=>null)}setSelectionOrigin(e){this.lastSelectionOrigin=e,this.lastSelectionTime=Date.now()}ensureHandlers(e,t){var i;let s;this.customHandlers=[];for(let r of t)if(s=(i=r.update(e).spec)===null||i===void 0?void 0:i.domEventHandlers){this.customHandlers.push({plugin:r.value,handlers:s});for(let o in s)this.registeredEvents.indexOf(o)<0&&o!="scroll"&&(this.registeredEvents.push(o),e.contentDOM.addEventListener(o,l=>{To(e,l)&&this.runCustomHandlers(o,e,l)&&l.preventDefault()}))}}runCustomHandlers(e,t,i){for(let s of this.customHandlers){let r=s.handlers[e];if(r)try{if(r.call(s.plugin,i,t)||i.defaultPrevented)return!0}catch(o){Ee(t.state,o)}}return!1}runScrollHandlers(e,t){this.lastScrollTop=e.scrollDOM.scrollTop,this.lastScrollLeft=e.scrollDOM.scrollLeft;for(let i of this.customHandlers){let 
s=i.handlers.scroll;if(s)try{s.call(i.plugin,t,e)}catch(r){Ee(e.state,r)}}}keydown(e,t){if(this.lastKeyCode=t.keyCode,this.lastKeyTime=Date.now(),t.keyCode==9&&Date.now()s.keyCode==t.keyCode))&&!t.ctrlKey||Su.indexOf(t.key)>-1&&t.ctrlKey&&!t.shiftKey)?(this.pendingIOSKey=i||t,setTimeout(()=>this.flushIOSKey(e),250),!0):!1}flushIOSKey(e){let t=this.pendingIOSKey;return t?(this.pendingIOSKey=void 0,Kt(e.contentDOM,t.key,t.keyCode)):!1}ignoreDuringComposition(e){return/^key/.test(e.type)?this.composing>0?!0:A.safari&&!A.ios&&Date.now()-this.compositionEndedAt<100?(this.compositionEndedAt=0,!0):!1:!1}mustFlushObserver(e){return e.type=="keydown"&&e.keyCode!=229}startMouseSelection(e){this.mouseSelection&&this.mouseSelection.destroy(),this.mouseSelection=e}update(e){this.mouseSelection&&this.mouseSelection.update(e),e.transactions.length&&(this.lastKeyCode=this.lastSelectionTime=0)}destroy(){this.mouseSelection&&this.mouseSelection.destroy()}}const sh=[{key:"Backspace",keyCode:8,inputType:"deleteContentBackward"},{key:"Enter",keyCode:13,inputType:"insertParagraph"},{key:"Delete",keyCode:46,inputType:"deleteContentForward"}],Su="dthko",rh=[16,17,18,20,91,92,224,225];class Cu{constructor(e,t,i,s){this.view=e,this.style=i,this.mustSelect=s,this.lastEvent=t;let r=e.contentDOM.ownerDocument;r.addEventListener("mousemove",this.move=this.move.bind(this)),r.addEventListener("mouseup",this.up=this.up.bind(this)),this.extend=t.shiftKey,this.multiple=e.state.facet(N.allowMultipleSelections)&&Au(e,t),this.dragMove=Mu(e,t),this.dragging=Du(e,t)&&hh(t)==1?null:!1,this.dragging===!1&&(t.preventDefault(),this.select(t))}move(e){if(e.buttons==0)return this.destroy();this.dragging===!1&&this.select(this.lastEvent=e)}up(e){this.dragging==null&&this.select(this.lastEvent),this.dragging||e.preventDefault(),this.destroy()}destroy(){let 
e=this.view.contentDOM.ownerDocument;e.removeEventListener("mousemove",this.move),e.removeEventListener("mouseup",this.up),this.view.inputState.mouseSelection=null}select(e){let t=this.style.get(e,this.extend,this.multiple);(this.mustSelect||!t.eq(this.view.state.selection)||t.main.assoc!=this.view.state.selection.main.assoc)&&this.view.dispatch({selection:t,userEvent:"select.pointer",scrollIntoView:!0}),this.mustSelect=!1}update(e){e.docChanged&&this.dragging&&(this.dragging=this.dragging.map(e.changes)),this.style.update(e)&&setTimeout(()=>this.select(this.lastEvent),20)}}function Au(n,e){let t=n.state.facet(Ha);return t.length?t[0](e):A.mac?e.metaKey:e.ctrlKey}function Mu(n,e){let t=n.state.facet(Wa);return t.length?t[0](e):A.mac?!e.altKey:!e.ctrlKey}function Du(n,e){let{main:t}=n.state.selection;if(t.empty)return!1;let i=wn(n.root);if(!i||i.rangeCount==0)return!0;let s=i.getRangeAt(0).getClientRects();for(let r=0;r=e.clientX&&o.top<=e.clientY&&o.bottom>=e.clientY)return!0}return!1}function To(n,e){if(!e.bubbles)return!0;if(e.defaultPrevented)return!1;for(let t=e.target,i;t!=n.contentDOM;t=t.parentNode)if(!t||t.nodeType==11||(i=q.get(t))&&i.ignoreEvent(e))return!1;return!0}const ne=Object.create(null),Zs=Object.create(null),oh=A.ie&&A.ie_version<15||A.ios&&A.webkit_version<604;function Tu(n){let e=n.dom.parentNode;if(!e)return;let t=e.appendChild(document.createElement("textarea"));t.style.cssText="position: fixed; left: -10000px; top: 10px",t.focus(),setTimeout(()=>{n.focus(),t.remove(),lh(n,t.value)},50)}function lh(n,e){let{state:t}=n,i,s=1,r=t.toText(e),o=r.lines==t.selection.ranges.length;if(Qs!=null&&t.selection.ranges.every(a=>a.empty)&&Qs==r.toString()){let a=-1;i=t.changeByRange(h=>{let c=t.doc.lineAt(h.from);if(c.from==a)return{range:h};a=c.from;let f=t.toText((o?r.line(s++).text:e)+t.lineBreak);return{changes:{from:c.from,insert:f},range:w.cursor(h.from+f.length)}})}else o?i=t.changeByRange(a=>{let 
h=r.line(s++);return{changes:{from:a.from,to:a.to,insert:h.text},range:w.cursor(a.from+h.length)}}):i=t.replaceSelection(r);n.dispatch(i,{userEvent:"input.paste",scrollIntoView:!0})}ne.keydown=(n,e)=>{n.inputState.setSelectionOrigin("select"),e.keyCode==27?n.inputState.lastEscPress=Date.now():rh.indexOf(e.keyCode)<0&&(n.inputState.lastEscPress=0)};ne.touchstart=(n,e)=>{n.inputState.lastTouchTime=Date.now(),n.inputState.setSelectionOrigin("select.pointer")};ne.touchmove=n=>{n.inputState.setSelectionOrigin("select.pointer")};Zs.touchstart=Zs.touchmove={passive:!0};ne.mousedown=(n,e)=>{if(n.observer.flush(),n.inputState.lastTouchTime>Date.now()-2e3)return;let t=null;for(let i of n.state.facet(za))if(t=i(n,e),t)break;if(!t&&e.button==0&&(t=Pu(n,e)),t){let i=n.root.activeElement!=n.contentDOM;i&&n.observer.ignore(()=>Da(n.contentDOM)),n.inputState.startMouseSelection(new Cu(n,e,t,i))}};function Oo(n,e,t,i){if(i==1)return w.cursor(e,t);if(i==2)return pu(n.state,e,t);{let s=pe.find(n.docView,e),r=n.state.doc.lineAt(s?s.posAtEnd:e),o=s?s.posAtStart:r.from,l=s?s.posAtEnd:r.to;return ln>=e.top&&n<=e.bottom,Bo=(n,e,t)=>ah(e,t)&&n>=t.left&&n<=t.right;function Ou(n,e,t,i){let s=pe.find(n.docView,e);if(!s)return 1;let r=e-s.posAtStart;if(r==0)return 1;if(r==s.length)return-1;let o=s.coordsAt(r,-1);if(o&&Bo(t,i,o))return-1;let l=s.coordsAt(r,1);return l&&Bo(t,i,l)?1:o&&ah(i,o)?-1:1}function Po(n,e){let t=n.posAtCoords({x:e.clientX,y:e.clientY},!1);return{pos:t,bias:Ou(n,t,e.clientX,e.clientY)}}const Bu=A.ie&&A.ie_version<=11;let Eo=null,Ro=0,Lo=0;function hh(n){if(!Bu)return n.detail;let e=Eo,t=Lo;return Eo=n,Lo=Date.now(),Ro=!e||t>Date.now()-400&&Math.abs(e.clientX-n.clientX)<2&&Math.abs(e.clientY-n.clientY)<2?(Ro+1)%3:1}function Pu(n,e){let t=Po(n,e),i=hh(e),s=n.state.selection,r=t,o=e;return{update(l){l.docChanged&&(t.pos=l.changes.mapPos(t.pos),s=s.map(l.changes),o=null)},get(l,a,h){let c;o&&l.clientX==o.clientX&&l.clientY==o.clientY?c=r:(c=r=Po(n,l),o=l);let 
f=Oo(n,c.pos,c.bias,i);if(t.pos!=c.pos&&!a){let u=Oo(n,t.pos,t.bias,i),d=Math.min(u.from,f.from),p=Math.max(u.to,f.to);f=d1&&s.ranges.some(u=>u.eq(f))?Eu(s,f):h?s.addRange(f):w.create([f])}}}function Eu(n,e){for(let t=0;;t++)if(n.ranges[t].eq(e))return w.create(n.ranges.slice(0,t).concat(n.ranges.slice(t+1)),n.mainIndex==t?0:n.mainIndex-(n.mainIndex>t?1:0))}ne.dragstart=(n,e)=>{let{selection:{main:t}}=n.state,{mouseSelection:i}=n.inputState;i&&(i.dragging=t),e.dataTransfer&&(e.dataTransfer.setData("Text",n.state.sliceDoc(t.from,t.to)),e.dataTransfer.effectAllowed="copyMove")};function Io(n,e,t,i){if(!t)return;let s=n.posAtCoords({x:e.clientX,y:e.clientY},!1);e.preventDefault();let{mouseSelection:r}=n.inputState,o=i&&r&&r.dragging&&r.dragMove?{from:r.dragging.from,to:r.dragging.to}:null,l={from:s,insert:t},a=n.state.changes(o?[o,l]:l);n.focus(),n.dispatch({changes:a,selection:{anchor:a.mapPos(s,-1),head:a.mapPos(s,1)},userEvent:o?"move.drop":"input.drop"})}ne.drop=(n,e)=>{if(!e.dataTransfer)return;if(n.state.readOnly)return e.preventDefault();let t=e.dataTransfer.files;if(t&&t.length){e.preventDefault();let i=Array(t.length),s=0,r=()=>{++s==t.length&&Io(n,e,i.filter(o=>o!=null).join(n.state.lineBreak),!1)};for(let o=0;o{/[\x00-\x08\x0e-\x1f]{2}/.test(l.result)||(i[o]=l.result),r()},l.readAsText(t[o])}}else Io(n,e,e.dataTransfer.getData("Text"),!0)};ne.paste=(n,e)=>{if(n.state.readOnly)return e.preventDefault();n.observer.flush();let t=oh?null:e.clipboardData;t?(lh(n,t.getData("text/plain")),e.preventDefault()):Tu(n)};function Ru(n,e){let t=n.dom.parentNode;if(!t)return;let i=t.appendChild(document.createElement("textarea"));i.style.cssText="position: fixed; left: -10000px; top: 10px",i.value=e,i.focus(),i.selectionEnd=e.length,i.selectionStart=0,setTimeout(()=>{i.remove(),n.focus()},50)}function Lu(n){let e=[],t=[],i=!1;for(let s of n.selection.ranges)s.empty||(e.push(n.sliceDoc(s.from,s.to)),t.push(s));if(!e.length){let s=-1;for(let{from:r}of 
n.selection.ranges){let o=n.doc.lineAt(r);o.number>s&&(e.push(o.text),t.push({from:o.from,to:Math.min(n.doc.length,o.to+1)})),s=o.number}i=!0}return{text:e.join(n.lineBreak),ranges:t,linewise:i}}let Qs=null;ne.copy=ne.cut=(n,e)=>{let{text:t,ranges:i,linewise:s}=Lu(n.state);if(!t&&!s)return;Qs=s?t:null;let r=oh?null:e.clipboardData;r?(e.preventDefault(),r.clearData(),r.setData("text/plain",t)):Ru(n,t),e.type=="cut"&&!n.state.readOnly&&n.dispatch({changes:i,scrollIntoView:!0,userEvent:"delete.cut"})};function ch(n){setTimeout(()=>{n.hasFocus!=n.inputState.notifiedFocused&&n.update([])},10)}ne.focus=n=>{n.inputState.lastFocusTime=Date.now(),!n.scrollDOM.scrollTop&&(n.inputState.lastScrollTop||n.inputState.lastScrollLeft)&&(n.scrollDOM.scrollTop=n.inputState.lastScrollTop,n.scrollDOM.scrollLeft=n.inputState.lastScrollLeft),ch(n)};ne.blur=n=>{n.observer.clearSelectionRange(),ch(n)};ne.compositionstart=ne.compositionupdate=n=>{n.inputState.compositionFirstChange==null&&(n.inputState.compositionFirstChange=!0),n.inputState.composing<0&&(n.inputState.composing=0)};ne.compositionend=n=>{n.inputState.composing=-1,n.inputState.compositionEndedAt=Date.now(),n.inputState.compositionFirstChange=null,A.chrome&&A.android&&n.observer.flushSoon(),setTimeout(()=>{n.inputState.composing<0&&n.docView.compositionDeco.size&&n.update([])},50)};ne.contextmenu=n=>{n.inputState.lastContextMenu=Date.now()};ne.beforeinput=(n,e)=>{var t;let i;if(A.chrome&&A.android&&(i=sh.find(s=>s.inputType==e.inputType))&&(n.observer.delayAndroidKey(i.key,i.keyCode),i.key=="Backspace"||i.key=="Delete")){let s=((t=window.visualViewport)===null||t===void 0?void 0:t.height)||0;setTimeout(()=>{var r;(((r=window.visualViewport)===null||r===void 0?void 0:r.height)||0)>s+10&&n.hasFocus&&(n.contentDOM.blur(),n.focus())},100)}};const No=["pre-wrap","normal","pre-line","break-spaces"];class 
Iu{constructor(){this.doc=_.empty,this.lineWrapping=!1,this.heightSamples={},this.lineHeight=14,this.charWidth=7,this.lineLength=30,this.heightChanged=!1}heightForGap(e,t){let i=this.doc.lineAt(t).number-this.doc.lineAt(e).number+1;return this.lineWrapping&&(i+=Math.ceil((t-e-i*this.lineLength*.5)/this.lineLength)),this.lineHeight*i}heightForLine(e){return this.lineWrapping?(1+Math.max(0,Math.ceil((e-this.lineLength)/(this.lineLength-5))))*this.lineHeight:this.lineHeight}setDoc(e){return this.doc=e,this}mustRefreshForWrapping(e){return No.indexOf(e)>-1!=this.lineWrapping}mustRefreshForHeights(e){let t=!1;for(let i=0;i-1,l=Math.round(t)!=Math.round(this.lineHeight)||this.lineWrapping!=o;if(this.lineWrapping=o,this.lineHeight=t,this.charWidth=i,this.lineLength=s,l){this.heightSamples={};for(let a=0;a0}set outdated(e){this.flags=(e?2:0)|this.flags&-3}setHeight(e,t){this.height!=t&&(Math.abs(this.height-t)>hn&&(e.heightChanged=!0),this.height=t)}replace(e,t,i){return me.of(i)}decomposeLeft(e,t){t.push(this)}decomposeRight(e,t){t.push(this)}applyChanges(e,t,i,s){let r=this;for(let o=s.length-1;o>=0;o--){let{fromA:l,toA:a,fromB:h,toB:c}=s[o],f=r.lineAt(l,z.ByPosNoHeight,t,0,0),u=f.to>=a?f:r.lineAt(a,z.ByPosNoHeight,t,0,0);for(c+=u.to-a,a=u.to;o>0&&f.from<=s[o-1].toA;)l=s[o-1].fromA,h=s[o-1].fromB,o--,lr*2){let l=e[t-1];l.break?e.splice(--t,1,l.left,null,l.right):e.splice(--t,1,l.left,l.right),i+=1+l.break,s-=l.size}else if(r>s*2){let l=e[i];l.break?e.splice(i,1,l.left,null,l.right):e.splice(i,1,l.left,l.right),i+=2+l.break,r-=l.size}else break;else if(s=r&&o(this.blockAt(0,i,s,r))}updateHeight(e,t=0,i=!1,s){return s&&s.from<=t&&s.more&&this.setHeight(e,s.heights[s.index++]),this.outdated=!1,this}toString(){return`block(${this.length})`}}class we extends fh{constructor(e,t){super(e,t,H.Text),this.collapsed=0,this.widgetHeight=0}replace(e,t,i){let s=i[0];return i.length==1&&(s instanceof we||s instanceof re&&s.flags&4)&&Math.abs(this.length-s.length)<10?(s instanceof 
re?s=new we(s.length,this.height):s.height=this.height,this.outdated||(s.outdated=!1),s):me.of(i)}updateHeight(e,t=0,i=!1,s){return s&&s.from<=t&&s.more?this.setHeight(e,s.heights[s.index++]):(i||this.outdated)&&this.setHeight(e,Math.max(this.widgetHeight,e.heightForLine(this.length-this.collapsed))),this.outdated=!1,this}toString(){return`line(${this.length}${this.collapsed?-this.collapsed:""}${this.widgetHeight?":"+this.widgetHeight:""})`}}class re extends me{constructor(e){super(e,0)}lines(e,t){let i=e.lineAt(t).number,s=e.lineAt(t+this.length).number;return{firstLine:i,lastLine:s,lineHeight:this.height/(s-i+1)}}blockAt(e,t,i,s){let{firstLine:r,lastLine:o,lineHeight:l}=this.lines(t,s),a=Math.max(0,Math.min(o-r,Math.floor((e-i)/l))),{from:h,length:c}=t.line(r+a);return new ot(h,c,i+l*a,l,H.Text)}lineAt(e,t,i,s,r){if(t==z.ByHeight)return this.blockAt(e,i,s,r);if(t==z.ByPosNoHeight){let{from:f,to:u}=i.lineAt(e);return new ot(f,u-f,0,0,H.Text)}let{firstLine:o,lineHeight:l}=this.lines(i,r),{from:a,length:h,number:c}=i.lineAt(e);return new ot(a,h,s+l*(c-o),l,H.Text)}forEachLine(e,t,i,s,r,o){let{firstLine:l,lineHeight:a}=this.lines(i,r);for(let h=Math.max(e,r),c=Math.min(r+this.length,t);h<=c;){let f=i.lineAt(h);h==e&&(s+=a*(f.number-l)),o(new ot(f.from,f.length,s,a,H.Text)),s+=a,h=f.to+1}}replace(e,t,i){let s=this.length-t;if(s>0){let r=i[i.length-1];r instanceof re?i[i.length-1]=new re(r.length+s):i.push(null,new re(s-1))}if(e>0){let r=i[0];r instanceof re?i[0]=new re(e+r.length):i.unshift(new re(e-1),null)}return me.of(i)}decomposeLeft(e,t){t.push(new re(e-1),null)}decomposeRight(e,t){t.push(null,new re(this.length-e-1))}updateHeight(e,t=0,i=!1,s){let r=t+this.length;if(s&&s.from<=t+this.length&&s.more){let o=[],l=Math.max(t,s.from),a=-1,h=e.heightChanged;for(s.from>t&&o.push(new re(s.from-t-1).updateHeight(e,t));l<=r&&s.more;){let f=e.doc.lineAt(l).length;o.length&&o.push(null);let u=s.heights[s.index++];a==-1?a=u:Math.abs(u-a)>=hn&&(a=-2);let d=new 
we(f,u);d.outdated=!1,o.push(d),l+=f+1}l<=r&&o.push(null,new re(r-l).updateHeight(e,l));let c=me.of(o);return e.heightChanged=h||a<0||Math.abs(c.height-this.height)>=hn||Math.abs(a-this.lines(e.doc,t).lineHeight)>=hn,c}else(i||this.outdated)&&(this.setHeight(e,e.heightForGap(t,t+this.length)),this.outdated=!1);return this}toString(){return`gap(${this.length})`}}class _u extends me{constructor(e,t,i){super(e.length+t+i.length,e.height+i.height,t|(e.outdated||i.outdated?2:0)),this.left=e,this.right=i,this.size=e.size+i.size}get break(){return this.flags&1}blockAt(e,t,i,s){let r=i+this.left.height;return el))return h;let c=t==z.ByPosNoHeight?z.ByPosNoHeight:z.ByPos;return a?h.join(this.right.lineAt(l,c,i,o,l)):this.left.lineAt(l,c,i,s,r).join(h)}forEachLine(e,t,i,s,r,o){let l=s+this.left.height,a=r+this.left.length+this.break;if(this.break)e=a&&this.right.forEachLine(e,t,i,l,a,o);else{let h=this.lineAt(a,z.ByPos,i,s,r);e=e&&h.from<=t&&o(h),t>h.to&&this.right.forEachLine(h.to+1,t,i,l,a,o)}}replace(e,t,i){let s=this.left.length+this.break;if(tthis.left.length)return this.balanced(this.left,this.right.replace(e-s,t-s,i));let r=[];e>0&&this.decomposeLeft(e,r);let o=r.length;for(let l of i)r.push(l);if(e>0&&_o(r,o-1),t=i&&t.push(null)),e>i&&this.right.decomposeLeft(e-i,t)}decomposeRight(e,t){let i=this.left.length,s=i+this.break;if(e>=s)return this.right.decomposeRight(e-s,t);e2*t.size||t.size>2*e.size?me.of(this.break?[e,null,t]:[e,t]):(this.left=e,this.right=t,this.height=e.height+t.height,this.outdated=e.outdated||t.outdated,this.size=e.size+t.size,this.length=e.length+this.break+t.length,this)}updateHeight(e,t=0,i=!1,s){let{left:r,right:o}=this,l=t+r.length+this.break,a=null;return s&&s.from<=t+r.length&&s.more?a=r=r.updateHeight(e,t,i,s):r.updateHeight(e,t,i),s&&s.from<=l+o.length&&s.more?a=o=o.updateHeight(e,l,i,s):o.updateHeight(e,l,i),a?this.balanced(r,o):(this.height=this.left.height+this.right.height,this.outdated=!1,this)}toString(){return 
this.left+(this.break?" ":"-")+this.right}}function _o(n,e){let t,i;n[e]==null&&(t=n[e-1])instanceof re&&(i=n[e+1])instanceof re&&n.splice(e-1,3,new re(t.length+1+i.length))}const Vu=5;class Dr{constructor(e,t){this.pos=e,this.oracle=t,this.nodes=[],this.lineStart=-1,this.lineEnd=-1,this.covering=null,this.writtenTo=e}get isCovered(){return this.covering&&this.nodes[this.nodes.length-1]==this.covering}span(e,t){if(this.lineStart>-1){let i=Math.min(t,this.lineEnd),s=this.nodes[this.nodes.length-1];s instanceof we?s.length+=i-this.pos:(i>this.pos||!this.isCovered)&&this.nodes.push(new we(i-this.pos,-1)),this.writtenTo=i,t>i&&(this.nodes.push(null),this.writtenTo++,this.lineStart=-1)}this.pos=t}point(e,t,i){if(e=Vu)&&this.addLineDeco(s,r)}else t>e&&this.span(e,t);this.lineEnd>-1&&this.lineEnd-1)return;let{from:e,to:t}=this.oracle.doc.lineAt(this.pos);this.lineStart=e,this.lineEnd=t,this.writtenToe&&this.nodes.push(new we(this.pos-e,-1)),this.writtenTo=this.pos}blankContent(e,t){let i=new re(t-e);return this.oracle.doc.lineAt(e).to==t&&(i.flags|=4),i}ensureLine(){this.enterLine();let e=this.nodes.length?this.nodes[this.nodes.length-1]:null;if(e instanceof we)return e;let t=new we(0,-1);return this.nodes.push(t),t}addBlock(e){this.enterLine(),e.type==H.WidgetAfter&&!this.isCovered&&this.ensureLine(),this.nodes.push(e),this.writtenTo=this.pos=this.pos+e.length,e.type!=H.WidgetBefore&&(this.covering=e)}addLineDeco(e,t){let i=this.ensureLine();i.length+=t,i.collapsed+=t,i.widgetHeight=Math.max(i.widgetHeight,e),this.writtenTo=this.pos=this.pos+t}finish(e){let t=this.nodes.length==0?null:this.nodes[this.nodes.length-1];this.lineStart>-1&&!(t instanceof we)&&!this.isCovered?this.nodes.push(new we(0,-1)):(this.writtenToc.clientHeight||c.scrollWidth>c.clientWidth)&&f.overflow!="visible"){let 
u=c.getBoundingClientRect();r=Math.max(r,u.left),o=Math.min(o,u.right),l=Math.max(l,u.top),a=h==n.parentNode?u.bottom:Math.min(a,u.bottom)}h=f.position=="absolute"||f.position=="fixed"?c.offsetParent:c.parentNode}else if(h.nodeType==11)h=h.host;else break;return{left:r-t.left,right:Math.max(r,o)-t.left,top:l-(t.top+e),bottom:Math.max(l,a)-(t.top+e)}}function zu(n,e){let t=n.getBoundingClientRect();return{left:0,right:t.right-t.left,top:e,bottom:t.bottom-(t.top+e)}}class is{constructor(e,t,i){this.from=e,this.to=t,this.size=i}static same(e,t){if(e.length!=t.length)return!1;for(let i=0;itypeof t!="function"),this.heightMap=me.empty().applyChanges(this.stateDeco,_.empty,this.heightOracle.setDoc(e.doc),[new Ke(0,0,0,e.doc.length)]),this.viewport=this.getViewport(0,null),this.updateViewportLines(),this.updateForViewport(),this.lineGaps=this.ensureLineGaps([]),this.lineGapDeco=E.set(this.lineGaps.map(t=>t.draw(!1))),this.computeVisibleRanges()}updateForViewport(){let e=[this.viewport],{main:t}=this.state.selection;for(let i=0;i<=1;i++){let s=i?t.head:t.anchor;if(!e.some(({from:r,to:o})=>s>=r&&s<=o)){let{from:r,to:o}=this.lineBlockAt(s);e.push(new ji(r,o))}}this.viewports=e.sort((i,s)=>i.from-s.from),this.scaler=this.heightMap.height<=7e6?Fo:new $u(this.heightOracle.doc,this.heightMap,this.viewports)}updateViewportLines(){this.viewportLines=[],this.heightMap.forEachLine(this.viewport.from,this.viewport.to,this.state.doc,0,0,e=>{this.viewportLines.push(this.scaler.scale==1?e:di(e,this.scaler))})}update(e,t=null){this.state=e.state;let i=this.stateDeco;this.stateDeco=this.state.facet(Di).filter(h=>typeof h!="function");let s=e.changedRanges,r=Ke.extendWithRanges(s,Fu(i,this.stateDeco,e?e.changes:te.empty(this.state.doc.length))),o=this.heightMap.height;this.heightMap=this.heightMap.applyChanges(this.stateDeco,e.startState.doc,this.heightOracle.setDoc(this.state.doc),r),this.heightMap.height!=o&&(e.flags|=2);let 
l=r.length?this.mapViewport(this.viewport,e.changes):this.viewport;(t&&(t.range.headl.to)||!this.viewportIsAppropriate(l))&&(l=this.getViewport(0,t));let a=!e.changes.empty||e.flags&2||l.from!=this.viewport.from||l.to!=this.viewport.to;this.viewport=l,this.updateForViewport(),a&&this.updateViewportLines(),(this.lineGaps.length||this.viewport.to-this.viewport.from>4e3)&&this.updateLineGaps(this.ensureLineGaps(this.mapLineGaps(this.lineGaps,e.changes))),e.flags|=this.computeVisibleRanges(),t&&(this.scrollTarget=t),!this.mustEnforceCursorAssoc&&e.selectionSet&&e.view.lineWrapping&&e.state.selection.main.empty&&e.state.selection.main.assoc&&!e.state.facet($a)&&(this.mustEnforceCursorAssoc=!0)}measure(e){let t=e.contentDOM,i=window.getComputedStyle(t),s=this.heightOracle,r=i.whiteSpace;this.defaultTextDirection=i.direction=="rtl"?Y.RTL:Y.LTR;let o=this.heightOracle.mustRefreshForWrapping(r),l=o||this.mustMeasureContent||this.contentDOMHeight!=t.clientHeight;this.contentDOMHeight=t.clientHeight,this.mustMeasureContent=!1;let a=0,h=0,c=parseInt(i.paddingTop)||0,f=parseInt(i.paddingBottom)||0;(this.paddingTop!=c||this.paddingBottom!=f)&&(this.paddingTop=c,this.paddingBottom=f,a|=10),this.editorWidth!=e.scrollDOM.clientWidth&&(s.lineWrapping&&(l=!0),this.editorWidth=e.scrollDOM.clientWidth,a|=8);let u=(this.printing?zu:Wu)(t,this.paddingTop),d=u.top-this.pixelViewport.top,p=u.bottom-this.pixelViewport.bottom;this.pixelViewport=u;let g=this.pixelViewport.bottom>this.pixelViewport.top&&this.pixelViewport.right>this.pixelViewport.left;if(g!=this.inView&&(this.inView=g,g&&(l=!0)),!this.inView&&!this.scrollTarget)return 0;let y=t.clientWidth;if((this.contentDOMWidth!=y||this.editorHeight!=e.scrollDOM.clientHeight)&&(this.contentDOMWidth=y,this.editorHeight=e.scrollDOM.clientHeight,a|=8),l){let 
v=e.docView.measureVisibleLineHeights(this.viewport);if(s.mustRefreshForHeights(v)&&(o=!0),o||s.lineWrapping&&Math.abs(y-this.contentDOMWidth)>s.charWidth){let{lineHeight:S,charWidth:k}=e.docView.measureTextSize();o=S>0&&s.refresh(r,S,k,y/k,v),o&&(e.docView.minWidth=0,a|=8)}d>0&&p>0?h=Math.max(d,p):d<0&&p<0&&(h=Math.min(d,p)),s.heightChanged=!1;for(let S of this.viewports){let k=S.from==this.viewport.from?v:e.docView.measureVisibleLineHeights(S);this.heightMap=o?me.empty().applyChanges(this.stateDeco,_.empty,this.heightOracle,[new Ke(0,0,0,e.state.doc.length)]):this.heightMap.updateHeight(s,0,o,new Nu(S.from,k))}s.heightChanged&&(a|=2)}let b=!this.viewportIsAppropriate(this.viewport,h)||this.scrollTarget&&(this.scrollTarget.range.headthis.viewport.to);return b&&(this.viewport=this.getViewport(h,this.scrollTarget)),this.updateForViewport(),(a&2||b)&&this.updateViewportLines(),(this.lineGaps.length||this.viewport.to-this.viewport.from>4e3)&&this.updateLineGaps(this.ensureLineGaps(o?[]:this.lineGaps,e)),a|=this.computeVisibleRanges(),this.mustEnforceCursorAssoc&&(this.mustEnforceCursorAssoc=!1,e.docView.enforceCursorAssoc()),a}get visibleTop(){return this.scaler.fromDOM(this.pixelViewport.top)}get visibleBottom(){return this.scaler.fromDOM(this.pixelViewport.bottom)}getViewport(e,t){let i=.5-Math.max(-.5,Math.min(.5,e/1e3/2)),s=this.heightMap,r=this.state.doc,{visibleTop:o,visibleBottom:l}=this,a=new ji(s.lineAt(o-i*1e3,z.ByHeight,r,0,0).from,s.lineAt(l+(1-i)*1e3,z.ByHeight,r,0,0).to);if(t){let{head:h}=t.range;if(ha.to){let c=Math.min(this.editorHeight,this.pixelViewport.bottom-this.pixelViewport.top),f=s.lineAt(h,z.ByPos,r,0,0),u;t.y=="center"?u=(f.top+f.bottom)/2-c/2:t.y=="start"||t.y=="nearest"&&h=l+Math.max(10,Math.min(i,250)))&&s>o-2*1e3&&r>1,o=s<<1;if(this.defaultTextDirection!=Y.LTR&&!i)return[];let l=[],a=(h,c,f,u)=>{if(c-hh&&yy.from>=f.from&&y.to<=f.to&&Math.abs(y.from-h)y.fromb));if(!g){if(cy.from<=c&&y.to>=c)){let 
y=t.moveToLineBoundary(w.cursor(c),!1,!0).head;y>h&&(c=y)}g=new is(h,c,this.gapSize(f,h,c,u))}l.push(g)};for(let h of this.viewportLines){if(h.lengthh.from&&a(h.from,u,h,c),dt.draw(this.heightOracle.lineWrapping))))}computeVisibleRanges(){let e=this.stateDeco;this.lineGaps.length&&(e=e.concat(this.lineGapDeco));let t=[];F.spans(e,this.viewport.from,this.viewport.to,{span(s,r){t.push({from:s,to:r})},point(){}},20);let i=t.length!=this.visibleRanges.length||this.visibleRanges.some((s,r)=>s.from!=t[r].from||s.to!=t[r].to);return this.visibleRanges=t,i?4:0}lineBlockAt(e){return e>=this.viewport.from&&e<=this.viewport.to&&this.viewportLines.find(t=>t.from<=e&&t.to>=e)||di(this.heightMap.lineAt(e,z.ByPos,this.state.doc,0,0),this.scaler)}lineBlockAtHeight(e){return di(this.heightMap.lineAt(this.scaler.fromDOM(e),z.ByHeight,this.state.doc,0,0),this.scaler)}elementAtHeight(e){return di(this.heightMap.blockAt(this.scaler.fromDOM(e),this.state.doc,0,0),this.scaler)}get docHeight(){return this.scaler.toDOM(this.heightMap.height)}get contentHeight(){return this.docHeight+this.paddingTop+this.paddingBottom}}class ji{constructor(e,t){this.from=e,this.to=t}}function ju(n,e,t){let i=[],s=n,r=0;return F.spans(t,n,e,{span(){},point(o,l){o>s&&(i.push({from:s,to:o}),r+=o-s),s=l}},20),s=1)return e[e.length-1].to;let i=Math.floor(n*t);for(let s=0;;s++){let{from:r,to:o}=e[s],l=o-r;if(i<=l)return r+i;i-=l}}function $i(n,e){let t=0;for(let{from:i,to:s}of n.ranges){if(e<=s){t+=e-i;break}t+=s-i}return t/n.total}function Ku(n,e){for(let t of n)if(e(t))return t}const Fo={toDOM(n){return n},fromDOM(n){return n},scale:1};class $u{constructor(e,t,i){let s=0,r=0,o=0;this.viewports=i.map(({from:l,to:a})=>{let h=t.lineAt(l,z.ByPos,e,0,0).top,c=t.lineAt(a,z.ByPos,e,0,0).bottom;return s+=c-h,{from:l,to:a,top:h,bottom:c,domTop:0,domBottom:0}}),this.scale=(7e6-s)/(t.height-s);for(let l of this.viewports)l.domTop=o+(l.top-r)*this.scale,o=l.domBottom=l.domTop+(l.bottom-l.top),r=l.bottom}toDOM(e){for(let 
t=0,i=0,s=0;;t++){let r=tdi(s,e)):n.type)}const Ui=D.define({combine:n=>n.join(" ")}),er=D.define({combine:n=>n.indexOf(!0)>-1}),tr=lt.newName(),uh=lt.newName(),dh=lt.newName(),ph={"&light":"."+uh,"&dark":"."+dh};function ir(n,e,t){return new lt(e,{finish(i){return/&/.test(i)?i.replace(/&\w*/,s=>{if(s=="&")return n;if(!t||!t[s])throw new RangeError(`Unsupported selector: ${s}`);return t[s]}):n+" "+i}})}const Uu=ir("."+tr,{"&.cm-editor":{position:"relative !important",boxSizing:"border-box","&.cm-focused":{outline:"1px dotted #212121"},display:"flex !important",flexDirection:"column"},".cm-scroller":{display:"flex !important",alignItems:"flex-start !important",fontFamily:"monospace",lineHeight:1.4,height:"100%",overflowX:"auto",position:"relative",zIndex:0},".cm-content":{margin:0,flexGrow:2,flexShrink:0,minHeight:"100%",display:"block",whiteSpace:"pre",wordWrap:"normal",boxSizing:"border-box",padding:"4px 0",outline:"none","&[contenteditable=true]":{WebkitUserModify:"read-write-plaintext-only"}},".cm-lineWrapping":{whiteSpace_fallback:"pre-wrap",whiteSpace:"break-spaces",wordBreak:"break-word",overflowWrap:"anywhere",flexShrink:1},"&light .cm-content":{caretColor:"black"},"&dark .cm-content":{caretColor:"white"},".cm-line":{display:"block",padding:"0 2px 0 4px"},".cm-selectionLayer":{zIndex:-1,contain:"size style"},".cm-selectionBackground":{position:"absolute"},"&light .cm-selectionBackground":{background:"#d9d9d9"},"&dark .cm-selectionBackground":{background:"#222"},"&light.cm-focused .cm-selectionBackground":{background:"#d7d4f0"},"&dark.cm-focused .cm-selectionBackground":{background:"#233"},".cm-cursorLayer":{zIndex:100,contain:"size style",pointerEvents:"none"},"&.cm-focused .cm-cursorLayer":{animation:"steps(1) cm-blink 1.2s infinite"},"@keyframes cm-blink":{"0%":{},"50%":{opacity:0},"100%":{}},"@keyframes cm-blink2":{"0%":{},"50%":{opacity:0},"100%":{}},".cm-cursor, .cm-dropCursor":{position:"absolute",borderLeft:"1.2px solid 
black",marginLeft:"-0.6px",pointerEvents:"none"},".cm-cursor":{display:"none"},"&dark .cm-cursor":{borderLeftColor:"#444"},"&.cm-focused .cm-cursor":{display:"block"},"&light .cm-activeLine":{backgroundColor:"#cceeff44"},"&dark .cm-activeLine":{backgroundColor:"#99eeff33"},"&light .cm-specialChar":{color:"red"},"&dark .cm-specialChar":{color:"#f78"},".cm-gutters":{flexShrink:0,display:"flex",height:"100%",boxSizing:"border-box",left:0,zIndex:200},"&light .cm-gutters":{backgroundColor:"#f5f5f5",color:"#6c6c6c",borderRight:"1px solid #ddd"},"&dark .cm-gutters":{backgroundColor:"#333338",color:"#ccc"},".cm-gutter":{display:"flex !important",flexDirection:"column",flexShrink:0,boxSizing:"border-box",minHeight:"100%",overflow:"hidden"},".cm-gutterElement":{boxSizing:"border-box"},".cm-lineNumbers .cm-gutterElement":{padding:"0 3px 0 5px",minWidth:"20px",textAlign:"right",whiteSpace:"nowrap"},"&light .cm-activeLineGutter":{backgroundColor:"#e2f2ff"},"&dark .cm-activeLineGutter":{backgroundColor:"#222227"},".cm-panels":{boxSizing:"border-box",position:"sticky",left:0,right:0},"&light .cm-panels":{backgroundColor:"#f5f5f5",color:"black"},"&light .cm-panels-top":{borderBottom:"1px solid #ddd"},"&light .cm-panels-bottom":{borderTop:"1px solid #ddd"},"&dark .cm-panels":{backgroundColor:"#333338",color:"white"},".cm-tab":{display:"inline-block",overflow:"hidden",verticalAlign:"bottom"},".cm-widgetBuffer":{verticalAlign:"text-top",height:"1em",width:0,display:"inline"},".cm-placeholder":{color:"#888",display:"inline-block",verticalAlign:"top"},".cm-button":{verticalAlign:"middle",color:"inherit",fontSize:"70%",padding:".2em 1em",borderRadius:"1px"},"&light .cm-button":{backgroundImage:"linear-gradient(#eff1f5, #d9d9df)",border:"1px solid #888","&:active":{backgroundImage:"linear-gradient(#b4b4b4, #d0d3d6)"}},"&dark .cm-button":{backgroundImage:"linear-gradient(#393939, #111)",border:"1px solid #888","&:active":{backgroundImage:"linear-gradient(#111, 
#333)"}},".cm-textfield":{verticalAlign:"middle",color:"inherit",fontSize:"70%",border:"1px solid silver",padding:".2em .5em"},"&light .cm-textfield":{backgroundColor:"white"},"&dark .cm-textfield":{border:"1px solid #555",backgroundColor:"inherit"}},ph);class Gu{constructor(e,t,i,s){this.typeOver=s,this.bounds=null,this.text="";let{impreciseHead:r,impreciseAnchor:o}=e.docView;if(t>-1&&!e.state.readOnly&&(this.bounds=e.docView.domBoundsAround(t,i,0))){let l=r||o?[]:Yu(e),a=new eh(l,e.state);a.readRange(this.bounds.startDOM,this.bounds.endDOM),this.text=a.text,this.newSel=Xu(l,this.bounds.from)}else{let l=e.observer.selectionRange,a=r&&r.node==l.focusNode&&r.offset==l.focusOffset||!Ut(e.contentDOM,l.focusNode)?e.state.selection.main.head:e.docView.posFromDOM(l.focusNode,l.focusOffset),h=o&&o.node==l.anchorNode&&o.offset==l.anchorOffset||!Ut(e.contentDOM,l.anchorNode)?e.state.selection.main.anchor:e.docView.posFromDOM(l.anchorNode,l.anchorOffset);this.newSel=w.single(h,a)}}}function mh(n,e){let t,{newSel:i}=e,s=n.state.selection.main;if(e.bounds){let{from:r,to:o}=e.bounds,l=s.from,a=null;(n.inputState.lastKeyCode===8&&n.inputState.lastKeyTime>Date.now()-100||A.android&&e.text.length=s.from&&t.to<=s.to&&(t.from!=s.from||t.to!=s.to)&&s.to-s.from-(t.to-t.from)<=4?t={from:s.from,to:s.to,insert:n.state.doc.slice(s.from,t.from).append(t.insert).append(n.state.doc.slice(t.to,s.to))}:(A.mac||A.android)&&t&&t.from==t.to&&t.from==s.head-1&&/^\. 
?$/.test(t.insert.toString())?(i&&t.insert.length==2&&(i=w.single(i.main.anchor-1,i.main.head-1)),t={from:s.from,to:s.to,insert:_.of([" "])}):A.chrome&&t&&t.from==t.to&&t.from==s.head&&t.insert.toString()==` - `&&n.lineWrapping&&(i&&(i=w.single(i.main.anchor-1,i.main.head-1)),t={from:s.from,to:s.to,insert:_.of([" "])}),t){let r=n.state;if(A.ios&&n.inputState.flushIOSKey(n)||A.android&&(t.from==s.from&&t.to==s.to&&t.insert.length==1&&t.insert.lines==2&&Kt(n.contentDOM,"Enter",13)||t.from==s.from-1&&t.to==s.to&&t.insert.length==0&&Kt(n.contentDOM,"Backspace",8)||t.from==s.from&&t.to==s.to+1&&t.insert.length==0&&Kt(n.contentDOM,"Delete",46)))return!0;let o=t.insert.toString();if(n.state.facet(ja).some(h=>h(n,t.from,t.to,o)))return!0;n.inputState.composing>=0&&n.inputState.composing++;let l;if(t.from>=s.from&&t.to<=s.to&&t.to-t.from>=(s.to-s.from)/3&&(!i||i.main.empty&&i.main.from==t.from+t.insert.length)&&n.inputState.composing<0){let h=s.fromt.to?r.sliceDoc(t.to,s.to):"";l=r.replaceSelection(n.state.toText(h+t.insert.sliceString(0,void 0,n.state.lineBreak)+c))}else{let h=r.changes(t),c=i&&!r.selection.main.eq(i.main)&&i.main.to<=h.newLength?i.main:void 0;if(r.selection.ranges.length>1&&n.inputState.composing>=0&&t.to<=s.to&&t.to>=s.to-10){let f=n.state.sliceDoc(t.from,t.to),u=th(n)||n.state.doc.lineAt(s.head),d=s.to-t.to,p=s.to-s.from;l=r.changeByRange(g=>{if(g.from==s.from&&g.to==s.to)return{changes:h,range:c||g.map(h)};let y=g.to-d,b=y-f.length;if(g.to-g.from!=p||n.state.sliceDoc(b,y)!=f||u&&g.to>=u.from&&g.from<=u.to)return{range:g};let v=r.changes({from:b,to:y,insert:t.insert}),S=g.to-s.to;return{changes:v,range:c?w.range(Math.max(0,c.anchor+S),Math.max(0,c.head+S)):g.map(v)}})}else l={changes:h,selection:c&&r.selection.replaceRange(c)}}let a="input.type";return n.composing&&(a+=".compose",n.inputState.compositionFirstChange&&(a+=".start",n.inputState.compositionFirstChange=!1)),n.dispatch(l,{scrollIntoView:!0,userEvent:a}),!0}else if(i&&!i.main.eq(s)){let 
r=!1,o="select";return n.inputState.lastSelectionTime>Date.now()-50&&(n.inputState.lastSelectionOrigin=="select"&&(r=!0),o=n.inputState.lastSelectionOrigin),n.dispatch({selection:i,scrollIntoView:r,userEvent:o}),!0}else return!1}function Ju(n,e,t,i){let s=Math.min(n.length,e.length),r=0;for(;r0&&l>0&&n.charCodeAt(o-1)==e.charCodeAt(l-1);)o--,l--;if(i=="end"){let a=Math.max(0,r-Math.min(o,l));t-=o+a-r}if(o=o?r-t:0;r-=a,l=r+(l-o),o=r}else if(l=l?r-t:0;r-=a,o=r+(o-l),l=r}return{from:r,toA:o,toB:l}}function Yu(n){let e=[];if(n.root.activeElement!=n.contentDOM)return e;let{anchorNode:t,anchorOffset:i,focusNode:s,focusOffset:r}=n.observer.selectionRange;return t&&(e.push(new ko(t,i)),(s!=t||r!=i)&&e.push(new ko(s,r))),e}function Xu(n,e){if(n.length==0)return null;let t=n[0].pos,i=n.length==2?n[1].pos:t;return t>-1&&i>-1?w.single(t+e,i+e):null}const Zu={childList:!0,characterData:!0,subtree:!0,attributes:!0,characterDataOldValue:!0},ns=A.ie&&A.ie_version<=11;class Qu{constructor(e){this.view=e,this.active=!1,this.selectionRange=new $f,this.selectionChanged=!1,this.delayedFlush=-1,this.resizeTimeout=-1,this.queue=[],this.delayedAndroidKey=null,this.flushingAndroidKey=-1,this.lastChange=0,this.scrollTargets=[],this.intersection=null,this.resize=null,this.intersecting=!1,this.gapIntersection=null,this.gaps=[],this.parentCheck=-1,this.dom=e.contentDOM,this.observer=new MutationObserver(t=>{for(let i of t)this.queue.push(i);(A.ie&&A.ie_version<=11||A.ios&&e.composing)&&t.some(i=>i.type=="childList"&&i.removedNodes.length||i.type=="characterData"&&i.oldValue.length>i.target.nodeValue.length)?this.flushSoon():this.flush()}),ns&&(this.onCharData=t=>{this.queue.push({target:t.target,type:"characterData",oldValue:t.prevValue}),this.flushSoon()}),this.onSelectionChange=this.onSelectionChange.bind(this),this.onResize=this.onResize.bind(this),this.onPrint=this.onPrint.bind(this),this.onScroll=this.onScroll.bind(this),typeof ResizeObserver=="function"&&(this.resize=new 
ResizeObserver(()=>{var t;((t=this.view.docView)===null||t===void 0?void 0:t.lastUpdate){this.parentCheck<0&&(this.parentCheck=setTimeout(this.listenForScroll.bind(this),1e3)),t.length>0&&t[t.length-1].intersectionRatio>0!=this.intersecting&&(this.intersecting=!this.intersecting,this.intersecting!=this.view.inView&&this.onScrollChanged(document.createEvent("Event")))},{}),this.intersection.observe(this.dom),this.gapIntersection=new IntersectionObserver(t=>{t.length>0&&t[t.length-1].intersectionRatio>0&&this.onScrollChanged(document.createEvent("Event"))},{})),this.listenForScroll(),this.readSelectionRange()}onScrollChanged(e){this.view.inputState.runScrollHandlers(this.view,e),this.intersecting&&this.view.measure()}onScroll(e){this.intersecting&&this.flush(!1),this.onScrollChanged(e)}onResize(){this.resizeTimeout<0&&(this.resizeTimeout=setTimeout(()=>{this.resizeTimeout=-1,this.view.requestMeasure()},50))}onPrint(){this.view.viewState.printing=!0,this.view.measure(),setTimeout(()=>{this.view.viewState.printing=!1,this.view.requestMeasure()},500)}updateGaps(e){if(this.gapIntersection&&(e.length!=this.gaps.length||this.gaps.some((t,i)=>t!=e[i]))){this.gapIntersection.disconnect();for(let t of e)this.gapIntersection.observe(t);this.gaps=e}}onSelectionChange(e){let t=this.selectionChanged;if(!this.readSelectionRange()||this.delayedAndroidKey)return;let{view:i}=this,s=this.selectionRange;if(i.state.facet(_n)?i.root.activeElement!=this.dom:!an(i.dom,s))return;let r=s.anchorNode&&i.docView.nearest(s.anchorNode);if(r&&r.ignoreEvent(e)){t||(this.selectionChanged=!1);return}(A.ie&&A.ie_version<=11||A.android&&A.chrome)&&!i.state.selection.main.empty&&s.focusNode&&kn(s.focusNode,s.focusOffset,s.anchorNode,s.anchorOffset)?this.flushSoon():this.flush(!1)}readSelectionRange(){let{view:e}=this,t=A.safari&&e.root.nodeType==11&&qf(this.dom.ownerDocument)==this.dom&&ed(this.view)||wn(e.root);if(!t||this.selectionRange.eq(t))return!1;let i=an(this.dom,t);return 
i&&!this.selectionChanged&&e.inputState.lastFocusTime>Date.now()-200&&e.inputState.lastTouchTime{let r=this.delayedAndroidKey;r&&(this.clearDelayedAndroidKey(),!this.flush()&&r.force&&Kt(this.dom,r.key,r.keyCode))};this.flushingAndroidKey=this.view.win.requestAnimationFrame(s)}(!this.delayedAndroidKey||e=="Enter")&&(this.delayedAndroidKey={key:e,keyCode:t,force:this.lastChange{this.delayedFlush=-1,this.flush()}))}forceFlush(){this.delayedFlush>=0&&(this.view.win.cancelAnimationFrame(this.delayedFlush),this.delayedFlush=-1),this.flush()}processRecords(){let e=this.queue;for(let r of this.observer.takeRecords())e.push(r);e.length&&(this.queue=[]);let t=-1,i=-1,s=!1;for(let r of e){let o=this.readMutation(r);o&&(o.typeOver&&(s=!0),t==-1?{from:t,to:i}=o:(t=Math.min(o.from,t),i=Math.max(o.to,i)))}return{from:t,to:i,typeOver:s}}readChange(){let{from:e,to:t,typeOver:i}=this.processRecords(),s=this.selectionChanged&&an(this.dom,this.selectionRange);return e<0&&!s?null:(e>-1&&(this.lastChange=Date.now()),this.view.inputState.lastFocusTime=0,this.selectionChanged=!1,new Gu(this.view,e,t,i))}flush(e=!0){if(this.delayedFlush>=0||this.delayedAndroidKey)return!1;e&&this.readSelectionRange();let t=this.readChange();if(!t)return!1;let i=this.view.state,s=mh(this.view,t);return this.view.state==i&&this.view.update([]),s}readMutation(e){let t=this.view.docView.nearest(e.target);if(!t||t.ignoreMutation(e))return null;if(t.markDirty(e.type=="attributes"),e.type=="attributes"&&(t.dirty|=4),e.type=="childList"){let i=Ho(t,e.previousSibling||e.target.previousSibling,-1),s=Ho(t,e.nextSibling||e.target.nextSibling,1);return{from:i?t.posAfter(i):t.posAtStart,to:s?t.posBefore(s):t.posAtEnd,typeOver:!1}}else return 
e.type=="characterData"?{from:t.posAtStart,to:t.posAtEnd,typeOver:e.target.nodeValue==e.oldValue}:null}setWindow(e){e!=this.win&&(this.removeWindowListeners(this.win),this.win=e,this.addWindowListeners(this.win))}addWindowListeners(e){e.addEventListener("resize",this.onResize),e.addEventListener("beforeprint",this.onPrint),e.addEventListener("scroll",this.onScroll),e.document.addEventListener("selectionchange",this.onSelectionChange)}removeWindowListeners(e){e.removeEventListener("scroll",this.onScroll),e.removeEventListener("resize",this.onResize),e.removeEventListener("beforeprint",this.onPrint),e.document.removeEventListener("selectionchange",this.onSelectionChange)}destroy(){var e,t,i;this.stop(),(e=this.intersection)===null||e===void 0||e.disconnect(),(t=this.gapIntersection)===null||t===void 0||t.disconnect(),(i=this.resize)===null||i===void 0||i.disconnect();for(let s of this.scrollTargets)s.removeEventListener("scroll",this.onScroll);this.removeWindowListeners(this.win),clearTimeout(this.parentCheck),clearTimeout(this.resizeTimeout),this.win.cancelAnimationFrame(this.delayedFlush),this.win.cancelAnimationFrame(this.flushingAndroidKey)}}function Ho(n,e,t){for(;e;){let i=q.get(e);if(i&&i.parent==n)return i;let s=e.parentNode;e=s!=n.dom?s:t>0?e.nextSibling:e.previousSibling}return null}function ed(n){let e=null;function t(a){a.preventDefault(),a.stopImmediatePropagation(),e=a.getTargetRanges()[0]}if(n.contentDOM.addEventListener("beforeinput",t,!0),n.dom.ownerDocument.execCommand("indent"),n.contentDOM.removeEventListener("beforeinput",t,!0),!e)return null;let i=e.startContainer,s=e.startOffset,r=e.endContainer,o=e.endOffset,l=n.docView.domAtPos(n.state.selection.main.anchor);return kn(l.node,l.offset,r,o)&&([i,s,r,o]=[r,o,i,s]),{anchorNode:i,anchorOffset:s,focusNode:r,focusOffset:o}}class O{constructor(e={}){this.plugins=[],this.pluginMap=new 
Map,this.editorAttrs={},this.contentAttrs={},this.bidiCache=[],this.destroyed=!1,this.updateState=2,this.measureScheduled=-1,this.measureRequests=[],this.contentDOM=document.createElement("div"),this.scrollDOM=document.createElement("div"),this.scrollDOM.tabIndex=-1,this.scrollDOM.className="cm-scroller",this.scrollDOM.appendChild(this.contentDOM),this.announceDOM=document.createElement("div"),this.announceDOM.style.cssText="position: absolute; top: -10000px",this.announceDOM.setAttribute("aria-live","polite"),this.dom=document.createElement("div"),this.dom.appendChild(this.announceDOM),this.dom.appendChild(this.scrollDOM),this._dispatch=e.dispatch||(t=>this.update([t])),this.dispatch=this.dispatch.bind(this),this._root=e.root||Uf(e.parent)||document,this.viewState=new Vo(e.state||N.create(e)),this.plugins=this.state.facet(fi).map(t=>new Qn(t));for(let t of this.plugins)t.update(this);this.observer=new Qu(this),this.inputState=new xu(this),this.inputState.ensureHandlers(this,this.plugins),this.docView=new vo(this),this.mountStyles(),this.updateAttrs(),this.updateState=0,this.requestMeasure(),e.parent&&e.parent.appendChild(this.dom)}get state(){return this.viewState.state}get viewport(){return this.viewState.viewport}get visibleRanges(){return this.viewState.visibleRanges}get inView(){return this.viewState.inView}get composing(){return this.inputState.composing>0}get compositionStarted(){return this.inputState.composing>=0}get root(){return this._root}get win(){return this.dom.ownerDocument.defaultView||window}dispatch(...e){this._dispatch(e.length==1&&e[0]instanceof ie?e[0]:this.state.update(...e))}update(e){if(this.updateState!=0)throw new Error("Calls to EditorView.update are not allowed while an update is in progress");let t=!1,i=!1,s,r=this.state;for(let h of e){if(h.startState!=r)throw new RangeError("Trying to update state with a transaction that doesn't start from the previous state.");r=h.state}if(this.destroyed){this.viewState.state=r;return}let 
o=this.observer.delayedAndroidKey,l=null;if(o?(this.observer.clearDelayedAndroidKey(),l=this.observer.readChange(),(l&&!this.state.doc.eq(r.doc)||!this.state.selection.eq(r.selection))&&(l=null)):this.observer.clear(),r.facet(N.phrases)!=this.state.facet(N.phrases))return this.setState(r);s=Sn.create(this,r,e);let a=this.viewState.scrollTarget;try{this.updateState=2;for(let h of e){if(a&&(a=a.map(h.changes)),h.scrollIntoView){let{main:c}=h.state.selection;a=new xn(c.empty?c:w.cursor(c.head,c.head>c.anchor?-1:1))}for(let c of h.effects)c.is(bo)&&(a=c.value)}this.viewState.update(s,a),this.bidiCache=Cn.update(this.bidiCache,s.changes),s.empty||(this.updatePlugins(s),this.inputState.update(s)),t=this.docView.update(s),this.state.facet(ui)!=this.styleModules&&this.mountStyles(),i=this.updateAttrs(),this.showAnnouncements(e),this.docView.updateSelection(t,e.some(h=>h.isUserEvent("select.pointer")))}finally{this.updateState=0}if(s.startState.facet(Ui)!=s.state.facet(Ui)&&(this.viewState.mustMeasureContent=!0),(t||i||a||this.viewState.mustEnforceCursorAssoc||this.viewState.mustMeasureContent)&&this.requestMeasure(),!s.empty)for(let h of this.state.facet(Gs))h(s);l&&!mh(this,l)&&o.force&&Kt(this.contentDOM,o.key,o.keyCode)}setState(e){if(this.updateState!=0)throw new Error("Calls to EditorView.setState are not allowed while an update is in progress");if(this.destroyed){this.viewState.state=e;return}this.updateState=2;let t=this.hasFocus;try{for(let i of this.plugins)i.destroy(this);this.viewState=new Vo(e),this.plugins=e.facet(fi).map(i=>new Qn(i)),this.pluginMap.clear();for(let i of this.plugins)i.update(this);this.docView=new vo(this),this.inputState.ensureHandlers(this,this.plugins),this.mountStyles(),this.updateAttrs(),this.bidiCache=[]}finally{this.updateState=0}t&&this.focus(),this.requestMeasure()}updatePlugins(e){let t=e.startState.facet(fi),i=e.state.facet(fi);if(t!=i){let s=[];for(let r of i){let o=t.indexOf(r);if(o<0)s.push(new Qn(r));else{let 
l=this.plugins[o];l.mustUpdate=e,s.push(l)}}for(let r of this.plugins)r.mustUpdate!=e&&r.destroy(this);this.plugins=s,this.pluginMap.clear(),this.inputState.ensureHandlers(this,this.plugins)}else for(let s of this.plugins)s.mustUpdate=e;for(let s=0;s-1&&cancelAnimationFrame(this.measureScheduled),this.measureScheduled=0,e&&this.observer.forceFlush();let t=null,{scrollHeight:i,scrollTop:s,clientHeight:r}=this.scrollDOM,o=s>i-r-4?i:s;try{for(let l=0;;l++){this.updateState=1;let a=this.viewport,h=this.viewState.lineBlockAtHeight(o),c=this.viewState.measure(this);if(!c&&!this.measureRequests.length&&this.viewState.scrollTarget==null)break;if(l>5){console.warn(this.measureRequests.length?"Measure loop restarted more than 5 times":"Viewport failed to stabilize");break}let f=[];c&4||([this.measureRequests,f]=[f,this.measureRequests]);let u=f.map(y=>{try{return y.read(this)}catch(b){return Ee(this.state,b),Wo}}),d=Sn.create(this,this.state,[]),p=!1,g=!1;d.flags|=c,t?t.flags|=c:t=d,this.updateState=2,d.empty||(this.updatePlugins(d),this.inputState.update(d),this.updateAttrs(),p=this.docView.update(d));for(let y=0;y1||y<-1)&&(this.scrollDOM.scrollTop+=y,g=!0)}if(p&&this.docView.updateSelection(!0),this.viewport.from==a.from&&this.viewport.to==a.to&&!g&&this.measureRequests.length==0)break}}finally{this.updateState=0,this.measureScheduled=-1}if(t&&!t.empty)for(let l of this.state.facet(Gs))l(t)}get themeClasses(){return tr+" "+(this.state.facet(er)?dh:uh)+" "+this.state.facet(Ui)}updateAttrs(){let e=zo(this,Ua,{class:"cm-editor"+(this.hasFocus?" 
cm-focused ":" ")+this.themeClasses}),t={spellcheck:"false",autocorrect:"off",autocapitalize:"off",translate:"no",contenteditable:this.state.facet(_n)?"true":"false",class:"cm-content",style:`${A.tabSize}: ${this.state.tabSize}`,role:"textbox","aria-multiline":"true"};this.state.readOnly&&(t["aria-readonly"]="true"),zo(this,Ga,t);let i=this.observer.ignore(()=>{let s=$s(this.contentDOM,this.contentAttrs,t),r=$s(this.dom,this.editorAttrs,e);return s||r});return this.editorAttrs=e,this.contentAttrs=t,i}showAnnouncements(e){let t=!0;for(let i of e)for(let s of i.effects)if(s.is(O.announce)){t&&(this.announceDOM.textContent=""),t=!1;let r=this.announceDOM.appendChild(document.createElement("div"));r.textContent=s.value}}mountStyles(){this.styleModules=this.state.facet(ui),lt.mount(this.root,this.styleModules.concat(Uu).reverse())}readMeasured(){if(this.updateState==2)throw new Error("Reading the editor layout isn't allowed during an update");this.updateState==0&&this.measureScheduled>-1&&this.measure(!1)}requestMeasure(e){if(this.measureScheduled<0&&(this.measureScheduled=this.win.requestAnimationFrame(()=>this.measure())),e){if(e.key!=null){for(let t=0;ti.spec==e)||null),t&&t.update(this).value}get documentTop(){return this.contentDOM.getBoundingClientRect().top+this.viewState.paddingTop}get documentPadding(){return{top:this.viewState.paddingTop,bottom:this.viewState.paddingBottom}}elementAtHeight(e){return this.readMeasured(),this.viewState.elementAtHeight(e)}lineBlockAtHeight(e){return this.readMeasured(),this.viewState.lineBlockAtHeight(e)}get viewportLineBlocks(){return this.viewState.viewportLines}lineBlockAt(e){return this.viewState.lineBlockAt(e)}get contentHeight(){return this.viewState.contentHeight}moveByChar(e,t,i){return ts(this,e,Do(this,e,t,i))}moveByGroup(e,t){return ts(this,e,Do(this,e,t,i=>ku(this,e.head,i)))}moveToLineBoundary(e,t,i=!0){return wu(this,e,t,i)}moveVertically(e,t,i){return ts(this,e,vu(this,e,t,i))}domAtPos(e){return 
this.docView.domAtPos(e)}posAtDOM(e,t=0){return this.docView.posFromDOM(e,t)}posAtCoords(e,t=!0){return this.readMeasured(),nh(this,e,t)}coordsAtPos(e,t=1){this.readMeasured();let i=this.docView.coordsAt(e,t);if(!i||i.left==i.right)return i;let s=this.state.doc.lineAt(e),r=this.bidiSpans(s),o=r[$t.find(r,e-s.from,-1,t)];return Sr(i,o.dir==Y.LTR==t>0)}get defaultCharacterWidth(){return this.viewState.heightOracle.charWidth}get defaultLineHeight(){return this.viewState.heightOracle.lineHeight}get textDirection(){return this.viewState.defaultTextDirection}textDirectionAt(e){return!this.state.facet(Ka)||ethis.viewport.to?this.textDirection:(this.readMeasured(),this.docView.textDirectionAt(e))}get lineWrapping(){return this.viewState.heightOracle.lineWrapping}bidiSpans(e){if(e.length>td)return Za(e.length);let t=this.textDirectionAt(e.from);for(let s of this.bidiCache)if(s.from==e.from&&s.dir==t)return s.order;let i=ru(e.text,t);return this.bidiCache.push(new Cn(e.from,e.to,t,i)),i}get hasFocus(){var e;return(this.dom.ownerDocument.hasFocus()||A.safari&&((e=this.inputState)===null||e===void 0?void 0:e.lastContextMenu)>Date.now()-3e4)&&this.root.activeElement==this.contentDOM}focus(){this.observer.ignore(()=>{Da(this.contentDOM),this.docView.updateSelection()})}setRoot(e){this._root!=e&&(this._root=e,this.observer.setWindow((e.nodeType==9?e:e.ownerDocument).defaultView||window),this.mountStyles())}destroy(){for(let e of this.plugins)e.destroy(this);this.plugins=[],this.inputState.destroy(),this.dom.remove(),this.observer.destroy(),this.measureScheduled>-1&&cancelAnimationFrame(this.measureScheduled),this.destroyed=!0}static scrollIntoView(e,t={}){return bo.of(new xn(typeof e=="number"?w.cursor(e):e,t.y,t.x,t.yMargin,t.xMargin))}static domEventHandlers(e){return ue.define(()=>({}),{eventHandlers:e})}static theme(e,t){let i=lt.newName(),s=[Ui.of(i),ui.of(ir(`.${i}`,e))];return t&&t.dark&&s.push(er.of(!0)),s}static baseTheme(e){return 
Ri.lowest(ui.of(ir("."+tr,e,ph)))}static findFromDOM(e){var t;let i=e.querySelector(".cm-content"),s=i&&q.get(i)||q.get(e);return((t=s?.rootView)===null||t===void 0?void 0:t.view)||null}}O.styleModule=ui;O.inputHandler=ja;O.perLineTextDirection=Ka;O.exceptionSink=qa;O.updateListener=Gs;O.editable=_n;O.mouseSelectionStyle=za;O.dragMovesSelection=Wa;O.clickAddsSelectionRange=Ha;O.decorations=Di;O.atomicRanges=Ja;O.scrollMargins=Ya;O.darkTheme=er;O.contentAttributes=Ga;O.editorAttributes=Ua;O.lineWrapping=O.contentAttributes.of({class:"cm-lineWrapping"});O.announce=R.define();const td=4096,Wo={};class Cn{constructor(e,t,i,s){this.from=e,this.to=t,this.dir=i,this.order=s}static update(e,t){if(t.empty)return e;let i=[],s=e.length?e[e.length-1].dir:Y.LTR;for(let r=Math.max(0,e.length-10);r=0;s--){let r=i[s],o=typeof r=="function"?r(n):r;o&&Ks(o,t)}return t}const id=A.mac?"mac":A.windows?"win":A.linux?"linux":"key";function nd(n,e){const t=n.split(/-(?!$)/);let i=t[t.length-1];i=="Space"&&(i=" ");let s,r,o,l;for(let a=0;ai.concat(s),[]))),t}let it=null;const od=4e3;function ld(n,e=id){let t=Object.create(null),i=Object.create(null),s=(o,l)=>{let a=i[o];if(a==null)i[o]=l;else if(a!=l)throw new Error("Key binding "+o+" is used both as a regular binding and as a multi-stroke prefix")},r=(o,l,a,h)=>{var c,f;let u=t[o]||(t[o]=Object.create(null)),d=l.split(/ (?!$)/).map(y=>nd(y,e));for(let y=1;y{let S=it={view:v,prefix:b,scope:o};return setTimeout(()=>{it==S&&(it=null)},od),!0}]})}let p=d.join(" ");s(p,!1);let g=u[p]||(u[p]={preventDefault:!1,run:((f=(c=u._any)===null||c===void 0?void 0:c.run)===null||f===void 0?void 0:f.slice())||[]});a&&g.run.push(a),h&&(g.preventDefault=!0)};for(let o of n){let l=o.scope?o.scope.split(" "):["editor"];if(o.any)for(let h of l){let c=t[h]||(t[h]=Object.create(null));c._any||(c._any={preventDefault:!1,run:[]});for(let f in c)c[f].run.push(o.any)}let a=o[e]||o.key;if(a)for(let h of 
l)r(h,a,o.run,o.preventDefault),o.shift&&r(h,"Shift-"+a,o.shift,o.preventDefault)}return t}function ad(n,e,t,i){let s=zf(e),r=ce(s,0),o=Ce(r)==s.length&&s!=" ",l="",a=!1;it&&it.view==t&&it.scope==i&&(l=it.prefix+" ",(a=rh.indexOf(e.keyCode)<0)&&(it=null));let h=new Set,c=p=>{if(p){for(let g of p.run)if(!h.has(g)&&(h.add(g),g(t,e)))return!0;p.preventDefault&&(a=!0)}return!1},f=n[i],u,d;if(f){if(c(f[l+Gi(s,e,!o)]))return!0;if(o&&(e.shiftKey||e.altKey||e.metaKey||r>127)&&(u=at[e.keyCode])&&u!=s){if(c(f[l+Gi(u,e,!0)]))return!0;if(e.shiftKey&&(d=Ci[e.keyCode])!=s&&d!=u&&c(f[l+Gi(d,e,!1)]))return!0}else if(o&&e.shiftKey&&c(f[l+Gi(s,e,!0)]))return!0;if(c(f._any))return!0}return a}const gh=!A.ios,pi=D.define({combine(n){return Rt(n,{cursorBlinkRate:1200,drawRangeCursor:!0},{cursorBlinkRate:(e,t)=>Math.min(e,t),drawRangeCursor:(e,t)=>e||t})}});function hd(n={}){return[pi.of(n),cd,fd,$a.of(!0)]}class yh{constructor(e,t,i,s,r){this.left=e,this.top=t,this.width=i,this.height=s,this.className=r}draw(){let e=document.createElement("div");return e.className=this.className,this.adjust(e),e}adjust(e){e.style.left=this.left+"px",e.style.top=this.top+"px",this.width>=0&&(e.style.width=this.width+"px"),e.style.height=this.height+"px"}eq(e){return this.left==e.left&&this.top==e.top&&this.width==e.width&&this.height==e.height&&this.className==e.className}}const 
cd=ue.fromClass(class{constructor(n){this.view=n,this.rangePieces=[],this.cursors=[],this.measureReq={read:this.readPos.bind(this),write:this.drawSel.bind(this)},this.selectionLayer=n.scrollDOM.appendChild(document.createElement("div")),this.selectionLayer.className="cm-selectionLayer",this.selectionLayer.setAttribute("aria-hidden","true"),this.cursorLayer=n.scrollDOM.appendChild(document.createElement("div")),this.cursorLayer.className="cm-cursorLayer",this.cursorLayer.setAttribute("aria-hidden","true"),n.requestMeasure(this.measureReq),this.setBlinkRate()}setBlinkRate(){this.cursorLayer.style.animationDuration=this.view.state.facet(pi).cursorBlinkRate+"ms"}update(n){let e=n.startState.facet(pi)!=n.state.facet(pi);(e||n.selectionSet||n.geometryChanged||n.viewportChanged)&&this.view.requestMeasure(this.measureReq),n.transactions.some(t=>t.scrollIntoView)&&(this.cursorLayer.style.animationName=this.cursorLayer.style.animationName=="cm-blink"?"cm-blink2":"cm-blink"),e&&this.setBlinkRate()}readPos(){let{state:n}=this.view,e=n.facet(pi),t=n.selection.ranges.map(s=>s.empty?[]:ud(this.view,s)).reduce((s,r)=>s.concat(r)),i=[];for(let s of n.selection.ranges){let r=s==n.selection.main;if(s.empty?!r||gh:e.drawRangeCursor){let o=dd(this.view,s,r);o&&i.push(o)}}return{rangePieces:t,cursors:i}}drawSel({rangePieces:n,cursors:e}){if(n.length!=this.rangePieces.length||n.some((t,i)=>!t.eq(this.rangePieces[i]))){this.selectionLayer.textContent="";for(let t of n)this.selectionLayer.appendChild(t.draw());this.rangePieces=n}if(e.length!=this.cursors.length||e.some((t,i)=>!t.eq(this.cursors[i]))){let t=this.cursorLayer.children;if(t.length!==e.length){this.cursorLayer.textContent="";for(const i of e)this.cursorLayer.appendChild(i.draw())}else e.forEach((i,s)=>i.adjust(t[s]));this.cursors=e}}destroy(){this.selectionLayer.remove(),this.cursorLayer.remove()}}),bh={".cm-line":{"& ::selection":{backgroundColor:"transparent !important"},"&::selection":{backgroundColor:"transparent 
!important"}}};gh&&(bh[".cm-line"].caretColor="transparent !important");const fd=Ri.highest(O.theme(bh));function wh(n){let e=n.scrollDOM.getBoundingClientRect();return{left:(n.textDirection==Y.LTR?e.left:e.right-n.scrollDOM.clientWidth)-n.scrollDOM.scrollLeft,top:e.top-n.scrollDOM.scrollTop}}function jo(n,e,t){let i=w.cursor(e);return{from:Math.max(t.from,n.moveToLineBoundary(i,!1,!0).from),to:Math.min(t.to,n.moveToLineBoundary(i,!0,!0).from),type:H.Text}}function Ko(n,e){let t=n.lineBlockAt(e);if(Array.isArray(t.type)){for(let i of t.type)if(i.to>e||i.to==e&&(i.to==t.to||i.type==H.Text))return i}return t}function ud(n,e){if(e.to<=n.viewport.from||e.from>=n.viewport.to)return[];let t=Math.max(e.from,n.viewport.from),i=Math.min(e.to,n.viewport.to),s=n.textDirection==Y.LTR,r=n.contentDOM,o=r.getBoundingClientRect(),l=wh(n),a=window.getComputedStyle(r.firstChild),h=o.left+parseInt(a.paddingLeft)+Math.min(0,parseInt(a.textIndent)),c=o.right-parseInt(a.paddingRight),f=Ko(n,t),u=Ko(n,i),d=f.type==H.Text?f:null,p=u.type==H.Text?u:null;if(n.lineWrapping&&(d&&(d=jo(n,t,d)),p&&(p=jo(n,i,p))),d&&p&&d.from==p.from)return y(b(e.from,e.to,d));{let S=d?b(e.from,null,d):v(f,!1),k=p?b(null,e.to,p):v(u,!0),C=[];return(d||f).to<(p||u).from-1?C.push(g(h,S.bottom,c,k.top)):S.bottomP&&K.from=M)break;U>X&&I(Math.max(se,X),S==null&&se<=P,Math.min(U,M),k==null&&U>=V,G.dir)}if(X=$.to+1,X>=M)break}return j.length==0&&I(P,S==null,V,k==null,n.textDirection),{top:T,bottom:B,horizontal:j}}function v(S,k){let C=o.top+(k?S.top:S.bottom);return{top:C,bottom:C,horizontal:[]}}}function dd(n,e,t){let i=n.coordsAtPos(e.head,e.assoc||1);if(!i)return null;let s=wh(n);return new yh(i.left-s.left,i.top-s.top,-1,i.bottom-i.top,t?"cm-cursor cm-cursor-primary":"cm-cursor cm-cursor-secondary")}function $o(n,e,t,i,s){e.lastIndex=0;for(let r=n.iterRange(t,i),o=t,l;!r.next().done;o+=r.value.length)if(!r.lineBreak)for(;l=e.exec(r.value);)s(o+l.index,l)}function pd(n,e){let 
t=n.visibleRanges;if(t.length==1&&t[0].from==n.viewport.from&&t[0].to==n.viewport.to)return t;let i=[];for(let{from:s,to:r}of t)s=Math.max(n.state.doc.lineAt(s).from,s-e),r=Math.min(n.state.doc.lineAt(r).to,r+e),i.length&&i[i.length-1].to>=s?i[i.length-1].to=r:i.push({from:s,to:r});return i}class md{constructor(e){const{regexp:t,decoration:i,decorate:s,boundary:r,maxLength:o=1e3}=e;if(!t.global)throw new RangeError("The regular expression given to MatchDecorator should have its 'g' flag set");if(this.regexp=t,s)this.addMatch=(l,a,h,c)=>s(c,h,h+l[0].length,l,a);else if(typeof i=="function")this.addMatch=(l,a,h,c)=>{let f=i(l,a,h);f&&c(h,h+l[0].length,f)};else if(i)this.addMatch=(l,a,h,c)=>c(h,h+l[0].length,i);else throw new RangeError("Either 'decorate' or 'decoration' should be provided to MatchDecorator");this.boundary=r,this.maxLength=o}createDeco(e){let t=new Ct,i=t.add.bind(t);for(let{from:s,to:r}of pd(e,this.maxLength))$o(e.state.doc,this.regexp,s,r,(o,l)=>this.addMatch(l,e,o,i));return t.finish()}updateDeco(e,t){let i=1e9,s=-1;return e.docChanged&&e.changes.iterChanges((r,o,l,a)=>{a>e.view.viewport.from&&l1e3?this.createDeco(e.view):s>-1?this.updateRange(e.view,t.map(e.changes),i,s):t}updateRange(e,t,i,s){for(let r of e.visibleRanges){let o=Math.max(r.from,i),l=Math.min(r.to,s);if(l>o){let a=e.state.doc.lineAt(o),h=a.toa.from;o--)if(this.boundary.test(a.text[o-1-a.from])){c=o;break}for(;lu.push(b.range(g,y));if(a==h)for(this.regexp.lastIndex=c-a.from;(d=this.regexp.exec(a.text))&&d.indexthis.addMatch(y,e,g,p));t=t.update({filterFrom:c,filterTo:f,filter:(g,y)=>gf,add:u})}}return t}}const nr=/x/.unicode!=null?"gu":"g",gd=new RegExp(`[\0-\b ---Ÿ­؜​‎‏\u2028\u2029‭‮⁦⁧⁩\uFEFF-]`,nr),yd={0:"null",7:"bell",8:"backspace",10:"newline",11:"vertical tab",13:"carriage return",27:"escape",8203:"zero width space",8204:"zero width non-joiner",8205:"zero width joiner",8206:"left-to-right mark",8207:"right-to-left mark",8232:"line separator",8237:"left-to-right 
override",8238:"right-to-left override",8294:"left-to-right isolate",8295:"right-to-left isolate",8297:"pop directional isolate",8233:"paragraph separator",65279:"zero width no-break space",65532:"object replacement"};let ss=null;function bd(){var n;if(ss==null&&typeof document<"u"&&document.body){let e=document.body.style;ss=((n=e.tabSize)!==null&&n!==void 0?n:e.MozTabSize)!=null}return ss||!1}const cn=D.define({combine(n){let e=Rt(n,{render:null,specialChars:gd,addSpecialChars:null});return(e.replaceTabs=!bd())&&(e.specialChars=new RegExp(" |"+e.specialChars.source,nr)),e.addSpecialChars&&(e.specialChars=new RegExp(e.specialChars.source+"|"+e.addSpecialChars.source,nr)),e}});function wd(n={}){return[cn.of(n),kd()]}let Uo=null;function kd(){return Uo||(Uo=ue.fromClass(class{constructor(n){this.view=n,this.decorations=E.none,this.decorationCache=Object.create(null),this.decorator=this.makeDecorator(n.state.facet(cn)),this.decorations=this.decorator.createDeco(n)}makeDecorator(n){return new md({regexp:n.specialChars,decoration:(e,t,i)=>{let{doc:s}=t.state,r=ce(e[0],0);if(r==9){let o=s.lineAt(i),l=t.state.tabSize,a=Li(o.text,l,i-o.from);return E.replace({widget:new Cd((l-a%l)*this.view.defaultCharacterWidth)})}return this.decorationCache[r]||(this.decorationCache[r]=E.replace({widget:new Sd(n,r)}))},boundary:n.replaceTabs?void 0:/[^]/})}update(n){let e=n.state.facet(cn);n.startState.facet(cn)!=e?(this.decorator=this.makeDecorator(e),this.decorations=this.decorator.createDeco(n.view)):this.decorations=this.decorator.updateDeco(n,this.decorations)}},{decorations:n=>n.decorations}))}const vd="•";function xd(n){return n>=32?vd:n==10?"␤":String.fromCharCode(9216+n)}class Sd extends Ue{constructor(e,t){super(),this.options=e,this.code=t}eq(e){return e.code==this.code}toDOM(e){let t=xd(this.code),i=e.state.phrase("Control character")+" "+(yd[this.code]||"0x"+this.code.toString(16)),s=this.options.render&&this.options.render(this.code,i,t);if(s)return s;let 
r=document.createElement("span");return r.textContent=t,r.title=i,r.setAttribute("aria-label",i),r.className="cm-specialChar",r}ignoreEvent(){return!1}}class Cd extends Ue{constructor(e){super(),this.width=e}eq(e){return e.width==this.width}toDOM(){let e=document.createElement("span");return e.textContent=" ",e.className="cm-tab",e.style.width=this.width+"px",e}ignoreEvent(){return!1}}class Ad extends Ue{constructor(e){super(),this.content=e}toDOM(){let e=document.createElement("span");return e.className="cm-placeholder",e.style.pointerEvents="none",e.appendChild(typeof this.content=="string"?document.createTextNode(this.content):this.content),typeof this.content=="string"?e.setAttribute("aria-label","placeholder "+this.content):e.setAttribute("aria-hidden","true"),e}ignoreEvent(){return!1}}function Md(n){return ue.fromClass(class{constructor(e){this.view=e,this.placeholder=E.set([E.widget({widget:new Ad(n),side:1}).range(0)])}get decorations(){return this.view.state.doc.length?E.none:this.placeholder}},{decorations:e=>e.decorations})}const sr=2e3;function Dd(n,e,t){let i=Math.min(e.line,t.line),s=Math.max(e.line,t.line),r=[];if(e.off>sr||t.off>sr||e.col<0||t.col<0){let o=Math.min(e.off,t.off),l=Math.max(e.off,t.off);for(let a=i;a<=s;a++){let h=n.doc.line(a);h.length<=l&&r.push(w.range(h.from+o,h.to+l))}}else{let o=Math.min(e.col,t.col),l=Math.max(e.col,t.col);for(let a=i;a<=s;a++){let h=n.doc.line(a),c=_s(h.text,o,n.tabSize,!0);if(c<0)r.push(w.cursor(h.to));else{let f=_s(h.text,l,n.tabSize);r.push(w.range(h.from+c,h.from+f))}}}return r}function Td(n,e){let t=n.coordsAtPos(n.viewport.from);return t?Math.round(Math.abs((t.left-e)/n.defaultCharacterWidth)):-1}function Go(n,e){let t=n.posAtCoords({x:e.clientX,y:e.clientY},!1),i=n.state.doc.lineAt(t),s=t-i.from,r=s>sr?-1:s==i.length?Td(n,e.clientX):Li(i.text,n.state.tabSize,t-i.from);return{line:i.number,col:r,off:s}}function Od(n,e){let t=Go(n,e),i=n.state.selection;return t?{update(s){if(s.docChanged){let 
r=s.changes.mapPos(s.startState.doc.line(t.line).from),o=s.state.doc.lineAt(r);t={line:o.number,col:t.col,off:Math.min(t.off,o.length)},i=i.map(s.changes)}},get(s,r,o){let l=Go(n,s);if(!l)return i;let a=Dd(n.state,t,l);return a.length?o?w.create(a.concat(i.ranges)):w.create(a):i}}:null}function Bd(n){let e=n?.eventFilter||(t=>t.altKey&&t.button==0);return O.mouseSelectionStyle.of((t,i)=>e(i)?Od(t,i):null)}const Pd={Alt:[18,n=>n.altKey],Control:[17,n=>n.ctrlKey],Shift:[16,n=>n.shiftKey],Meta:[91,n=>n.metaKey]},Ed={style:"cursor: crosshair"};function Rd(n={}){let[e,t]=Pd[n.key||"Alt"],i=ue.fromClass(class{constructor(s){this.view=s,this.isDown=!1}set(s){this.isDown!=s&&(this.isDown=s,this.view.update([]))}},{eventHandlers:{keydown(s){this.set(s.keyCode==e||t(s))},keyup(s){(s.keyCode==e||!t(s))&&this.set(!1)},mousemove(s){this.set(t(s))}}});return[i,O.contentAttributes.of(s=>{var r;return!((r=s.plugin(i))===null||r===void 0)&&r.isDown?Ed:null})]}const rs="-10000px";class kh{constructor(e,t,i){this.facet=t,this.createTooltipView=i,this.input=e.state.facet(t),this.tooltips=this.input.filter(s=>s),this.tooltipViews=this.tooltips.map(i)}update(e){var t;let i=e.state.facet(this.facet),s=i.filter(o=>o);if(i===this.input){for(let o of this.tooltipViews)o.update&&o.update(e);return!1}let r=[];for(let o=0;o{var e,t,i;return{position:A.ios?"absolute":((e=n.find(s=>s.position))===null||e===void 0?void 0:e.position)||"fixed",parent:((t=n.find(s=>s.parent))===null||t===void 0?void 0:t.parent)||null,tooltipSpace:((i=n.find(s=>s.tooltipSpace))===null||i===void 0?void 0:i.tooltipSpace)||Ld}}}),vh=ue.fromClass(class{constructor(n){this.view=n,this.inView=!0,this.lastTransaction=0,this.measureTimeout=-1;let e=n.state.facet(os);this.position=e.position,this.parent=e.parent,this.classes=n.themeClasses,this.createContainer(),this.measureReq={read:this.readMeasure.bind(this),write:this.writeMeasure.bind(this),key:this},this.manager=new 
kh(n,Tr,t=>this.createTooltip(t)),this.intersectionObserver=typeof IntersectionObserver=="function"?new IntersectionObserver(t=>{Date.now()>this.lastTransaction-50&&t.length>0&&t[t.length-1].intersectionRatio<1&&this.measureSoon()},{threshold:[1]}):null,this.observeIntersection(),n.win.addEventListener("resize",this.measureSoon=this.measureSoon.bind(this)),this.maybeMeasure()}createContainer(){this.parent?(this.container=document.createElement("div"),this.container.style.position="relative",this.container.className=this.view.themeClasses,this.parent.appendChild(this.container)):this.container=this.view.dom}observeIntersection(){if(this.intersectionObserver){this.intersectionObserver.disconnect();for(let n of this.manager.tooltipViews)this.intersectionObserver.observe(n.dom)}}measureSoon(){this.measureTimeout<0&&(this.measureTimeout=setTimeout(()=>{this.measureTimeout=-1,this.maybeMeasure()},50))}update(n){n.transactions.length&&(this.lastTransaction=Date.now());let e=this.manager.update(n);e&&this.observeIntersection();let t=e||n.geometryChanged,i=n.state.facet(os);if(i.position!=this.position){this.position=i.position;for(let s of this.manager.tooltipViews)s.dom.style.position=this.position;t=!0}if(i.parent!=this.parent){this.parent&&this.container.remove(),this.parent=i.parent,this.createContainer();for(let s of this.manager.tooltipViews)this.container.appendChild(s.dom);t=!0}else this.parent&&this.view.themeClasses!=this.classes&&(this.classes=this.container.className=this.view.themeClasses);t&&this.maybeMeasure()}createTooltip(n){let e=n.create(this.view);if(e.dom.classList.add("cm-tooltip"),n.arrow&&!e.dom.querySelector(".cm-tooltip > .cm-tooltip-arrow")){let t=document.createElement("div");t.className="cm-tooltip-arrow",e.dom.appendChild(t)}return e.dom.style.position=this.position,e.dom.style.top=rs,this.container.appendChild(e.dom),e.mount&&e.mount(this.view),e}destroy(){var n,e;this.view.win.removeEventListener("resize",this.measureSoon);for(let t of 
this.manager.tooltipViews)t.dom.remove(),(n=t.destroy)===null||n===void 0||n.call(t);(e=this.intersectionObserver)===null||e===void 0||e.disconnect(),clearTimeout(this.measureTimeout)}readMeasure(){let n=this.view.dom.getBoundingClientRect();return{editor:n,parent:this.parent?this.container.getBoundingClientRect():n,pos:this.manager.tooltips.map((e,t)=>{let i=this.manager.tooltipViews[t];return i.getCoords?i.getCoords(e.pos):this.view.coordsAtPos(e.pos)}),size:this.manager.tooltipViews.map(({dom:e})=>e.getBoundingClientRect()),space:this.view.state.facet(os).tooltipSpace(this.view)}}writeMeasure(n){let{editor:e,space:t}=n,i=[];for(let s=0;s=Math.min(e.bottom,t.bottom)||a.rightMath.min(e.right,t.right)+.1){l.style.top=rs;continue}let c=r.arrow?o.dom.querySelector(".cm-tooltip-arrow"):null,f=c?7:0,u=h.right-h.left,d=h.bottom-h.top,p=o.offset||Nd,g=this.view.textDirection==Y.LTR,y=h.width>t.right-t.left?g?t.left:t.right-h.width:g?Math.min(a.left-(c?14:0)+p.x,t.right-u):Math.max(t.left,a.left-u+(c?14:0)-p.x),b=!!r.above;!r.strictSide&&(b?a.top-(h.bottom-h.top)-p.yt.bottom)&&b==t.bottom-a.bottom>a.top-t.top&&(b=!b);let v=b?a.top-d-f-p.y:a.bottom+f+p.y,S=y+u;if(o.overlap!==!0)for(let k of i)k.lefty&&k.topv&&(v=b?k.top-d-2-f:k.bottom+f+2);this.position=="absolute"?(l.style.top=v-n.parent.top+"px",l.style.left=y-n.parent.left+"px"):(l.style.top=v+"px",l.style.left=y+"px"),c&&(c.style.left=`${a.left+(g?p.x:-p.x)-(y+14-7)}px`),o.overlap!==!0&&i.push({left:y,top:v,right:S,bottom:v+d}),l.classList.toggle("cm-tooltip-above",b),l.classList.toggle("cm-tooltip-below",!b),o.positioned&&o.positioned()}}maybeMeasure(){if(this.manager.tooltips.length&&(this.view.inView&&this.view.requestMeasure(this.measureReq),this.inView!=this.view.inView&&(this.inView=this.view.inView,!this.inView)))for(let n of this.manager.tooltipViews)n.dom.style.top=rs}},{eventHandlers:{scroll(){this.maybeMeasure()}}}),Id=O.baseTheme({".cm-tooltip":{zIndex:100},"&light .cm-tooltip":{border:"1px solid 
#bbb",backgroundColor:"#f5f5f5"},"&light .cm-tooltip-section:not(:first-child)":{borderTop:"1px solid #bbb"},"&dark .cm-tooltip":{backgroundColor:"#333338",color:"white"},".cm-tooltip-arrow":{height:"7px",width:`${7*2}px`,position:"absolute",zIndex:-1,overflow:"hidden","&:before, &:after":{content:"''",position:"absolute",width:0,height:0,borderLeft:"7px solid transparent",borderRight:"7px solid transparent"},".cm-tooltip-above &":{bottom:"-7px","&:before":{borderTop:"7px solid #bbb"},"&:after":{borderTop:"7px solid #f5f5f5",bottom:"1px"}},".cm-tooltip-below &":{top:"-7px","&:before":{borderBottom:"7px solid #bbb"},"&:after":{borderBottom:"7px solid #f5f5f5",top:"1px"}}},"&dark .cm-tooltip .cm-tooltip-arrow":{"&:before":{borderTopColor:"#333338",borderBottomColor:"#333338"},"&:after":{borderTopColor:"transparent",borderBottomColor:"transparent"}}}),Nd={x:0,y:0},Tr=D.define({enables:[vh,Id]}),An=D.define();class Or{constructor(e){this.view=e,this.mounted=!1,this.dom=document.createElement("div"),this.dom.classList.add("cm-tooltip-hover"),this.manager=new kh(e,An,t=>this.createHostedView(t))}static create(e){return new Or(e)}createHostedView(e){let t=e.create(this.view);return t.dom.classList.add("cm-tooltip-section"),this.dom.appendChild(t.dom),this.mounted&&t.mount&&t.mount(this.view),t}mount(e){for(let t of this.manager.tooltipViews)t.mount&&t.mount(e);this.mounted=!0}positioned(){for(let e of this.manager.tooltipViews)e.positioned&&e.positioned()}update(e){this.manager.update(e)}}const _d=Tr.compute([An],n=>{let e=n.facet(An).filter(t=>t);return e.length===0?null:{pos:Math.min(...e.map(t=>t.pos)),end:Math.max(...e.filter(t=>t.end!=null).map(t=>t.end)),create:Or.create,above:e[0].above,arrow:e.some(t=>t.arrow)}});class 
Vd{constructor(e,t,i,s,r){this.view=e,this.source=t,this.field=i,this.setHover=s,this.hoverTime=r,this.hoverTimeout=-1,this.restartTimeout=-1,this.pending=null,this.lastMove={x:0,y:0,target:e.dom,time:0},this.checkHover=this.checkHover.bind(this),e.dom.addEventListener("mouseleave",this.mouseleave=this.mouseleave.bind(this)),e.dom.addEventListener("mousemove",this.mousemove=this.mousemove.bind(this))}update(){this.pending&&(this.pending=null,clearTimeout(this.restartTimeout),this.restartTimeout=setTimeout(()=>this.startHover(),20))}get active(){return this.view.state.field(this.field)}checkHover(){if(this.hoverTimeout=-1,this.active)return;let e=Date.now()-this.lastMove.time;ei.bottom||e.xi.right+this.view.defaultCharacterWidth)return;let s=this.view.bidiSpans(this.view.state.doc.lineAt(t)).find(l=>l.from<=t&&l.to>=t),r=s&&s.dir==Y.RTL?-1:1,o=this.source(this.view,t,e.x{this.pending==l&&(this.pending=null,a&&this.view.dispatch({effects:this.setHover.of(a)}))},a=>Ee(this.view.state,a,"hover tooltip"))}else o&&this.view.dispatch({effects:this.setHover.of(o)})}mousemove(e){var t;this.lastMove={x:e.clientX,y:e.clientY,target:e.target,time:Date.now()},this.hoverTimeout<0&&(this.hoverTimeout=setTimeout(this.checkHover,this.hoverTime));let i=this.active;if(i&&!Fd(this.lastMove.target)||this.pending){let{pos:s}=i||this.pending,r=(t=i?.end)!==null&&t!==void 0?t:s;(s==r?this.view.posAtCoords(this.lastMove)!=s:!Hd(this.view,s,r,e.clientX,e.clientY,6))&&(this.view.dispatch({effects:this.setHover.of(null)}),this.pending=null)}}mouseleave(){clearTimeout(this.hoverTimeout),this.hoverTimeout=-1,this.active&&this.view.dispatch({effects:this.setHover.of(null)})}destroy(){clearTimeout(this.hoverTimeout),this.view.dom.removeEventListener("mouseleave",this.mouseleave),this.view.dom.removeEventListener("mousemove",this.mousemove)}}function Fd(n){for(let e=n;e;e=e.parentNode)if(e.nodeType==1&&e.classList.contains("cm-tooltip"))return!0;return!1}function Hd(n,e,t,i,s,r){let 
o=document.createRange(),l=n.domAtPos(e),a=n.domAtPos(t);o.setEnd(a.node,a.offset),o.setStart(l.node,l.offset);let h=o.getClientRects();o.detach();for(let c=0;cAn.from(s)});return[i,ue.define(s=>new Vd(s,n,i,t,e.hoverTime||300)),_d]}function zd(n,e){let t=n.plugin(vh);if(!t)return null;let i=t.manager.tooltips.indexOf(e);return i<0?null:t.manager.tooltipViews[i]}const qd=R.define(),Jo=D.define({combine(n){let e,t;for(let i of n)e=e||i.topContainer,t=t||i.bottomContainer;return{topContainer:e,bottomContainer:t}}});function jd(n,e){let t=n.plugin(xh),i=t?t.specs.indexOf(e):-1;return i>-1?t.panels[i]:null}const xh=ue.fromClass(class{constructor(n){this.input=n.state.facet(rr),this.specs=this.input.filter(t=>t),this.panels=this.specs.map(t=>t(n));let e=n.state.facet(Jo);this.top=new Ji(n,!0,e.topContainer),this.bottom=new Ji(n,!1,e.bottomContainer),this.top.sync(this.panels.filter(t=>t.top)),this.bottom.sync(this.panels.filter(t=>!t.top));for(let t of this.panels)t.dom.classList.add("cm-panel"),t.mount&&t.mount()}update(n){let e=n.state.facet(Jo);this.top.container!=e.topContainer&&(this.top.sync([]),this.top=new Ji(n.view,!0,e.topContainer)),this.bottom.container!=e.bottomContainer&&(this.bottom.sync([]),this.bottom=new Ji(n.view,!1,e.bottomContainer)),this.top.syncClasses(),this.bottom.syncClasses();let t=n.state.facet(rr);if(t!=this.input){let i=t.filter(a=>a),s=[],r=[],o=[],l=[];for(let a of i){let h=this.specs.indexOf(a),c;h<0?(c=a(n.view),l.push(c)):(c=this.panels[h],c.update&&c.update(n)),s.push(c),(c.top?r:o).push(c)}this.specs=i,this.panels=s,this.top.sync(r),this.bottom.sync(o);for(let a of l)a.dom.classList.add("cm-panel"),a.mount&&a.mount()}else for(let i of this.panels)i.update&&i.update(n)}destroy(){this.top.sync([]),this.bottom.sync([])}},{provide:n=>O.scrollMargins.of(e=>{let t=e.plugin(n);return t&&{top:t.top.scrollMargin(),bottom:t.bottom.scrollMargin()}})});class Ji{constructor(e,t,i){this.view=e,this.top=t,this.container=i,this.dom=void 
0,this.classes="",this.panels=[],this.syncClasses()}sync(e){for(let t of this.panels)t.destroy&&e.indexOf(t)<0&&t.destroy();this.panels=e,this.syncDOM()}syncDOM(){if(this.panels.length==0){this.dom&&(this.dom.remove(),this.dom=void 0);return}if(!this.dom){this.dom=document.createElement("div"),this.dom.className=this.top?"cm-panels cm-panels-top":"cm-panels cm-panels-bottom",this.dom.style[this.top?"top":"bottom"]="0";let t=this.container||this.view.dom;t.insertBefore(this.dom,this.top?t.firstChild:null)}let e=this.dom.firstChild;for(let t of this.panels)if(t.dom.parentNode==this.dom){for(;e!=t.dom;)e=Yo(e);e=e.nextSibling}else this.dom.insertBefore(t.dom,e);for(;e;)e=Yo(e)}scrollMargin(){return!this.dom||this.container?0:Math.max(0,this.top?this.dom.getBoundingClientRect().bottom-Math.max(0,this.view.scrollDOM.getBoundingClientRect().top):Math.min(innerHeight,this.view.scrollDOM.getBoundingClientRect().bottom)-this.dom.getBoundingClientRect().top)}syncClasses(){if(!(!this.container||this.classes==this.view.themeClasses)){for(let e of this.classes.split(" "))e&&this.container.classList.remove(e);for(let e of(this.classes=this.view.themeClasses).split(" "))e&&this.container.classList.add(e)}}}function Yo(n){let e=n.nextSibling;return n.remove(),e}const rr=D.define({enables:xh});class ct extends St{compare(e){return this==e||this.constructor==e.constructor&&this.eq(e)}eq(e){return!1}destroy(e){}}ct.prototype.elementClass="";ct.prototype.toDOM=void 0;ct.prototype.mapMode=le.TrackBefore;ct.prototype.startSide=ct.prototype.endSide=-1;ct.prototype.point=!0;const ls=D.define(),Kd={class:"",renderEmptyElements:!1,elementStyle:"",markers:()=>F.empty,lineMarker:()=>null,lineMarkerChange:null,initialSpacer:null,updateSpacer:null,domEventHandlers:{}},wi=D.define();function $d(n){return[Sh(),wi.of(Object.assign(Object.assign({},Kd),n))]}const or=D.define({combine:n=>n.some(e=>e)});function Sh(n){let e=[Ud];return n&&n.fixed===!1&&e.push(or.of(!0)),e}const 
Ud=ue.fromClass(class{constructor(n){this.view=n,this.prevViewport=n.viewport,this.dom=document.createElement("div"),this.dom.className="cm-gutters",this.dom.setAttribute("aria-hidden","true"),this.dom.style.minHeight=this.view.contentHeight+"px",this.gutters=n.state.facet(wi).map(e=>new Zo(n,e));for(let e of this.gutters)this.dom.appendChild(e.dom);this.fixed=!n.state.facet(or),this.fixed&&(this.dom.style.position="sticky"),this.syncGutters(!1),n.scrollDOM.insertBefore(this.dom,n.contentDOM)}update(n){if(this.updateGutters(n)){let e=this.prevViewport,t=n.view.viewport,i=Math.min(e.to,t.to)-Math.max(e.from,t.from);this.syncGutters(i<(t.to-t.from)*.8)}n.geometryChanged&&(this.dom.style.minHeight=this.view.contentHeight+"px"),this.view.state.facet(or)!=!this.fixed&&(this.fixed=!this.fixed,this.dom.style.position=this.fixed?"sticky":""),this.prevViewport=n.view.viewport}syncGutters(n){let e=this.dom.nextSibling;n&&this.dom.remove();let t=F.iter(this.view.state.facet(ls),this.view.viewport.from),i=[],s=this.gutters.map(r=>new Gd(r,this.view.viewport,-this.view.documentPadding.top));for(let r of this.view.viewportLineBlocks){let o;if(Array.isArray(r.type)){for(let l of r.type)if(l.type==H.Text){o=l;break}}else o=r.type==H.Text?r:void 0;if(o){i.length&&(i=[]),Ch(t,i,r.from);for(let l of s)l.line(this.view,o,i)}}for(let r of s)r.finish();n&&this.view.scrollDOM.insertBefore(this.dom,e)}updateGutters(n){let e=n.startState.facet(wi),t=n.state.facet(wi),i=n.docChanged||n.heightChanged||n.viewportChanged||!F.eq(n.startState.facet(ls),n.state.facet(ls),n.view.viewport.from,n.view.viewport.to);if(e==t)for(let s of this.gutters)s.update(n)&&(i=!0);else{i=!0;let s=[];for(let r of t){let o=e.indexOf(r);o<0?s.push(new Zo(this.view,r)):(this.gutters[o].update(n),s.push(this.gutters[o]))}for(let r of this.gutters)r.dom.remove(),s.indexOf(r)<0&&r.destroy();for(let r of s)this.dom.appendChild(r.dom);this.gutters=s}return i}destroy(){for(let n of 
this.gutters)n.destroy();this.dom.remove()}},{provide:n=>O.scrollMargins.of(e=>{let t=e.plugin(n);return!t||t.gutters.length==0||!t.fixed?null:e.textDirection==Y.LTR?{left:t.dom.offsetWidth}:{right:t.dom.offsetWidth}})});function Xo(n){return Array.isArray(n)?n:[n]}function Ch(n,e,t){for(;n.value&&n.from<=t;)n.from==t&&e.push(n.value),n.next()}class Gd{constructor(e,t,i){this.gutter=e,this.height=i,this.localMarkers=[],this.i=0,this.cursor=F.iter(e.markers,t.from)}line(e,t,i){this.localMarkers.length&&(this.localMarkers=[]),Ch(this.cursor,this.localMarkers,t.from);let s=i.length?this.localMarkers.concat(i):this.localMarkers,r=this.gutter.config.lineMarker(e,t,s);r&&s.unshift(r);let o=this.gutter;if(s.length==0&&!o.config.renderEmptyElements)return;let l=t.top-this.height;if(this.i==o.elements.length){let a=new Ah(e,t.height,l,s);o.elements.push(a),o.dom.appendChild(a.dom)}else o.elements[this.i].update(e,t.height,l,s);this.height=t.bottom,this.i++}finish(){let e=this.gutter;for(;e.elements.length>this.i;){let t=e.elements.pop();e.dom.removeChild(t.dom),t.destroy()}}}class Zo{constructor(e,t){this.view=e,this.config=t,this.elements=[],this.spacer=null,this.dom=document.createElement("div"),this.dom.className="cm-gutter"+(this.config.class?" 
"+this.config.class:"");for(let i in t.domEventHandlers)this.dom.addEventListener(i,s=>{let r=e.lineBlockAtHeight(s.clientY-e.documentTop);t.domEventHandlers[i](e,r,s)&&s.preventDefault()});this.markers=Xo(t.markers(e)),t.initialSpacer&&(this.spacer=new Ah(e,0,0,[t.initialSpacer(e)]),this.dom.appendChild(this.spacer.dom),this.spacer.dom.style.cssText+="visibility: hidden; pointer-events: none")}update(e){let t=this.markers;if(this.markers=Xo(this.config.markers(e.view)),this.spacer&&this.config.updateSpacer){let s=this.config.updateSpacer(this.spacer.markers[0],e);s!=this.spacer.markers[0]&&this.spacer.update(e.view,0,0,[s])}let i=e.view.viewport;return!F.eq(this.markers,t,i.from,i.to)||(this.config.lineMarkerChange?this.config.lineMarkerChange(e):!1)}destroy(){for(let e of this.elements)e.destroy()}}class Ah{constructor(e,t,i,s){this.height=-1,this.above=0,this.markers=[],this.dom=document.createElement("div"),this.dom.className="cm-gutterElement",this.update(e,t,i,s)}update(e,t,i,s){this.height!=t&&(this.dom.style.height=(this.height=t)+"px"),this.above!=i&&(this.dom.style.marginTop=(this.above=i)?i+"px":""),Jd(this.markers,s)||this.setMarkers(e,s)}setMarkers(e,t){let i="cm-gutterElement",s=this.dom.firstChild;for(let r=0,o=0;;){let l=o,a=rr(l,a,h)||o(l,a,h):o}return i}})}});class as extends ct{constructor(e){super(),this.number=e}eq(e){return this.number==e.number}toDOM(){return document.createTextNode(this.number)}}function hs(n,e){return n.state.facet(Ft).formatNumber(e,n.state)}const Xd=wi.compute([Ft],n=>({class:"cm-lineNumbers",renderEmptyElements:!1,markers(e){return e.state.facet(Yd)},lineMarker(e,t,i){return i.some(s=>s.toDOM)?null:new as(hs(e,e.state.doc.lineAt(t.from).number))},lineMarkerChange:e=>e.startState.facet(Ft)!=e.state.facet(Ft),initialSpacer(e){return new as(hs(e,Qo(e.state.doc.lines)))},updateSpacer(e,t){let i=hs(t.view,Qo(t.view.state.doc.lines));return i==e.number?e:new as(i)},domEventHandlers:n.facet(Ft).domEventHandlers}));function 
Zd(n={}){return[Ft.of(n),Sh(),Xd]}function Qo(n){let e=9;for(;e{throw new Error("This node type doesn't define a deserialize function")})}add(e){if(this.perNode)throw new RangeError("Can't add per-node props to node types");return typeof e!="function"&&(e=ge.match(e)),t=>{let i=e(t);return i===void 0?null:[this,i]}}}L.closedBy=new L({deserialize:n=>n.split(" ")});L.openedBy=new L({deserialize:n=>n.split(" ")});L.group=new L({deserialize:n=>n.split(" ")});L.contextHash=new L({perNode:!0});L.lookAhead=new L({perNode:!0});L.mounted=new L({perNode:!0});class tp{constructor(e,t,i){this.tree=e,this.overlay=t,this.parser=i}}const ip=Object.create(null);class ge{constructor(e,t,i,s=0){this.name=e,this.props=t,this.id=i,this.flags=s}static define(e){let t=e.props&&e.props.length?Object.create(null):ip,i=(e.top?1:0)|(e.skipped?2:0)|(e.error?4:0)|(e.name==null?8:0),s=new ge(e.name||"",t,e.id,i);if(e.props){for(let r of e.props)if(Array.isArray(r)||(r=r(s)),r){if(r[0].perNode)throw new RangeError("Can't store a per-node prop on a node type");t[r[0].id]=r[1]}}return s}prop(e){return this.props[e.id]}get isTop(){return(this.flags&1)>0}get isSkipped(){return(this.flags&2)>0}get isError(){return(this.flags&4)>0}get isAnonymous(){return(this.flags&8)>0}is(e){if(typeof e=="string"){if(this.name==e)return!0;let t=this.prop(L.group);return t?t.indexOf(e)>-1:!1}return this.id==e}static match(e){let t=Object.create(null);for(let i in e)for(let s of i.split(" "))t[s]=e[i];return i=>{for(let s=i.prop(L.group),r=-1;r<(s?s.length:0);r++){let o=t[r<0?i.name:s[r]];if(o)return o}}}}ge.none=new ge("",Object.create(null),0,8);class Br{constructor(e){this.types=e;for(let t=0;t=s&&(o.type.isAnonymous||t(o)!==!1)){if(o.firstChild())continue;l=!0}for(;l&&i&&!o.type.isAnonymous&&i(o),!o.nextSibling();){if(!o.parent())return;l=!0}}}prop(e){return e.perNode?this.props?this.props[e.id]:void 0:this.type.prop(e)}get propValues(){let e=[];if(this.props)for(let t in 
this.props)e.push([+t,this.props[t]]);return e}balance(e={}){return this.children.length<=8?this:Rr(ge.none,this.children,this.positions,0,this.children.length,0,this.length,(t,i,s)=>new W(this.type,t,i,s,this.propValues),e.makeTree||((t,i,s)=>new W(ge.none,t,i,s)))}static build(e){return sp(e)}}W.empty=new W(ge.none,[],[],0);class Pr{constructor(e,t){this.buffer=e,this.index=t}get id(){return this.buffer[this.index-4]}get start(){return this.buffer[this.index-3]}get end(){return this.buffer[this.index-2]}get size(){return this.buffer[this.index-1]}get pos(){return this.index}next(){this.index-=4}fork(){return new Pr(this.buffer,this.index)}}class Lt{constructor(e,t,i){this.buffer=e,this.length=t,this.set=i}get type(){return ge.none}toString(){let e=[];for(let t=0;t0));a=o[a+3]);return l}slice(e,t,i){let s=this.buffer,r=new Uint16Array(t-e),o=0;for(let l=e,a=0;l=e&&te;case 1:return t<=e&&i>e;case 2:return i>e;case 4:return!0}}function Dh(n,e){let t=n.childBefore(e);for(;t;){let i=t.lastChild;if(!i||i.to!=t.to)break;i.type.isError&&i.from==i.to?(n=t,t=i.prevSibling):t=i}return n}function Yt(n,e,t,i){for(var s;n.from==n.to||(t<1?n.from>=e:n.from>e)||(t>-1?n.to<=e:n.to0?l.length:-1;e!=h;e+=t){let c=l[e],f=a[e]+o.from;if(Mh(s,i,f,f+c.length)){if(c instanceof Lt){if(r&Z.ExcludeBuffers)continue;let u=c.findChild(0,c.buffer.length,t,i-f,s);if(u>-1)return new ze(new np(o,c,e,f),null,u)}else if(r&Z.IncludeAnonymous||!c.type.isAnonymous||Er(c)){let u;if(!(r&Z.IgnoreMounts)&&c.props&&(u=c.prop(L.mounted))&&!u.overlay)return new Oe(u.tree,f,e,o);let d=new Oe(c,f,e,o);return r&Z.IncludeAnonymous||!d.type.isAnonymous?d:d.nextChild(t<0?c.children.length-1:0,t,i,s)}}}if(r&Z.IncludeAnonymous||!o.type.isAnonymous||(o.index>=0?e=o.index+t:e=t<0?-1:o._parent._tree.children.length,o=o._parent,!o))return null}}get firstChild(){return this.nextChild(0,1,0,4)}get lastChild(){return this.nextChild(this._tree.children.length-1,-1,0,4)}childAfter(e){return 
this.nextChild(0,1,e,2)}childBefore(e){return this.nextChild(this._tree.children.length-1,-1,e,-2)}enter(e,t,i=0){let s;if(!(i&Z.IgnoreOverlays)&&(s=this._tree.prop(L.mounted))&&s.overlay){let r=e-this.from;for(let{from:o,to:l}of s.overlay)if((t>0?o<=r:o=r:l>r))return new Oe(s.tree,s.overlay[0].from+this.from,-1,this)}return this.nextChild(0,1,e,t,i)}nextSignificantParent(){let e=this;for(;e.type.isAnonymous&&e._parent;)e=e._parent;return e}get parent(){return this._parent?this._parent.nextSignificantParent():null}get nextSibling(){return this._parent&&this.index>=0?this._parent.nextChild(this.index+1,1,0,4):null}get prevSibling(){return this._parent&&this.index>=0?this._parent.nextChild(this.index-1,-1,0,4):null}cursor(e=0){return new Ti(this,e)}get tree(){return this._tree}toTree(){return this._tree}resolve(e,t=0){return Yt(this,e,t,!1)}resolveInner(e,t=0){return Yt(this,e,t,!0)}enterUnfinishedNodesBefore(e){return Dh(this,e)}getChild(e,t=null,i=null){let s=Mn(this,e,t,i);return s.length?s[0]:null}getChildren(e,t=null,i=null){return Mn(this,e,t,i)}toString(){return this._tree.toString()}get node(){return this}matchContext(e){return Dn(this,e)}}function Mn(n,e,t,i){let s=n.cursor(),r=[];if(!s.firstChild())return r;if(t!=null){for(;!s.type.is(t);)if(!s.nextSibling())return r}for(;;){if(i!=null&&s.type.is(i))return r;if(s.type.is(e)&&r.push(s.node),!s.nextSibling())return i==null?r:[]}}function Dn(n,e,t=e.length-1){for(let i=n.parent;t>=0;i=i.parent){if(!i)return!1;if(!i.type.isAnonymous){if(e[t]&&e[t]!=i.name)return!1;t--}}return!0}class np{constructor(e,t,i,s){this.parent=e,this.buffer=t,this.index=i,this.start=s}}class ze{get name(){return this.type.name}get from(){return this.context.start+this.context.buffer.buffer[this.index+1]}get to(){return 
this.context.start+this.context.buffer.buffer[this.index+2]}constructor(e,t,i){this.context=e,this._parent=t,this.index=i,this.type=e.buffer.set.types[e.buffer.buffer[i]]}child(e,t,i){let{buffer:s}=this.context,r=s.findChild(this.index+4,s.buffer[this.index+3],e,t-this.context.start,i);return r<0?null:new ze(this.context,this,r)}get firstChild(){return this.child(1,0,4)}get lastChild(){return this.child(-1,0,4)}childAfter(e){return this.child(1,e,2)}childBefore(e){return this.child(-1,e,-2)}enter(e,t,i=0){if(i&Z.ExcludeBuffers)return null;let{buffer:s}=this.context,r=s.findChild(this.index+4,s.buffer[this.index+3],t>0?1:-1,e-this.context.start,t);return r<0?null:new ze(this.context,this,r)}get parent(){return this._parent||this.context.parent.nextSignificantParent()}externalSibling(e){return this._parent?null:this.context.parent.nextChild(this.context.index+e,e,0,4)}get nextSibling(){let{buffer:e}=this.context,t=e.buffer[this.index+3];return t<(this._parent?e.buffer[this._parent.index+3]:e.buffer.length)?new ze(this.context,this._parent,t):this.externalSibling(1)}get prevSibling(){let{buffer:e}=this.context,t=this._parent?this._parent.index+4:0;return this.index==t?this.externalSibling(-1):new ze(this.context,this._parent,e.findChild(t,this.index,-1,0,4))}cursor(e=0){return new Ti(this,e)}get tree(){return null}toTree(){let e=[],t=[],{buffer:i}=this.context,s=this.index+4,r=i.buffer[this.index+3];if(r>s){let o=i.buffer[this.index+1];e.push(i.slice(s,r,o)),t.push(0)}return new W(this.type,e,t,this.to-this.from)}resolve(e,t=0){return Yt(this,e,t,!1)}resolveInner(e,t=0){return Yt(this,e,t,!0)}enterUnfinishedNodesBefore(e){return Dh(this,e)}toString(){return this.context.buffer.childString(this.index)}getChild(e,t=null,i=null){let s=Mn(this,e,t,i);return s.length?s[0]:null}getChildren(e,t=null,i=null){return Mn(this,e,t,i)}get node(){return this}matchContext(e){return Dn(this,e)}}class Ti{get name(){return 
this.type.name}constructor(e,t=0){if(this.mode=t,this.buffer=null,this.stack=[],this.index=0,this.bufferNode=null,e instanceof Oe)this.yieldNode(e);else{this._tree=e.context.parent,this.buffer=e.context;for(let i=e._parent;i;i=i._parent)this.stack.unshift(i.index);this.bufferNode=e,this.yieldBuf(e.index)}}yieldNode(e){return e?(this._tree=e,this.type=e.type,this.from=e.from,this.to=e.to,!0):!1}yieldBuf(e,t){this.index=e;let{start:i,buffer:s}=this.buffer;return this.type=t||s.set.types[s.buffer[e]],this.from=i+s.buffer[e+1],this.to=i+s.buffer[e+2],!0}yield(e){return e?e instanceof Oe?(this.buffer=null,this.yieldNode(e)):(this.buffer=e.context,this.yieldBuf(e.index,e.type)):!1}toString(){return this.buffer?this.buffer.buffer.childString(this.index):this._tree.toString()}enterChild(e,t,i){if(!this.buffer)return this.yield(this._tree.nextChild(e<0?this._tree._tree.children.length-1:0,e,t,i,this.mode));let{buffer:s}=this.buffer,r=s.findChild(this.index+4,s.buffer[this.index+3],e,t-this.buffer.start,i);return r<0?!1:(this.stack.push(this.index),this.yieldBuf(r))}firstChild(){return this.enterChild(1,0,4)}lastChild(){return this.enterChild(-1,0,4)}childAfter(e){return this.enterChild(1,e,2)}childBefore(e){return this.enterChild(-1,e,-2)}enter(e,t,i=this.mode){return this.buffer?i&Z.ExcludeBuffers?!1:this.enterChild(1,e,t):this.yield(this._tree.enter(e,t,i))}parent(){if(!this.buffer)return this.yieldNode(this.mode&Z.IncludeAnonymous?this._tree._parent:this._tree.parent);if(this.stack.length)return this.yieldBuf(this.stack.pop());let e=this.mode&Z.IncludeAnonymous?this.buffer.parent:this.buffer.parent.nextSignificantParent();return this.buffer=null,this.yieldNode(e)}sibling(e){if(!this.buffer)return this._tree._parent?this.yield(this._tree.index<0?null:this._tree._parent.nextChild(this._tree.index+e,e,0,4,this.mode)):!1;let{buffer:t}=this.buffer,i=this.stack.length-1;if(e<0){let s=i<0?0:this.stack[i]+4;if(this.index!=s)return 
this.yieldBuf(t.findChild(s,this.index,-1,0,4))}else{let s=t.buffer[this.index+3];if(s<(i<0?t.buffer.length:t.buffer[this.stack[i]+3]))return this.yieldBuf(s)}return i<0?this.yield(this.buffer.parent.nextChild(this.buffer.index+e,e,0,4,this.mode)):!1}nextSibling(){return this.sibling(1)}prevSibling(){return this.sibling(-1)}atLastNode(e){let t,i,{buffer:s}=this;if(s){if(e>0){if(this.index-1)for(let r=t+e,o=e<0?-1:i._tree.children.length;r!=o;r+=e){let l=i._tree.children[r];if(this.mode&Z.IncludeAnonymous||l instanceof Lt||!l.type.isAnonymous||Er(l))return!1}return!0}move(e,t){if(t&&this.enterChild(e,0,4))return!0;for(;;){if(this.sibling(e))return!0;if(this.atLastNode(e)||!this.parent())return!1}}next(e=!0){return this.move(1,e)}prev(e=!0){return this.move(-1,e)}moveTo(e,t=0){for(;(this.from==this.to||(t<1?this.from>=e:this.from>e)||(t>-1?this.to<=e:this.to=0;){for(let o=e;o;o=o._parent)if(o.index==s){if(s==this.index)return o;t=o,i=r+1;break e}s=this.stack[--r]}for(let s=i;s=0;r--){if(r<0)return Dn(this.node,e,s);let o=i[t.buffer[this.stack[r]]];if(!o.isAnonymous){if(e[s]&&e[s]!=o.name)return!1;s--}}return!0}}function Er(n){return n.children.some(e=>e instanceof Lt||!e.type.isAnonymous||Er(e))}function sp(n){var e;let{buffer:t,nodeSet:i,maxBufferLength:s=Qd,reused:r=[],minRepeatType:o=i.types.length}=n,l=Array.isArray(t)?new Pr(t,t.length):t,a=i.types,h=0,c=0;function f(k,C,T,B,j){let{id:I,start:P,end:V,size:K}=l,X=c;for(;K<0;)if(l.next(),K==-1){let U=r[I];T.push(U),B.push(P-k);return}else if(K==-3){h=I;return}else if(K==-4){c=I;return}else throw new RangeError(`Unrecognized record size: ${K}`);let M=a[I],$,G,se=P-k;if(V-P<=s&&(G=g(l.pos-C,j))){let U=new Uint16Array(G.size-G.skip),ee=l.pos-G.size,Je=U.length;for(;l.pos>ee;)Je=y(G.start,U,Je);$=new Lt(U,V-G.start,i),se=G.start-k}else{let U=l.pos-K;l.next();let 
ee=[],Je=[],dt=I>=o?I:-1,It=0,Fi=V;for(;l.pos>U;)dt>=0&&l.id==dt&&l.size>=0?(l.end<=Fi-s&&(d(ee,Je,P,It,l.end,Fi,dt,X),It=ee.length,Fi=l.end),l.next()):f(P,U,ee,Je,dt);if(dt>=0&&It>0&&It-1&&It>0){let Xr=u(M);$=Rr(M,ee,Je,0,ee.length,0,V-P,Xr,Xr)}else $=p(M,ee,Je,V-P,X-V)}T.push($),B.push(se)}function u(k){return(C,T,B)=>{let j=0,I=C.length-1,P,V;if(I>=0&&(P=C[I])instanceof W){if(!I&&P.type==k&&P.length==B)return P;(V=P.prop(L.lookAhead))&&(j=T[I]+P.length+V)}return p(k,C,T,B,j)}}function d(k,C,T,B,j,I,P,V){let K=[],X=[];for(;k.length>B;)K.push(k.pop()),X.push(C.pop()+T-j);k.push(p(i.types[P],K,X,I-j,V-I)),C.push(j-T)}function p(k,C,T,B,j=0,I){if(h){let P=[L.contextHash,h];I=I?[P].concat(I):[P]}if(j>25){let P=[L.lookAhead,j];I=I?[P].concat(I):[P]}return new W(k,C,T,B,I)}function g(k,C){let T=l.fork(),B=0,j=0,I=0,P=T.end-s,V={size:0,start:0,skip:0};e:for(let K=T.pos-k;T.pos>K;){let X=T.size;if(T.id==C&&X>=0){V.size=B,V.start=j,V.skip=I,I+=4,B+=4,T.next();continue}let M=T.pos-X;if(X<0||M=o?4:0,G=T.start;for(T.next();T.pos>M;){if(T.size<0)if(T.size==-3)$+=4;else break e;else T.id>=o&&($+=4);T.next()}j=G,B+=X,I+=$}return(C<0||B==k)&&(V.size=B,V.start=j,V.skip=I),V.size>4?V:void 0}function y(k,C,T){let{id:B,start:j,end:I,size:P}=l;if(l.next(),P>=0&&B4){let K=l.pos-(P-4);for(;l.pos>K;)T=y(k,C,T)}C[--T]=V,C[--T]=I-k,C[--T]=j-k,C[--T]=B}else P==-3?h=B:P==-4&&(c=B);return T}let b=[],v=[];for(;l.pos>0;)f(n.start||0,n.bufferStart||0,b,v,-1);let S=(e=n.length)!==null&&e!==void 0?e:b.length?v[0]+b[0].length:0;return new W(a[n.topID],b.reverse(),v.reverse(),S)}const tl=new WeakMap;function fn(n,e){if(!n.isAnonymous||e instanceof Lt||e.type!=n)return 1;let t=tl.get(e);if(t==null){t=1;for(let i of e.children){if(i.type!=n||!(i instanceof W)){t=1;break}t+=fn(n,i)}tl.set(e,t)}return t}function Rr(n,e,t,i,s,r,o,l,a){let h=0;for(let p=i;p=c)break;T+=B}if(S==k+1){if(T>c){let B=p[k];d(B.children,B.positions,0,B.children.length,g[k]+v);continue}f.push(p[k])}else{let 
B=g[S-1]+p[S-1].length-C;f.push(Rr(n,p,g,k,S,C,B,null,a))}u.push(C+v-r)}}return d(e,t,i,s,0),(l||a)(f,u,o)}class db{constructor(){this.map=new WeakMap}setBuffer(e,t,i){let s=this.map.get(e);s||this.map.set(e,s=new Map),s.set(t,i)}getBuffer(e,t){let i=this.map.get(e);return i&&i.get(t)}set(e,t){e instanceof ze?this.setBuffer(e.context.buffer,e.index,t):e instanceof Oe&&this.map.set(e.tree,t)}get(e){return e instanceof ze?this.getBuffer(e.context.buffer,e.index):e instanceof Oe?this.map.get(e.tree):void 0}cursorSet(e,t){e.buffer?this.setBuffer(e.buffer.buffer,e.index,t):this.map.set(e.tree,t)}cursorGet(e){return e.buffer?this.getBuffer(e.buffer.buffer,e.index):this.map.get(e.tree)}}class Qe{constructor(e,t,i,s,r=!1,o=!1){this.from=e,this.to=t,this.tree=i,this.offset=s,this.open=(r?1:0)|(o?2:0)}get openStart(){return(this.open&1)>0}get openEnd(){return(this.open&2)>0}static addTree(e,t=[],i=!1){let s=[new Qe(0,e.length,e,0,!1,i)];for(let r of t)r.to>e.length&&s.push(r);return s}static applyChanges(e,t,i=128){if(!t.length)return e;let s=[],r=1,o=e.length?e[0]:null;for(let l=0,a=0,h=0;;l++){let c=l=i)for(;o&&o.from=u.from||f<=u.to||h){let d=Math.max(u.from,a)-h,p=Math.min(u.to,f)-h;u=d>=p?null:new Qe(d,p,u.tree,u.offset+h,l>0,!!c)}if(u&&s.push(u),o.to>f)break;o=rnew Me(s.from,s.to)):[new Me(0,0)]:[new Me(0,e.length)],this.createParse(e,t||[],i)}parse(e,t,i){let s=this.startParse(e,t,i);for(;;){let r=s.advance();if(r)return r}}}class rp{constructor(e){this.string=e}get length(){return this.string.length}chunk(e){return this.string.slice(e)}get lineChunks(){return!1}read(e,t){return this.string.slice(e,t)}}function pb(n){return(e,t,i,s)=>new lp(e,n,t,i,s)}class il{constructor(e,t,i,s,r){this.parser=e,this.parse=t,this.overlay=i,this.target=s,this.ranges=r}}class op{constructor(e,t,i,s,r,o,l){this.parser=e,this.predicate=t,this.mounts=i,this.index=s,this.start=r,this.target=o,this.prev=l,this.depth=0,this.ranges=[]}}const lr=new L({perNode:!0});class 
lp{constructor(e,t,i,s,r){this.nest=t,this.input=i,this.fragments=s,this.ranges=r,this.inner=[],this.innerDone=0,this.baseTree=null,this.stoppedAt=null,this.baseParse=e}advance(){if(this.baseParse){let i=this.baseParse.advance();if(!i)return null;if(this.baseParse=null,this.baseTree=i,this.startInner(),this.stoppedAt!=null)for(let s of this.inner)s.parse.stopAt(this.stoppedAt)}if(this.innerDone==this.inner.length){let i=this.baseTree;return this.stoppedAt!=null&&(i=new W(i.type,i.children,i.positions,i.length,i.propValues.concat([[lr,this.stoppedAt]]))),i}let e=this.inner[this.innerDone],t=e.parse.advance();if(t){this.innerDone++;let i=Object.assign(Object.create(null),e.target.props);i[L.mounted.id]=new tp(t,e.overlay,e.parser),e.target.props=i}return null}get parsedPos(){if(this.baseParse)return 0;let e=this.input.length;for(let t=this.innerDone;tc.frag.from<=s.from&&c.frag.to>=s.to&&c.mount.overlay);if(h)for(let c of h.mount.overlay){let f=c.from+h.pos,u=c.to+h.pos;f>=s.from&&u<=s.to&&!t.ranges.some(d=>d.fromf)&&t.ranges.push({from:f,to:u})}}l=!1}else if(i&&(o=ap(i.ranges,s.from,s.to)))l=o!=2;else if(!s.type.isAnonymous&&s.fromnew Me(f.from-s.from,f.to-s.from)):null,s.tree,c)),r.overlay?c.length&&(i={ranges:c,depth:0,prev:i}):l=!1}}else t&&(a=t.predicate(s))&&(a===!0&&(a=new Me(s.from,s.to)),a.fromnew Me(c.from-t.start,c.to-t.start)),t.target,h)),t=t.prev}i&&!--i.depth&&(i=i.prev)}}}}function ap(n,e,t){for(let i of n){if(i.from>=t)break;if(i.to>e)return i.from<=e&&i.to>=t?2:1}return 0}function nl(n,e,t,i,s,r){if(e=e.to);i++);let o=s.children[i],l=o.buffer;function a(h,c,f,u,d){let p=h;for(;l[p+2]+r<=e.from;)p=l[p+3];let g=[],y=[];nl(o,h,p,g,y,u);let b=l[p+1],v=l[p+2],S=b+r==e.from&&v+r==e.to&&l[p]==e.type.id;return g.push(S?e.toTree():a(p+4,l[p+3],o.set.types[l[p]],b,v-b)),y.push(b-u),nl(o,l[p+3],c,g,y,u),new W(f,g,y,d)}s.children[i]=a(0,l.length,ge.none,0,o.length);for(let h=0;h<=t;h++)n.childAfter(e.from)}class 
sl{constructor(e,t){this.offset=t,this.done=!1,this.cursor=e.cursor(Z.IncludeAnonymous|Z.IgnoreMounts)}moveTo(e){let{cursor:t}=this,i=e-this.offset;for(;!this.done&&t.from=e&&t.enter(i,1,Z.IgnoreOverlays|Z.ExcludeBuffers)||t.next(!1)||(this.done=!0)}hasNode(e){if(this.moveTo(e.from),!this.done&&this.cursor.from+this.offset==e.from&&this.cursor.tree)for(let t=this.cursor.tree;;){if(t==e.tree)return!0;if(t.children.length&&t.positions[0]==0&&t.children[0]instanceof W)t=t.children[0];else break}return!1}}class cp{constructor(e){var t;if(this.fragments=e,this.curTo=0,this.fragI=0,e.length){let i=this.curFrag=e[0];this.curTo=(t=i.tree.prop(lr))!==null&&t!==void 0?t:i.to,this.inner=new sl(i.tree,-i.offset)}else this.curFrag=this.inner=null}hasNode(e){for(;this.curFrag&&e.from>=this.curTo;)this.nextFrag();return this.curFrag&&this.curFrag.from<=e.from&&this.curTo>=e.to&&this.inner.hasNode(e)}nextFrag(){var e;if(this.fragI++,this.fragI==this.fragments.length)this.curFrag=this.inner=null;else{let t=this.curFrag=this.fragments[this.fragI];this.curTo=(e=t.tree.prop(lr))!==null&&e!==void 0?e:t.to,this.inner=new sl(t.tree,-t.offset)}}findMounts(e,t){var i;let s=[];if(this.inner){this.inner.cursor.moveTo(e,1);for(let r=this.inner.cursor.node;r;r=r.parent){let o=(i=r.tree)===null||i===void 0?void 0:i.prop(L.mounted);if(o&&o.parser==t)for(let l=this.fragI;l=r.to)break;a.tree==this.curFrag.tree&&s.push({frag:a,pos:r.from-a.offset,mount:o})}}}return s}}function rl(n,e){let t=null,i=e;for(let s=1,r=0;s=l)break;a.to<=o||(t||(i=t=e.slice()),a.froml&&t.splice(r+1,0,new Me(l,a.to))):a.to>l?t[r--]=new Me(l,a.to):t.splice(r--,1))}}return i}function fp(n,e,t,i){let s=0,r=0,o=!1,l=!1,a=-1e9,h=[];for(;;){let c=s==n.length?1e9:o?n[s].to:n[s].from,f=r==e.length?1e9:l?e[r].to:e[r].from;if(o!=l){let u=Math.max(a,t),d=Math.min(c,f,i);unew Me(u.from+i,u.to+i)),f=fp(e,c,a,h);for(let u=0,d=a;;u++){let p=u==f.length,g=p?h:f[u].from;if(g>d&&t.push(new 
Qe(d,g,s.tree,-o,r.from>=d||r.openStart,r.to<=g||r.openEnd)),p)break;d=f[u].to}}else t.push(new Qe(a,h,s.tree,-o,r.from>=o||r.openStart,r.to<=l||r.openEnd))}return t}let up=0;class Fe{constructor(e,t,i){this.set=e,this.base=t,this.modified=i,this.id=up++}static define(e){if(e?.base)throw new Error("Can not derive from a modified tag");let t=new Fe([],null,[]);if(t.set.push(t),e)for(let i of e.set)t.set.push(i);return t}static defineModifier(){let e=new Tn;return t=>t.modified.indexOf(e)>-1?t:Tn.get(t.base||t,t.modified.concat(e).sort((i,s)=>i.id-s.id))}}let dp=0;class Tn{constructor(){this.instances=[],this.id=dp++}static get(e,t){if(!t.length)return e;let i=t[0].instances.find(l=>l.base==e&&pp(t,l.modified));if(i)return i;let s=[],r=new Fe(s,e,t);for(let l of t)l.instances.push(r);let o=mp(t);for(let l of e.set)if(!l.modified.length)for(let a of o)s.push(Tn.get(l,a));return r}}function pp(n,e){return n.length==e.length&&n.every((t,i)=>t==e[i])}function mp(n){let e=[[]];for(let t=0;ti.length-t.length)}function gp(n){let e=Object.create(null);for(let t in n){let i=n[t];Array.isArray(i)||(i=[i]);for(let s of t.split(" "))if(s){let r=[],o=2,l=s;for(let f=0;;){if(l=="..."&&f>0&&f+3==s.length){o=1;break}let u=/^"(?:[^"\\]|\\.)*?"|[^\/!]+/.exec(l);if(!u)throw new RangeError("Invalid path: "+s);if(r.push(u[0]=="*"?"":u[0][0]=='"'?JSON.parse(u[0]):u[0]),f+=u[0].length,f==s.length)break;let d=s[f++];if(f==s.length&&d=="!"){o=0;break}if(d!="/")throw new RangeError("Invalid path: "+s);l=s.slice(f)}let a=r.length-1,h=r[a];if(!h)throw new RangeError("Invalid path: "+s);let c=new On(i,o,a>0?r.slice(0,a):null);e[h]=c.sort(e[h])}}return Oh.add(e)}const Oh=new L;class On{constructor(e,t,i,s){this.tags=e,this.mode=t,this.context=i,this.next=s}get opaque(){return this.mode==0}get inherit(){return this.mode==1}sort(e){return!e||e.depth{let o=s;for(let l of r)for(let a of l.set){let h=t[a.id];if(h){o=o?o+" "+h:h;break}}return o},scope:i}}function yp(n,e){let t=null;for(let i of n){let 
s=i.style(e);s&&(t=t?t+" "+s:s)}return t}function bp(n,e,t,i=0,s=n.length){let r=new wp(i,Array.isArray(e)?e:[e],t);r.highlightRange(n.cursor(),i,s,"",r.highlighters),r.flush(s)}class wp{constructor(e,t,i){this.at=e,this.highlighters=t,this.span=i,this.class=""}startSpan(e,t){t!=this.class&&(this.flush(e),e>this.at&&(this.at=e),this.class=t)}flush(e){e>this.at&&this.class&&this.span(this.at,e,this.class)}highlightRange(e,t,i,s,r){let{type:o,from:l,to:a}=e;if(l>=i||a<=t)return;o.isTop&&(r=this.highlighters.filter(d=>!d.scope||d.scope(o)));let h=s,c=kp(e)||On.empty,f=yp(r,c.tags);if(f&&(h&&(h+=" "),h+=f,c.mode==1&&(s+=(s?" ":"")+f)),this.startSpan(e.from,h),c.opaque)return;let u=e.tree&&e.tree.prop(L.mounted);if(u&&u.overlay){let d=e.node.enter(u.overlay[0].from+l,1),p=this.highlighters.filter(y=>!y.scope||y.scope(u.tree.type)),g=e.firstChild();for(let y=0,b=l;;y++){let v=y=S||!e.nextSibling())););if(!v||S>i)break;b=v.to+l,b>t&&(this.highlightRange(d.cursor(),Math.max(t,v.from+l),Math.min(i,b),s,p),this.startSpan(b,h))}g&&e.parent()}else if(e.firstChild()){do if(!(e.to<=t)){if(e.from>=i)break;this.highlightRange(e,t,i,s,r),this.startSpan(Math.min(i,e.to),h)}while(e.nextSibling());e.parent()}}}function kp(n){let e=n.type.prop(Oh);for(;e&&e.context&&!n.matchContext(e.context);)e=e.next;return e||null}const 
x=Fe.define,Xi=x(),et=x(),ll=x(et),al=x(et),tt=x(),Zi=x(tt),cs=x(tt),_e=x(),pt=x(_e),Ie=x(),Ne=x(),ar=x(),oi=x(ar),Qi=x(),m={comment:Xi,lineComment:x(Xi),blockComment:x(Xi),docComment:x(Xi),name:et,variableName:x(et),typeName:ll,tagName:x(ll),propertyName:al,attributeName:x(al),className:x(et),labelName:x(et),namespace:x(et),macroName:x(et),literal:tt,string:Zi,docString:x(Zi),character:x(Zi),attributeValue:x(Zi),number:cs,integer:x(cs),float:x(cs),bool:x(tt),regexp:x(tt),escape:x(tt),color:x(tt),url:x(tt),keyword:Ie,self:x(Ie),null:x(Ie),atom:x(Ie),unit:x(Ie),modifier:x(Ie),operatorKeyword:x(Ie),controlKeyword:x(Ie),definitionKeyword:x(Ie),moduleKeyword:x(Ie),operator:Ne,derefOperator:x(Ne),arithmeticOperator:x(Ne),logicOperator:x(Ne),bitwiseOperator:x(Ne),compareOperator:x(Ne),updateOperator:x(Ne),definitionOperator:x(Ne),typeOperator:x(Ne),controlOperator:x(Ne),punctuation:ar,separator:x(ar),bracket:oi,angleBracket:x(oi),squareBracket:x(oi),paren:x(oi),brace:x(oi),content:_e,heading:pt,heading1:x(pt),heading2:x(pt),heading3:x(pt),heading4:x(pt),heading5:x(pt),heading6:x(pt),contentSeparator:x(_e),list:x(_e),quote:x(_e),emphasis:x(_e),strong:x(_e),link:x(_e),monospace:x(_e),strikethrough:x(_e),inserted:x(),deleted:x(),changed:x(),invalid:x(),meta:Qi,documentMeta:x(Qi),annotation:x(Qi),processingInstruction:x(Qi),definition:Fe.defineModifier(),constant:Fe.defineModifier(),function:Fe.defineModifier(),standard:Fe.defineModifier(),local:Fe.defineModifier(),special:Fe.defineModifier()};Bh([{tag:m.link,class:"tok-link"},{tag:m.heading,class:"tok-heading"},{tag:m.emphasis,class:"tok-emphasis"},{tag:m.strong,class:"tok-strong"},{tag:m.keyword,class:"tok-keyword"},{tag:m.atom,class:"tok-atom"},{tag:m.bool,class:"tok-bool"},{tag:m.url,class:"tok-url"},{tag:m.labelName,class:"tok-labelName"},{tag:m.inserted,class:"tok-inserted"},{tag:m.deleted,class:"tok-deleted"},{tag:m.literal,class:"tok-literal"},{tag:m.string,class:"tok-string"},{tag:m.number,class:"tok-number"},{tag:[m
.regexp,m.escape,m.special(m.string)],class:"tok-string2"},{tag:m.variableName,class:"tok-variableName"},{tag:m.local(m.variableName),class:"tok-variableName tok-local"},{tag:m.definition(m.variableName),class:"tok-variableName tok-definition"},{tag:m.special(m.variableName),class:"tok-variableName2"},{tag:m.definition(m.propertyName),class:"tok-propertyName tok-definition"},{tag:m.typeName,class:"tok-typeName"},{tag:m.namespace,class:"tok-namespace"},{tag:m.className,class:"tok-className"},{tag:m.macroName,class:"tok-macroName"},{tag:m.propertyName,class:"tok-propertyName"},{tag:m.operator,class:"tok-operator"},{tag:m.comment,class:"tok-comment"},{tag:m.meta,class:"tok-meta"},{tag:m.invalid,class:"tok-invalid"},{tag:m.punctuation,class:"tok-punctuation"}]);var fs;const wt=new L;function Ph(n){return D.define({combine:n?e=>e.concat(n):void 0})}const vp=new L;class De{constructor(e,t,i=[],s=""){this.data=e,this.name=s,N.prototype.hasOwnProperty("tree")||Object.defineProperty(N.prototype,"tree",{get(){return ae(this)}}),this.parser=t,this.extension=[ft.of(this),N.languageData.of((r,o,l)=>{let a=hl(r,o,l),h=a.type.prop(wt);if(!h)return[];let c=r.facet(h),f=a.type.prop(vp);if(f){let u=a.resolve(o-a.from,l);for(let d of f)if(d.test(u,r)){let p=r.facet(d.facet);return d.type=="replace"?p:p.concat(c)}}return c})].concat(i)}isActiveAt(e,t,i=-1){return hl(e,t,i).type.prop(wt)==this.data}findRegions(e){let t=e.facet(ft);if(t?.data==this.data)return[{from:0,to:e.doc.length}];if(!t||!t.allowsNesting)return[];let i=[],s=(r,o)=>{if(r.prop(wt)==this.data){i.push({from:o,to:o+r.length});return}let l=r.prop(L.mounted);if(l){if(l.tree.prop(wt)==this.data){if(l.overlay)for(let a of l.overlay)i.push({from:a.from+o,to:a.to+o});else i.push({from:o,to:o+r.length});return}else if(l.overlay){let a=i.length;if(s(l.tree,l.overlay[0].from+o),i.length>a)return}}for(let a=0;ai.isTop?t:void 0)]}),e.name)}configure(e,t){return new hr(this.data,this.parser.configure(e),t||this.name)}get 
allowsNesting(){return this.parser.hasWrappers()}}function ae(n){let e=n.field(De.state,!1);return e?e.tree:W.empty}class xp{constructor(e){this.doc=e,this.cursorPos=0,this.string="",this.cursor=e.iter()}get length(){return this.doc.length}syncTo(e){return this.string=this.cursor.next(e-this.cursorPos).value,this.cursorPos=e+this.string.length,this.cursorPos-this.string.length}chunk(e){return this.syncTo(e),this.string}get lineChunks(){return!0}read(e,t){let i=this.cursorPos-this.string.length;return e=this.cursorPos?this.doc.sliceString(e,t):this.string.slice(e-i,t-i)}}let li=null;class Xt{constructor(e,t,i=[],s,r,o,l,a){this.parser=e,this.state=t,this.fragments=i,this.tree=s,this.treeLen=r,this.viewport=o,this.skipped=l,this.scheduleOn=a,this.parse=null,this.tempSkipped=[]}static create(e,t,i){return new Xt(e,t,[],W.empty,0,i,[],null)}startParse(){return this.parser.startParse(new xp(this.state.doc),this.fragments)}work(e,t){return t!=null&&t>=this.state.doc.length&&(t=void 0),this.tree!=W.empty&&this.isDone(t??this.state.doc.length)?(this.takeTree(),!0):this.withContext(()=>{var i;if(typeof e=="number"){let s=Date.now()+e;e=()=>Date.now()>s}for(this.parse||(this.parse=this.startParse()),t!=null&&(this.parse.stoppedAt==null||this.parse.stoppedAt>t)&&t=this.treeLen&&((this.parse.stoppedAt==null||this.parse.stoppedAt>e)&&this.parse.stopAt(e),this.withContext(()=>{for(;!(t=this.parse.advance()););}),this.treeLen=e,this.tree=t,this.fragments=this.withoutTempSkipped(Qe.addTree(this.tree,this.fragments,!0)),this.parse=null)}withContext(e){let t=li;li=this;try{return e()}finally{li=t}}withoutTempSkipped(e){for(let t;t=this.tempSkipped.pop();)e=cl(e,t.from,t.to);return e}changes(e,t){let{fragments:i,tree:s,treeLen:r,viewport:o,skipped:l}=this;if(this.takeTree(),!e.empty){let a=[];if(e.iterChangedRanges((h,c,f,u)=>a.push({fromA:h,toA:c,fromB:f,toB:u})),i=Qe.applyChanges(i,a),s=W.empty,r=0,o={from:e.mapPos(o.from,-1),to:e.mapPos(o.to,1)},this.skipped.length){l=[];for(let h 
of this.skipped){let c=e.mapPos(h.from,1),f=e.mapPos(h.to,-1);ce.from&&(this.fragments=cl(this.fragments,s,r),this.skipped.splice(i--,1))}return this.skipped.length>=t?!1:(this.reset(),!0)}reset(){this.parse&&(this.takeTree(),this.parse=null)}skipUntilInView(e,t){this.skipped.push({from:e,to:t})}static getSkippingParser(e){return new class extends Th{createParse(t,i,s){let r=s[0].from,o=s[s.length-1].to;return{parsedPos:r,advance(){let a=li;if(a){for(let h of s)a.tempSkipped.push(h);e&&(a.scheduleOn=a.scheduleOn?Promise.all([a.scheduleOn,e]):e)}return this.parsedPos=o,new W(ge.none,[],[],o-r)},stoppedAt:null,stopAt(){}}}}}isDone(e){e=Math.min(e,this.state.doc.length);let t=this.fragments;return this.treeLen>=e&&t.length&&t[0].from==0&&t[0].to>=e}static get(){return li}}function cl(n,e,t){return Qe.applyChanges(n,[{fromA:e,toA:t,fromB:e,toB:t}])}class Zt{constructor(e){this.context=e,this.tree=e.tree}apply(e){if(!e.docChanged&&this.tree==this.context.tree)return this;let t=this.context.changes(e.changes,e.state),i=this.context.treeLen==e.startState.doc.length?void 0:Math.max(e.changes.mapPos(this.context.treeLen),t.viewport.to);return t.work(20,i)||t.takeTree(),new Zt(t)}static init(e){let t=Math.min(3e3,e.doc.length),i=Xt.create(e.facet(ft).parser,e,{from:0,to:t});return i.work(20,t)||i.takeTree(),new Zt(i)}}De.state=be.define({create:Zt.init,update(n,e){for(let t of e.effects)if(t.is(De.setState))return t.value;return e.startState.facet(ft)!=e.state.facet(ft)?Zt.init(e.state):n.apply(e)}});let Eh=n=>{let e=setTimeout(()=>n(),500);return()=>clearTimeout(e)};typeof requestIdleCallback<"u"&&(Eh=n=>{let e=-1,t=setTimeout(()=>{e=requestIdleCallback(n,{timeout:500-100})},100);return()=>e<0?clearTimeout(t):cancelIdleCallback(e)});const us=typeof navigator<"u"&&(!((fs=navigator.scheduling)===null||fs===void 
0)&&fs.isInputPending)?()=>navigator.scheduling.isInputPending():null,Sp=ue.fromClass(class{constructor(e){this.view=e,this.working=null,this.workScheduled=0,this.chunkEnd=-1,this.chunkBudget=-1,this.work=this.work.bind(this),this.scheduleWork()}update(e){let t=this.view.state.field(De.state).context;(t.updateViewport(e.view.viewport)||this.view.viewport.to>t.treeLen)&&this.scheduleWork(),e.docChanged&&(this.view.hasFocus&&(this.chunkBudget+=50),this.scheduleWork()),this.checkAsyncSchedule(t)}scheduleWork(){if(this.working)return;let{state:e}=this.view,t=e.field(De.state);(t.tree!=t.context.tree||!t.context.isDone(e.doc.length))&&(this.working=Eh(this.work))}work(e){this.working=null;let t=Date.now();if(this.chunkEnds+1e3,a=r.context.work(()=>us&&us()||Date.now()>o,s+(l?0:1e5));this.chunkBudget-=Date.now()-t,(a||this.chunkBudget<=0)&&(r.context.takeTree(),this.view.dispatch({effects:De.setState.of(new Zt(r.context))})),this.chunkBudget>0&&!(a&&!l)&&this.scheduleWork(),this.checkAsyncSchedule(r.context)}checkAsyncSchedule(e){e.scheduleOn&&(this.workScheduled++,e.scheduleOn.then(()=>this.scheduleWork()).catch(t=>Ee(this.view.state,t)).then(()=>this.workScheduled--),e.scheduleOn=null)}destroy(){this.working&&this.working()}isWorking(){return!!(this.working||this.workScheduled>0)}},{eventHandlers:{focus(){this.scheduleWork()}}}),ft=D.define({combine(n){return n.length?n[0]:null},enables:n=>[De.state,Sp,O.contentAttributes.compute([n],e=>{let t=e.facet(n);return t&&t.name?{"data-language":t.name}:{}})]});class gb{constructor(e,t=[]){this.language=e,this.support=t,this.extension=[e,t]}}class Rh{constructor(e,t,i,s,r,o=void 0){this.name=e,this.alias=t,this.extensions=i,this.filename=s,this.loadFunc=r,this.support=o,this.loading=null}load(){return this.loading||(this.loading=this.loadFunc().then(e=>this.support=e,e=>{throw this.loading=null,e}))}static of(e){let{load:t,support:i}=e;if(!t){if(!i)throw new RangeError("Must pass either 'load' or 'support' to 
LanguageDescription.of");t=()=>Promise.resolve(i)}return new Rh(e.name,(e.alias||[]).concat(e.name).map(s=>s.toLowerCase()),e.extensions||[],e.filename,t,i)}static matchFilename(e,t){for(let s of e)if(s.filename&&s.filename.test(t))return s;let i=/\.([^.]+)$/.exec(t);if(i){for(let s of e)if(s.extensions.indexOf(i[1])>-1)return s}return null}static matchLanguageName(e,t,i=!0){t=t.toLowerCase();for(let s of e)if(s.alias.some(r=>r==t))return s;if(i)for(let s of e)for(let r of s.alias){let o=t.indexOf(r);if(o>-1&&(r.length>2||!/\w/.test(t[o-1])&&!/\w/.test(t[o+r.length])))return s}return null}}const Lh=D.define(),Fn=D.define({combine:n=>{if(!n.length)return" ";let e=n[0];if(!e||/\S/.test(e)||Array.from(e).some(t=>t!=e[0]))throw new Error("Invalid indent unit: "+JSON.stringify(n[0]));return e}});function Mt(n){let e=n.facet(Fn);return e.charCodeAt(0)==9?n.tabSize*e.length:e.length}function Oi(n,e){let t="",i=n.tabSize,s=n.facet(Fn)[0];if(s==" "){for(;e>=i;)t+=" ",e-=i;s=" "}for(let r=0;r=i.from&&s<=i.to?r&&s==e?{text:"",from:e}:(t<0?s-1&&(r+=o-this.countColumn(i,i.search(/\S|$/))),r}countColumn(e,t=e.length){return Li(e,this.state.tabSize,t)}lineIndent(e,t=1){let{text:i,from:s}=this.lineAt(e,t),r=this.options.overrideIndentation;if(r){let o=r(s);if(o>-1)return o}return this.countColumn(i,i.search(/\S|$/))}get simulatedBreak(){return this.options.simulateBreak||null}}const Cp=new L;function Ap(n,e,t){return Ih(e.resolveInner(t).enterUnfinishedNodesBefore(t),t,n)}function Mp(n){return n.pos==n.options.simulateBreak&&n.options.simulateDoubleBreak}function Dp(n){let e=n.type.prop(Cp);if(e)return e;let t=n.firstChild,i;if(t&&(i=t.type.prop(L.closedBy))){let s=n.lastChild,r=s&&i.indexOf(s.name)>-1;return o=>Nh(o,!0,1,void 0,r&&!Mp(o)?s.from:void 0)}return n.parent==null?Tp:null}function Ih(n,e,t){for(;n;n=n.parent){let i=Dp(n);if(i)return i(Ir.create(t,e,n))}return null}function Tp(){return 0}class Ir extends 
Hn{constructor(e,t,i){super(e.state,e.options),this.base=e,this.pos=t,this.node=i}static create(e,t,i){return new Ir(e,t,i)}get textAfter(){return this.textAfterPos(this.pos)}get baseIndent(){let e=this.state.doc.lineAt(this.node.from);for(;;){let t=this.node.resolve(e.from);for(;t.parent&&t.parent.from==t.from;)t=t.parent;if(Op(t,this.node))break;e=this.state.doc.lineAt(t.from)}return this.lineIndent(e.from)}continue(){let e=this.node.parent;return e?Ih(e,this.pos,this.base):0}}function Op(n,e){for(let t=e;t;t=t.parent)if(n==t)return!0;return!1}function Bp(n){let e=n.node,t=e.childAfter(e.from),i=e.lastChild;if(!t)return null;let s=n.options.simulateBreak,r=n.state.doc.lineAt(t.from),o=s==null||s<=r.from?r.to:Math.min(r.to,s);for(let l=t.to;;){let a=e.childAfter(l);if(!a||a==i)return null;if(!a.type.isSkipped)return a.fromNh(i,e,t,n)}function Nh(n,e,t,i,s){let r=n.textAfter,o=r.match(/^\s*/)[0].length,l=i&&r.slice(o,o+i.length)==i||s==n.pos+o,a=e?Bp(n):null;return a?l?n.column(a.from):n.column(a.to):n.baseIndent+(l?0:n.unit*t)}const bb=n=>n.baseIndent;function wb({except:n,units:e=1}={}){return t=>{let i=n&&n.test(t.textAfter);return t.baseIndent+(i?0:e*t.unit)}}const Pp=200;function Ep(){return N.transactionFilter.of(n=>{if(!n.docChanged||!n.isUserEvent("input.type")&&!n.isUserEvent("input.complete"))return n;let e=n.startState.languageDataAt("indentOnInput",n.startState.selection.main.head);if(!e.length)return n;let t=n.newDoc,{head:i}=n.newSelection.main,s=t.lineAt(i);if(i>s.from+Pp)return n;let r=t.sliceString(s.from,i);if(!e.some(h=>h.test(r)))return n;let{state:o}=n,l=-1,a=[];for(let{head:h}of o.selection.ranges){let c=o.doc.lineAt(h);if(c.from==l)continue;l=c.from;let f=Lr(o,c.from);if(f==null)continue;let u=/^\s*/.exec(c.text)[0],d=Oi(o,f);u!=d&&a.push({from:c.from,to:c.from+u.length,insert:d})}return a.length?[n,{changes:a,sequential:!0}]:n})}const Rp=D.define(),Lp=new L;function kb(n){let e=n.firstChild,t=n.lastChild;return 
e&&e.tot)continue;if(r&&o.from=e&&a.to>t&&(r=a)}}return r}function Np(n){let e=n.lastChild;return e&&e.to==n.to&&e.type.isError}function Bn(n,e,t){for(let i of n.facet(Rp)){let s=i(n,e,t);if(s)return s}return Ip(n,e,t)}function _h(n,e){let t=e.mapPos(n.from,1),i=e.mapPos(n.to,-1);return t>=i?void 0:{from:t,to:i}}const Wn=R.define({map:_h}),Ni=R.define({map:_h});function Vh(n){let e=[];for(let{head:t}of n.state.selection.ranges)e.some(i=>i.from<=t&&i.to>=t)||e.push(n.lineBlockAt(t));return e}const Dt=be.define({create(){return E.none},update(n,e){n=n.map(e.changes);for(let t of e.effects)t.is(Wn)&&!_p(n,t.value.from,t.value.to)?n=n.update({add:[fl.range(t.value.from,t.value.to)]}):t.is(Ni)&&(n=n.update({filter:(i,s)=>t.value.from!=i||t.value.to!=s,filterFrom:t.value.from,filterTo:t.value.to}));if(e.selection){let t=!1,{head:i}=e.selection.main;n.between(i,i,(s,r)=>{si&&(t=!0)}),t&&(n=n.update({filterFrom:i,filterTo:i,filter:(s,r)=>r<=i||s>=i}))}return n},provide:n=>O.decorations.from(n),toJSON(n,e){let t=[];return n.between(0,e.doc.length,(i,s)=>{t.push(i,s)}),t},fromJSON(n){if(!Array.isArray(n)||n.length%2)throw new RangeError("Invalid JSON for fold state");let e=[];for(let t=0;t{(!s||s.from>r)&&(s={from:r,to:o})}),s}function _p(n,e,t){let i=!1;return n.between(e,e,(s,r)=>{s==e&&r==t&&(i=!0)}),i}function Fh(n,e){return n.field(Dt,!1)?e:e.concat(R.appendConfig.of(zh()))}const Vp=n=>{for(let e of Vh(n)){let t=Bn(n.state,e.from,e.to);if(t)return n.dispatch({effects:Fh(n.state,[Wn.of(t),Hh(n,t)])}),!0}return!1},Fp=n=>{if(!n.state.field(Dt,!1))return!1;let e=[];for(let t of Vh(n)){let i=Pn(n.state,t.from,t.to);i&&e.push(Ni.of(i),Hh(n,i,!1))}return e.length&&n.dispatch({effects:e}),e.length>0};function Hh(n,e,t=!0){let i=n.state.doc.lineAt(e.from).number,s=n.state.doc.lineAt(e.to).number;return O.announce.of(`${n.state.phrase(t?"Folded lines":"Unfolded lines")} ${i} ${n.state.phrase("to")} ${s}.`)}const Hp=n=>{let{state:e}=n,t=[];for(let i=0;i{let 
e=n.state.field(Dt,!1);if(!e||!e.size)return!1;let t=[];return e.between(0,n.state.doc.length,(i,s)=>{t.push(Ni.of({from:i,to:s}))}),n.dispatch({effects:t}),!0},zp=[{key:"Ctrl-Shift-[",mac:"Cmd-Alt-[",run:Vp},{key:"Ctrl-Shift-]",mac:"Cmd-Alt-]",run:Fp},{key:"Ctrl-Alt-[",run:Hp},{key:"Ctrl-Alt-]",run:Wp}],qp={placeholderDOM:null,placeholderText:"…"},Wh=D.define({combine(n){return Rt(n,qp)}});function zh(n){let e=[Dt,$p];return n&&e.push(Wh.of(n)),e}const fl=E.replace({widget:new class extends Ue{toDOM(n){let{state:e}=n,t=e.facet(Wh),i=r=>{let o=n.lineBlockAt(n.posAtDOM(r.target)),l=Pn(n.state,o.from,o.to);l&&n.dispatch({effects:Ni.of(l)}),r.preventDefault()};if(t.placeholderDOM)return t.placeholderDOM(n,i);let s=document.createElement("span");return s.textContent=t.placeholderText,s.setAttribute("aria-label",e.phrase("folded code")),s.title=e.phrase("unfold"),s.className="cm-foldPlaceholder",s.onclick=i,s}}}),jp={openText:"⌄",closedText:"›",markerDOM:null,domEventHandlers:{},foldingChanged:()=>!1};class ds extends ct{constructor(e,t){super(),this.config=e,this.open=t}eq(e){return this.config==e.config&&this.open==e.open}toDOM(e){if(this.config.markerDOM)return this.config.markerDOM(this.open);let t=document.createElement("span");return t.textContent=this.open?this.config.openText:this.config.closedText,t.title=e.state.phrase(this.open?"Fold line":"Unfold line"),t}}function Kp(n={}){let e=Object.assign(Object.assign({},jp),n),t=new ds(e,!0),i=new ds(e,!1),s=ue.fromClass(class{constructor(o){this.from=o.viewport.from,this.markers=this.buildMarkers(o)}update(o){(o.docChanged||o.viewportChanged||o.startState.facet(ft)!=o.state.facet(ft)||o.startState.field(Dt,!1)!=o.state.field(Dt,!1)||ae(o.startState)!=ae(o.state)||e.foldingChanged(o))&&(this.markers=this.buildMarkers(o.view))}buildMarkers(o){let l=new Ct;for(let a of o.viewportLineBlocks){let h=Pn(o.state,a.from,a.to)?i:Bn(o.state,a.from,a.to)?t:null;h&&l.add(a.from,a.from,h)}return 
l.finish()}}),{domEventHandlers:r}=e;return[s,$d({class:"cm-foldGutter",markers(o){var l;return((l=o.plugin(s))===null||l===void 0?void 0:l.markers)||F.empty},initialSpacer(){return new ds(e,!1)},domEventHandlers:Object.assign(Object.assign({},r),{click:(o,l,a)=>{if(r.click&&r.click(o,l,a))return!0;let h=Pn(o.state,l.from,l.to);if(h)return o.dispatch({effects:Ni.of(h)}),!0;let c=Bn(o.state,l.from,l.to);return c?(o.dispatch({effects:Wn.of(c)}),!0):!1}})}),zh()]}const $p=O.baseTheme({".cm-foldPlaceholder":{backgroundColor:"#eee",border:"1px solid #ddd",color:"#888",borderRadius:".2em",margin:"0 1px",padding:"0 1px",cursor:"pointer"},".cm-foldGutter span":{padding:"0 1px",cursor:"pointer"}});class ei{constructor(e,t){this.specs=e;let i;function s(l){let a=lt.newName();return(i||(i=Object.create(null)))["."+a]=l,a}const r=typeof t.all=="string"?t.all:t.all?s(t.all):void 0,o=t.scope;this.scope=o instanceof De?l=>l.prop(wt)==o.data:o?l=>l==o:void 0,this.style=Bh(e.map(l=>({tag:l.tag,class:l.class||s(Object.assign({},l,{tag:null}))})),{all:r}).style,this.module=i?new lt(i):null,this.themeType=t.themeType}static define(e,t){return new ei(e,t||{})}}const cr=D.define(),qh=D.define({combine(n){return n.length?[n[0]]:null}});function ps(n){let e=n.facet(cr);return e.length?e:n.facet(qh)}function Nr(n,e){let t=[Gp],i;return n instanceof ei&&(n.module&&t.push(O.styleModule.of(n.module)),i=n.themeType),e?.fallback?t.push(qh.of(n)):i?t.push(cr.computeN([O.darkTheme],s=>s.facet(O.darkTheme)==(i=="dark")?[n]:[])):t.push(cr.of(n)),t}class Up{constructor(e){this.markCache=Object.create(null),this.tree=ae(e.state),this.decorations=this.buildDeco(e,ps(e.state))}update(e){let t=ae(e.state),i=ps(e.state),s=i!=ps(e.startState);t.length{i.add(o,l,this.markCache[a]||(this.markCache[a]=E.mark({class:a})))},s,r);return i.finish()}}const 
Gp=Ri.high(ue.fromClass(Up,{decorations:n=>n.decorations})),Jp=ei.define([{tag:m.meta,color:"#404740"},{tag:m.link,textDecoration:"underline"},{tag:m.heading,textDecoration:"underline",fontWeight:"bold"},{tag:m.emphasis,fontStyle:"italic"},{tag:m.strong,fontWeight:"bold"},{tag:m.strikethrough,textDecoration:"line-through"},{tag:m.keyword,color:"#708"},{tag:[m.atom,m.bool,m.url,m.contentSeparator,m.labelName],color:"#219"},{tag:[m.literal,m.inserted],color:"#164"},{tag:[m.string,m.deleted],color:"#a11"},{tag:[m.regexp,m.escape,m.special(m.string)],color:"#e40"},{tag:m.definition(m.variableName),color:"#00f"},{tag:m.local(m.variableName),color:"#30a"},{tag:[m.typeName,m.namespace],color:"#085"},{tag:m.className,color:"#167"},{tag:[m.special(m.variableName),m.macroName],color:"#256"},{tag:m.definition(m.propertyName),color:"#00c"},{tag:m.comment,color:"#940"},{tag:m.invalid,color:"#f00"}]),Yp=1e4,Xp="()[]{}",Zp=new L;function fr(n,e,t){let i=n.prop(e<0?L.openedBy:L.closedBy);if(i)return i;if(n.name.length==1){let s=t.indexOf(n.name);if(s>-1&&s%2==(e<0?1:0))return[t[s+e]]}return null}function ur(n){let e=n.type.prop(Zp);return e?e(n.node):n}function Ht(n,e,t,i={}){let s=i.maxScanDistance||Yp,r=i.brackets||Xp,o=ae(n),l=o.resolveInner(e,t);for(let a=l;a;a=a.parent){let h=fr(a.type,t,r);if(h&&a.from0?e>=c.from&&ec.from&&e<=c.to))return Qp(n,e,t,a,c,h,r)}}return em(n,e,t,o,l.type,s,r)}function Qp(n,e,t,i,s,r,o){let l=i.parent,a={from:s.from,to:s.to},h=0,c=l?.cursor();if(c&&(t<0?c.childBefore(i.from):c.childAfter(i.to)))do if(t<0?c.to<=i.from:c.from>=i.to){if(h==0&&r.indexOf(c.type.name)>-1&&c.from0)return null;let h={from:t<0?e-1:e,to:t>0?e+1:e},c=n.doc.iterRange(e,t>0?n.doc.length:0),f=0;for(let u=0;!c.next().done&&u<=r;){let d=c.value;t<0&&(u+=d.length);let p=e+u*t;for(let g=t>0?0:d.length-1,y=t>0?d.length:-1;g!=y;g+=t){let 
b=o.indexOf(d[g]);if(!(b<0||i.resolveInner(p+g,1).type!=s))if(b%2==0==t>0)f++;else{if(f==1)return{start:h,end:{from:p+g,to:p+g+1},matched:b>>1==a>>1};f--}}t>0&&(u+=d.length)}return c.done?{start:h,matched:!1}:null}function ul(n,e,t,i=0,s=0){e==null&&(e=n.search(/[^\s\u00a0]/),e==-1&&(e=n.length));let r=s;for(let o=i;o=this.string.length}sol(){return this.pos==0}peek(){return this.string.charAt(this.pos)||void 0}next(){if(this.post}eatSpace(){let e=this.pos;for(;/[\s\u00a0]/.test(this.string.charAt(this.pos));)++this.pos;return this.pos>e}skipToEnd(){this.pos=this.string.length}skipTo(e){let t=this.string.indexOf(e,this.pos);if(t>-1)return this.pos=t,!0}backUp(e){this.pos-=e}column(){return this.lastColumnPosi?o.toLowerCase():o,r=this.string.substr(this.pos,e.length);return s(r)==s(e)?(t!==!1&&(this.pos+=e.length),!0):null}else{let s=this.string.slice(this.pos).match(e);return s&&s.index>0?null:(s&&t!==!1&&(this.pos+=s[0].length),s)}}current(){return this.string.slice(this.start,this.pos)}}function tm(n){return{name:n.name||"",token:n.token,blankLine:n.blankLine||(()=>{}),startState:n.startState||(()=>!0),copyState:n.copyState||im,indent:n.indent||(()=>null),languageData:n.languageData||{},tokenTable:n.tokenTable||Vr}}function im(n){if(typeof n!="object")return n;let e={};for(let t in n){let i=n[t];e[t]=i instanceof Array?i.slice():i}return e}const dl=new WeakMap;class Wt extends De{constructor(e){let t=Ph(e.languageData),i=tm(e),s,r=new class extends Th{createParse(o,l,a){return new sm(s,o,l,a)}};super(t,r,[Lh.of((o,l)=>this.getIndent(o,l))],e.name),this.topNode=lm(t),s=this,this.streamParser=i,this.stateAfter=new L({perNode:!0}),this.tokenTable=e.tokenTable?new Gh(i.tokenTable):om}static define(e){return new Wt(e)}getIndent(e,t){let i=ae(e.state),s=i.resolve(t);for(;s&&s.type!=this.topNode;)s=s.parent;if(!s)return null;let r,{overrideIndentation:o}=e.options;o&&(r=dl.get(e.state),r!=null&&r1e4)return 
null;for(;a=i&&t+e.length<=s&&e.prop(n.stateAfter);if(r)return{state:n.streamParser.copyState(r),pos:t+e.length};for(let o=e.children.length-1;o>=0;o--){let l=e.children[o],a=t+e.positions[o],h=l instanceof W&&a=e.length)return e;!s&&e.type==n.topNode&&(s=!0);for(let r=e.children.length-1;r>=0;r--){let o=e.positions[r],l=e.children[r],a;if(ot&&_r(n,s.tree,0-s.offset,t,o),a;if(l&&(a=Kh(n,s.tree,t+s.offset,l.pos+s.offset,!1)))return{state:l.state,tree:a}}return{state:n.streamParser.startState(i?Mt(i):4),tree:W.empty}}class sm{constructor(e,t,i,s){this.lang=e,this.input=t,this.fragments=i,this.ranges=s,this.stoppedAt=null,this.chunks=[],this.chunkPos=[],this.chunk=[],this.chunkReused=void 0,this.rangeIndex=0,this.to=s[s.length-1].to;let r=Xt.get(),o=s[0].from,{state:l,tree:a}=nm(e,i,o,r?.state);this.state=l,this.parsedPos=this.chunkStart=o+a.length;for(let h=0;h=t?this.finish():e&&this.parsedPos>=e.viewport.to?(e.skipUntilInView(this.parsedPos,t),this.finish()):null}stopAt(e){this.stoppedAt=e}lineAfter(e){let t=this.input.chunk(e);if(this.input.lineChunks)t==` -`&&(t="");else{let i=t.indexOf(` -`);i>-1&&(t=t.slice(0,i))}return e+t.length<=this.to?t:t.slice(0,this.to-e)}nextLine(){let e=this.parsedPos,t=this.lineAfter(e),i=e+t.length;for(let s=this.rangeIndex;;){let r=this.ranges[s].to;if(r>=i||(t=t.slice(0,r-(i-t.length)),s++,s==this.ranges.length))break;let o=this.ranges[s].from,l=this.lineAfter(o);t+=l,i=o+l.length}return{line:t,end:i}}skipGapsTo(e,t,i){for(;;){let s=this.ranges[this.rangeIndex].to,r=e+t;if(i>0?s>r:s>=r)break;let o=this.ranges[++this.rangeIndex].from;t+=o-s}return t}moveRangeIndex(){for(;this.ranges[this.rangeIndex].to1){r=this.skipGapsTo(t,r,1),t+=r;let o=this.chunk.length;r=this.skipGapsTo(i,r,-1),i+=r,s+=this.chunk.length-o}return this.chunk.push(e,t,i,s),r}parseLine(e){let{line:t,end:i}=this.nextLine(),s=0,{streamParser:r}=this.lang,o=new jh(t,e?e.state.tabSize:4,e?Mt(e.state):2);if(o.eol())r.blankLine(this.state,o.indentUnit);else 
for(;!o.eol();){let l=$h(r.token,o,this.state);if(l&&(s=this.emitToken(this.lang.tokenTable.resolve(l),this.parsedPos+o.start,this.parsedPos+o.pos,4,s)),o.start>1e4)break}this.parsedPos=i,this.moveRangeIndex(),this.parsedPose.start)return s}throw new Error("Stream parser failed to advance stream.")}const Vr=Object.create(null),Bi=[ge.none],rm=new Br(Bi),pl=[],Uh=Object.create(null);for(let[n,e]of[["variable","variableName"],["variable-2","variableName.special"],["string-2","string.special"],["def","variableName.definition"],["tag","tagName"],["attribute","attributeName"],["type","typeName"],["builtin","variableName.standard"],["qualifier","modifier"],["error","invalid"],["header","heading"],["property","propertyName"]])Uh[n]=Jh(Vr,e);class Gh{constructor(e){this.extra=e,this.table=Object.assign(Object.create(null),Uh)}resolve(e){return e?this.table[e]||(this.table[e]=Jh(this.extra,e)):0}}const om=new Gh(Vr);function ms(n,e){pl.indexOf(n)>-1||(pl.push(n),console.warn(e))}function Jh(n,e){let t=null;for(let r of e.split(".")){let o=n[r]||m[r];o?typeof o=="function"?t?t=o(t):ms(r,`Modifier ${r} used at start of tag`):t?ms(r,`Tag ${r} used as modifier`):t=o:ms(r,`Unknown highlighting tag ${r}`)}if(!t)return 0;let i=e.replace(/ /g,"_"),s=ge.define({id:Bi.length,name:i,props:[gp({[i]:t})]});return Bi.push(s),s.id}function lm(n){let e=ge.define({id:Bi.length,name:"Document",props:[wt.add(()=>n)]});return Bi.push(e),e}const am=n=>{let e=Hr(n.state);return e.line?hm(n):e.block?fm(n):!1};function Fr(n,e){return({state:t,dispatch:i})=>{if(t.readOnly)return!1;let s=n(e,t);return s?(i(t.update(s)),!0):!1}}const hm=Fr(pm,0),cm=Fr(Yh,0),fm=Fr((n,e)=>Yh(n,e,dm(e)),0);function Hr(n,e=n.selection.main.head){let t=n.languageDataAt("commentTokens",e);return t.length?t[0]:{}}const ai=50;function um(n,{open:e,close:t},i,s){let 
r=n.sliceDoc(i-ai,i),o=n.sliceDoc(s,s+ai),l=/\s*$/.exec(r)[0].length,a=/^\s*/.exec(o)[0].length,h=r.length-l;if(r.slice(h-e.length,h)==e&&o.slice(a,a+t.length)==t)return{open:{pos:i-l,margin:l&&1},close:{pos:s+a,margin:a&&1}};let c,f;s-i<=2*ai?c=f=n.sliceDoc(i,s):(c=n.sliceDoc(i,i+ai),f=n.sliceDoc(s-ai,s));let u=/^\s*/.exec(c)[0].length,d=/\s*$/.exec(f)[0].length,p=f.length-d-t.length;return c.slice(u,u+e.length)==e&&f.slice(p,p+t.length)==t?{open:{pos:i+u+e.length,margin:/\s/.test(c.charAt(u+e.length))?1:0},close:{pos:s-d-t.length,margin:/\s/.test(f.charAt(p-1))?1:0}}:null}function dm(n){let e=[];for(let t of n.selection.ranges){let i=n.doc.lineAt(t.from),s=t.to<=i.to?i:n.doc.lineAt(t.to),r=e.length-1;r>=0&&e[r].to>i.from?e[r].to=s.to:e.push({from:i.from,to:s.to})}return e}function Yh(n,e,t=e.selection.ranges){let i=t.map(r=>Hr(e,r.from).block);if(!i.every(r=>r))return null;let s=t.map((r,o)=>um(e,i[o],r.from,r.to));if(n!=2&&!s.every(r=>r))return{changes:e.changes(t.map((r,o)=>s[o]?[]:[{from:r.from,insert:i[o].open+" "},{from:r.to,insert:" "+i[o].close}]))};if(n!=1&&s.some(r=>r)){let r=[];for(let o=0,l;os&&(r==o||o>c.from)){s=c.from;let f=Hr(e,h).line;if(!f)continue;let u=/^\s*/.exec(c.text)[0].length,d=u==c.length,p=c.text.slice(u,u+f.length)==f?u:-1;ur.comment<0&&(!r.empty||r.single))){let r=[];for(let{line:l,token:a,indent:h,empty:c,single:f}of i)(f||!c)&&r.push({from:l.from+h,insert:a+" "});let o=e.changes(r);return{changes:o,selection:e.selection.map(o,1)}}else if(n!=1&&i.some(r=>r.comment>=0)){let r=[];for(let{line:o,comment:l,token:a}of i)if(l>=0){let h=o.from+l,c=h+a.length;o.text[c-o.from]==" "&&c++,r.push({from:h,to:c})}return{changes:r}}return null}const dr=Et.define(),mm=Et.define(),gm=D.define(),Xh=D.define({combine(n){return Rt(n,{minDepth:100,newGroupDelay:500},{minDepth:Math.max,newGroupDelay:Math.min})}});function ym(n){let e=0;return n.iterChangedRanges((t,i)=>e=i),e}const Zh=be.define({create(){return qe.empty},update(n,e){let 
t=e.state.facet(Xh),i=e.annotation(dr);if(i){let a=e.docChanged?w.single(ym(e.changes)):void 0,h=ye.fromTransaction(e,a),c=i.side,f=c==0?n.undone:n.done;return h?f=En(f,f.length,t.minDepth,h):f=tc(f,e.startState.selection),new qe(c==0?i.rest:f,c==0?f:i.rest)}let s=e.annotation(mm);if((s=="full"||s=="before")&&(n=n.isolate()),e.annotation(ie.addToHistory)===!1)return e.changes.empty?n:n.addMapping(e.changes.desc);let r=ye.fromTransaction(e),o=e.annotation(ie.time),l=e.annotation(ie.userEvent);return r?n=n.addChanges(r,o,l,t.newGroupDelay,t.minDepth):e.selection&&(n=n.addSelection(e.startState.selection,o,l,t.newGroupDelay)),(s=="full"||s=="after")&&(n=n.isolate()),n},toJSON(n){return{done:n.done.map(e=>e.toJSON()),undone:n.undone.map(e=>e.toJSON())}},fromJSON(n){return new qe(n.done.map(ye.fromJSON),n.undone.map(ye.fromJSON))}});function bm(n={}){return[Zh,Xh.of(n),O.domEventHandlers({beforeinput(e,t){let i=e.inputType=="historyUndo"?Qh:e.inputType=="historyRedo"?pr:null;return i?(e.preventDefault(),i(t)):!1}})]}function zn(n,e){return function({state:t,dispatch:i}){if(!e&&t.readOnly)return!1;let s=t.field(Zh,!1);if(!s)return!1;let r=s.pop(n,t,e);return r?(i(r),!0):!1}}const Qh=zn(0,!1),pr=zn(1,!1),wm=zn(0,!0),km=zn(1,!0);class ye{constructor(e,t,i,s,r){this.changes=e,this.effects=t,this.mapped=i,this.startSelection=s,this.selectionsAfter=r}setSelAfter(e){return new ye(this.changes,this.effects,this.mapped,this.startSelection,e)}toJSON(){var e,t,i;return{changes:(e=this.changes)===null||e===void 0?void 0:e.toJSON(),mapped:(t=this.mapped)===null||t===void 0?void 0:t.toJSON(),startSelection:(i=this.startSelection)===null||i===void 0?void 0:i.toJSON(),selectionsAfter:this.selectionsAfter.map(s=>s.toJSON())}}static fromJSON(e){return new ye(e.changes&&te.fromJSON(e.changes),[],e.mapped&&je.fromJSON(e.mapped),e.startSelection&&w.fromJSON(e.startSelection),e.selectionsAfter.map(w.fromJSON))}static fromTransaction(e,t){let i=Te;for(let s of e.startState.facet(gm)){let 
r=s(e);r.length&&(i=i.concat(r))}return!i.length&&e.changes.empty?null:new ye(e.changes.invert(e.startState.doc),i,void 0,t||e.startState.selection,Te)}static selection(e){return new ye(void 0,Te,void 0,void 0,e)}}function En(n,e,t,i){let s=e+1>t+20?e-t-1:0,r=n.slice(s,e);return r.push(i),r}function vm(n,e){let t=[],i=!1;return n.iterChangedRanges((s,r)=>t.push(s,r)),e.iterChangedRanges((s,r,o,l)=>{for(let a=0;a=h&&o<=c&&(i=!0)}}),i}function xm(n,e){return n.ranges.length==e.ranges.length&&n.ranges.filter((t,i)=>t.empty!=e.ranges[i].empty).length===0}function ec(n,e){return n.length?e.length?n.concat(e):n:e}const Te=[],Sm=200;function tc(n,e){if(n.length){let t=n[n.length-1],i=t.selectionsAfter.slice(Math.max(0,t.selectionsAfter.length-Sm));return i.length&&i[i.length-1].eq(e)?n:(i.push(e),En(n,n.length-1,1e9,t.setSelAfter(i)))}else return[ye.selection([e])]}function Cm(n){let e=n[n.length-1],t=n.slice();return t[n.length-1]=e.setSelAfter(e.selectionsAfter.slice(0,e.selectionsAfter.length-1)),t}function gs(n,e){if(!n.length)return n;let t=n.length,i=Te;for(;t;){let s=Am(n[t-1],e,i);if(s.changes&&!s.changes.empty||s.effects.length){let r=n.slice(0,t);return r[t-1]=s,r}else e=s.mapped,t--,i=s.selectionsAfter}return i.length?[ye.selection(i)]:Te}function Am(n,e,t){let i=ec(n.selectionsAfter.length?n.selectionsAfter.map(l=>l.map(e)):Te,t);if(!n.changes)return ye.selection(i);let s=n.changes.map(e),r=e.mapDesc(n.changes,!0),o=n.mapped?n.mapped.composeDesc(r):r;return new ye(s,R.mapEffects(n.effects,e),o,n.startSelection.map(r),i)}const Mm=/^(input\.type|delete)($|\.)/;class qe{constructor(e,t,i=0,s=void 0){this.done=e,this.undone=t,this.prevTime=i,this.prevUserEvent=s}isolate(){return this.prevTime?new qe(this.done,this.undone):this}addChanges(e,t,i,s,r){let o=this.done,l=o[o.length-1];return l&&l.changes&&!l.changes.empty&&e.changes&&(!i||Mm.test(i))&&(!l.selectionsAfter.length&&t-this.prevTime0&&t-this.prevTimet.empty?n.moveByChar(t,e):qn(t,e))}function de(n){return 
n.textDirectionAt(n.state.selection.main.head)==Y.LTR}const nc=n=>ic(n,!de(n)),sc=n=>ic(n,de(n));function rc(n,e){return Re(n,t=>t.empty?n.moveByGroup(t,e):qn(t,e))}const Tm=n=>rc(n,!de(n)),Om=n=>rc(n,de(n));function Bm(n,e,t){if(e.type.prop(t))return!0;let i=e.to-e.from;return i&&(i>2||/[^\s,.;:]/.test(n.sliceDoc(e.from,e.to)))||e.firstChild}function jn(n,e,t){let i=ae(n).resolveInner(e.head),s=t?L.closedBy:L.openedBy;for(let a=e.head;;){let h=t?i.childAfter(a):i.childBefore(a);if(!h)break;Bm(n,h,s)?i=h:a=t?h.to:h.from}let r=i.type.prop(s),o,l;return r&&(o=t?Ht(n,i.from,1):Ht(n,i.to,-1))&&o.matched?l=t?o.end.to:o.end.from:l=t?i.to:i.from,w.cursor(l,t?-1:1)}const Pm=n=>Re(n,e=>jn(n.state,e,!de(n))),Em=n=>Re(n,e=>jn(n.state,e,de(n)));function oc(n,e){return Re(n,t=>{if(!t.empty)return qn(t,e);let i=n.moveVertically(t,e);return i.head!=t.head?i:n.moveToLineBoundary(t,e)})}const lc=n=>oc(n,!1),ac=n=>oc(n,!0);function hc(n){return Math.max(n.defaultLineHeight,Math.min(n.dom.clientHeight,innerHeight)-5)}function cc(n,e){let{state:t}=n,i=ti(t.selection,l=>l.empty?n.moveVertically(l,e,hc(n)):qn(l,e));if(i.eq(t.selection))return!1;let s=n.coordsAtPos(t.selection.main.head),r=n.scrollDOM.getBoundingClientRect(),o;return s&&s.top>r.top&&s.bottomcc(n,!1),mr=n=>cc(n,!0);function ut(n,e,t){let i=n.lineBlockAt(e.head),s=n.moveToLineBoundary(e,t);if(s.head==e.head&&s.head!=(t?i.to:i.from)&&(s=n.moveToLineBoundary(e,t,!1)),!t&&s.head==i.from&&i.length){let r=/^\s*/.exec(n.state.sliceDoc(i.from,Math.min(i.from+100,i.to)))[0].length;r&&e.head!=i.from+r&&(s=w.cursor(i.from+r))}return s}const Rm=n=>Re(n,e=>ut(n,e,!0)),Lm=n=>Re(n,e=>ut(n,e,!1)),Im=n=>Re(n,e=>ut(n,e,!de(n))),Nm=n=>Re(n,e=>ut(n,e,de(n))),_m=n=>Re(n,e=>w.cursor(n.lineBlockAt(e.head).from,1)),Vm=n=>Re(n,e=>w.cursor(n.lineBlockAt(e.head).to,-1));function Fm(n,e,t){let i=!1,s=ti(n.selection,r=>{let o=Ht(n,r.head,-1)||Ht(n,r.head,1)||r.head>0&&Ht(n,r.head-1,1)||r.headFm(n,e,!1);function Be(n,e){let 
t=ti(n.state.selection,i=>{let s=e(i);return w.range(i.anchor,s.head,s.goalColumn)});return t.eq(n.state.selection)?!1:(n.dispatch(Ge(n.state,t)),!0)}function fc(n,e){return Be(n,t=>n.moveByChar(t,e))}const uc=n=>fc(n,!de(n)),dc=n=>fc(n,de(n));function pc(n,e){return Be(n,t=>n.moveByGroup(t,e))}const Wm=n=>pc(n,!de(n)),zm=n=>pc(n,de(n)),qm=n=>Be(n,e=>jn(n.state,e,!de(n))),jm=n=>Be(n,e=>jn(n.state,e,de(n)));function mc(n,e){return Be(n,t=>n.moveVertically(t,e))}const gc=n=>mc(n,!1),yc=n=>mc(n,!0);function bc(n,e){return Be(n,t=>n.moveVertically(t,e,hc(n)))}const gl=n=>bc(n,!1),yl=n=>bc(n,!0),Km=n=>Be(n,e=>ut(n,e,!0)),$m=n=>Be(n,e=>ut(n,e,!1)),Um=n=>Be(n,e=>ut(n,e,!de(n))),Gm=n=>Be(n,e=>ut(n,e,de(n))),Jm=n=>Be(n,e=>w.cursor(n.lineBlockAt(e.head).from)),Ym=n=>Be(n,e=>w.cursor(n.lineBlockAt(e.head).to)),bl=({state:n,dispatch:e})=>(e(Ge(n,{anchor:0})),!0),wl=({state:n,dispatch:e})=>(e(Ge(n,{anchor:n.doc.length})),!0),kl=({state:n,dispatch:e})=>(e(Ge(n,{anchor:n.selection.main.anchor,head:0})),!0),vl=({state:n,dispatch:e})=>(e(Ge(n,{anchor:n.selection.main.anchor,head:n.doc.length})),!0),Xm=({state:n,dispatch:e})=>(e(n.update({selection:{anchor:0,head:n.doc.length},userEvent:"select"})),!0),Zm=({state:n,dispatch:e})=>{let t=$n(n).map(({from:i,to:s})=>w.range(i,Math.min(s+1,n.doc.length)));return e(n.update({selection:w.create(t),userEvent:"select"})),!0},Qm=({state:n,dispatch:e})=>{let t=ti(n.selection,i=>{var s;let r=ae(n).resolveInner(i.head,1);for(;!(r.from=i.to||r.to>i.to&&r.from<=i.from||!(!((s=r.parent)===null||s===void 0)&&s.parent));)r=r.parent;return w.range(r.to,r.from)});return e(Ge(n,t)),!0},eg=({state:n,dispatch:e})=>{let t=n.selection,i=null;return t.ranges.length>1?i=w.create([t.main]):t.main.empty||(i=w.create([w.cursor(t.main.head)])),i?(e(Ge(n,i)),!0):!1};function Kn(n,e){if(n.state.readOnly)return!1;let t="delete.selection",{state:i}=n,s=i.changeByRange(r=>{let{from:o,to:l}=r;if(o==l){let 
a=e(o);ao&&(t="delete.forward",a=en(n,a,!0)),o=Math.min(o,a),l=Math.max(l,a)}else o=en(n,o,!1),l=en(n,l,!0);return o==l?{range:r}:{changes:{from:o,to:l},range:w.cursor(o)}});return s.changes.empty?!1:(n.dispatch(i.update(s,{scrollIntoView:!0,userEvent:t,effects:t=="delete.selection"?O.announce.of(i.phrase("Selection deleted")):void 0})),!0)}function en(n,e,t){if(n instanceof O)for(let i of n.state.facet(O.atomicRanges).map(s=>s(n)))i.between(e,e,(s,r)=>{se&&(e=t?r:s)});return e}const wc=(n,e)=>Kn(n,t=>{let{state:i}=n,s=i.doc.lineAt(t),r,o;if(!e&&t>s.from&&twc(n,!1),kc=n=>wc(n,!0),vc=(n,e)=>Kn(n,t=>{let i=t,{state:s}=n,r=s.doc.lineAt(i),o=s.charCategorizer(i);for(let l=null;;){if(i==(e?r.to:r.from)){i==t&&r.number!=(e?s.doc.lines:1)&&(i+=e?1:-1);break}let a=ve(r.text,i-r.from,e)+r.from,h=r.text.slice(Math.min(i,a)-r.from,Math.max(i,a)-r.from),c=o(h);if(l!=null&&c!=l)break;(h!=" "||i!=t)&&(l=c),i=a}return i}),xc=n=>vc(n,!1),tg=n=>vc(n,!0),Sc=n=>Kn(n,e=>{let t=n.lineBlockAt(e).to;return eKn(n,e=>{let t=n.lineBlockAt(e).from;return e>t?t:Math.max(0,e-1)}),ng=({state:n,dispatch:e})=>{if(n.readOnly)return!1;let t=n.changeByRange(i=>({changes:{from:i.from,to:i.to,insert:_.of(["",""])},range:w.cursor(i.from)}));return e(n.update(t,{scrollIntoView:!0,userEvent:"input"})),!0},sg=({state:n,dispatch:e})=>{if(n.readOnly)return!1;let t=n.changeByRange(i=>{if(!i.empty||i.from==0||i.from==n.doc.length)return{range:i};let s=i.from,r=n.doc.lineAt(s),o=s==r.from?s-1:ve(r.text,s-r.from,!1)+r.from,l=s==r.to?s+1:ve(r.text,s-r.from,!0)+r.from;return{changes:{from:o,to:l,insert:n.doc.slice(s,l).append(n.doc.slice(o,s))},range:w.cursor(l)}});return t.changes.empty?!1:(e(n.update(t,{scrollIntoView:!0,userEvent:"move.character"})),!0)};function $n(n){let e=[],t=-1;for(let i of n.selection.ranges){let s=n.doc.lineAt(i.from),r=n.doc.lineAt(i.to);if(!i.empty&&i.to==r.from&&(r=n.doc.lineAt(i.to-1)),t>=s.number){let o=e[e.length-1];o.to=r.to,o.ranges.push(i)}else 
e.push({from:s.from,to:r.to,ranges:[i]});t=r.number+1}return e}function Cc(n,e,t){if(n.readOnly)return!1;let i=[],s=[];for(let r of $n(n)){if(t?r.to==n.doc.length:r.from==0)continue;let o=n.doc.lineAt(t?r.to+1:r.from-1),l=o.length+1;if(t){i.push({from:r.to,to:o.to},{from:r.from,insert:o.text+n.lineBreak});for(let a of r.ranges)s.push(w.range(Math.min(n.doc.length,a.anchor+l),Math.min(n.doc.length,a.head+l)))}else{i.push({from:o.from,to:r.from},{from:r.to,insert:n.lineBreak+o.text});for(let a of r.ranges)s.push(w.range(a.anchor-l,a.head-l))}}return i.length?(e(n.update({changes:i,scrollIntoView:!0,selection:w.create(s,n.selection.mainIndex),userEvent:"move.line"})),!0):!1}const rg=({state:n,dispatch:e})=>Cc(n,e,!1),og=({state:n,dispatch:e})=>Cc(n,e,!0);function Ac(n,e,t){if(n.readOnly)return!1;let i=[];for(let s of $n(n))t?i.push({from:s.from,insert:n.doc.slice(s.from,s.to)+n.lineBreak}):i.push({from:s.to,insert:n.lineBreak+n.doc.slice(s.from,s.to)});return e(n.update({changes:i,scrollIntoView:!0,userEvent:"input.copyline"})),!0}const lg=({state:n,dispatch:e})=>Ac(n,e,!1),ag=({state:n,dispatch:e})=>Ac(n,e,!0),hg=n=>{if(n.state.readOnly)return!1;let{state:e}=n,t=e.changes($n(e).map(({from:s,to:r})=>(s>0?s--:rn.moveVertically(s,!0)).map(t);return n.dispatch({changes:t,selection:i,scrollIntoView:!0,userEvent:"delete.line"}),!0};function cg(n,e){if(/\(\)|\[\]|\{\}/.test(n.sliceDoc(e-1,e+1)))return{from:e,to:e};let t=ae(n).resolveInner(e),i=t.childBefore(e),s=t.childAfter(e),r;return i&&s&&i.to<=e&&s.from>=e&&(r=i.type.prop(L.closedBy))&&r.indexOf(s.name)>-1&&n.doc.lineAt(i.to).from==n.doc.lineAt(s.from).from?{from:i.to,to:s.from}:null}const fg=Mc(!1),ug=Mc(!0);function Mc(n){return({state:e,dispatch:t})=>{if(e.readOnly)return!1;let i=e.changeByRange(s=>{let{from:r,to:o}=s,l=e.doc.lineAt(r),a=!n&&r==o&&cg(e,r);n&&(r=o=(o<=l.to?l:e.doc.lineAt(o)).to);let h=new 
Hn(e,{simulateBreak:r,simulateDoubleBreak:!!a}),c=Lr(h,r);for(c==null&&(c=/^\s*/.exec(e.doc.lineAt(r).text)[0].length);ol.from&&r{let s=[];for(let o=i.from;o<=i.to;){let l=n.doc.lineAt(o);l.number>t&&(i.empty||i.to>l.from)&&(e(l,s,i),t=l.number),o=l.to+1}let r=n.changes(s);return{changes:s,range:w.range(r.mapPos(i.anchor,1),r.mapPos(i.head,1))}})}const dg=({state:n,dispatch:e})=>{if(n.readOnly)return!1;let t=Object.create(null),i=new Hn(n,{overrideIndentation:r=>{let o=t[r];return o??-1}}),s=Wr(n,(r,o,l)=>{let a=Lr(i,r.from);if(a==null)return;/\S/.test(r.text)||(a=0);let h=/^\s*/.exec(r.text)[0],c=Oi(n,a);(h!=c||l.fromn.readOnly?!1:(e(n.update(Wr(n,(t,i)=>{i.push({from:t.from,insert:n.facet(Fn)})}),{userEvent:"input.indent"})),!0),Tc=({state:n,dispatch:e})=>n.readOnly?!1:(e(n.update(Wr(n,(t,i)=>{let s=/^\s*/.exec(t.text)[0];if(!s)return;let r=Li(s,n.tabSize),o=0,l=Oi(n,Math.max(0,r-Mt(n)));for(;o({mac:n.key,run:n.run,shift:n.shift}))),gg=[{key:"Alt-ArrowLeft",mac:"Ctrl-ArrowLeft",run:Pm,shift:qm},{key:"Alt-ArrowRight",mac:"Ctrl-ArrowRight",run:Em,shift:jm},{key:"Alt-ArrowUp",run:rg},{key:"Shift-Alt-ArrowUp",run:lg},{key:"Alt-ArrowDown",run:og},{key:"Shift-Alt-ArrowDown",run:ag},{key:"Escape",run:eg},{key:"Mod-Enter",run:ug},{key:"Alt-l",mac:"Ctrl-l",run:Zm},{key:"Mod-i",run:Qm,preventDefault:!0},{key:"Mod-[",run:Tc},{key:"Mod-]",run:Dc},{key:"Mod-Alt-\\",run:dg},{key:"Shift-Mod-k",run:hg},{key:"Shift-Mod-\\",run:Hm},{key:"Mod-/",run:am},{key:"Alt-A",run:cm}].concat(mg),yg={key:"Tab",run:Dc,shift:Tc},bg="#2E3235",Ve="#DDDDDD",ki="#B9D2FF",tn="#b0b0b0",wg="#e0e0e0",Oc="#808080",ys="#000000",kg="#A54543",Bc="#fc6d24",mt="#fda331",bs="#8abeb7",xl="#b5bd68",hi="#6fb3d2",ci="#cc99cc",vg="#6987AF",Sl=Bc,Cl="#292d30",nn=ki+"30",xg=bg,ws=Ve,Sg="#202325",Al=Ve,Cg=O.theme({"&":{color:Ve,backgroundColor:xg},".cm-content":{caretColor:Al},".cm-cursor, .cm-dropCursor":{borderLeftColor:Al},"&.cm-focused .cm-selectionBackground, .cm-selectionBackground, .cm-content 
::selection":{backgroundColor:Sg},".cm-panels":{backgroundColor:Cl,color:tn},".cm-panels.cm-panels-top":{borderBottom:"2px solid black"},".cm-panels.cm-panels-bottom":{borderTop:"2px solid black"},".cm-searchMatch":{backgroundColor:ki,outline:`1px solid ${tn}`,color:ys},".cm-searchMatch.cm-searchMatch-selected":{backgroundColor:wg,color:ys},".cm-activeLine":{backgroundColor:nn},".cm-selectionMatch":{backgroundColor:nn},"&.cm-focused .cm-matchingBracket, &.cm-focused .cm-nonmatchingBracket":{outline:`1px solid ${tn}`},"&.cm-focused .cm-matchingBracket":{backgroundColor:ki,color:ys},".cm-gutters":{borderRight:"1px solid #ffffff10",color:Oc,backgroundColor:Cl},".cm-activeLineGutter":{backgroundColor:nn},".cm-foldPlaceholder":{backgroundColor:"transparent",border:"none",color:ki},".cm-tooltip":{border:"none",backgroundColor:ws},".cm-tooltip .cm-tooltip-arrow:before":{borderTopColor:"transparent",borderBottomColor:"transparent"},".cm-tooltip .cm-tooltip-arrow:after":{borderTopColor:ws,borderBottomColor:ws},".cm-tooltip-autocomplete":{"& > ul > 
li[aria-selected]":{backgroundColor:nn,color:tn}}},{dark:!0}),Ag=ei.define([{tag:m.keyword,color:mt},{tag:[m.name,m.deleted,m.character,m.propertyName,m.macroName],color:xl},{tag:[m.variableName],color:hi},{tag:[m.function(m.variableName)],color:mt},{tag:[m.labelName],color:Bc},{tag:[m.color,m.constant(m.name),m.standard(m.name)],color:mt},{tag:[m.definition(m.name),m.separator],color:ci},{tag:[m.brace],color:ci},{tag:[m.annotation],color:Sl},{tag:[m.number,m.changed,m.annotation,m.modifier,m.self,m.namespace],color:mt},{tag:[m.typeName,m.className],color:hi},{tag:[m.operator,m.operatorKeyword],color:ci},{tag:[m.tagName],color:mt},{tag:[m.squareBracket],color:ci},{tag:[m.angleBracket],color:ci},{tag:[m.attributeName],color:hi},{tag:[m.regexp],color:mt},{tag:[m.quote],color:Ve},{tag:[m.string],color:xl},{tag:m.link,color:vg,textDecoration:"underline",textUnderlinePosition:"under"},{tag:[m.url,m.escape,m.special(m.string)],color:bs},{tag:[m.meta],color:kg},{tag:[m.comment],color:Oc,fontStyle:"italic"},{tag:m.monospace,color:Ve},{tag:m.strong,fontWeight:"bold",color:mt},{tag:m.emphasis,fontStyle:"italic",color:hi},{tag:m.strikethrough,textDecoration:"line-through"},{tag:m.heading,fontWeight:"bold",color:Ve},{tag:m.special(m.heading1),fontWeight:"bold",color:Ve},{tag:m.heading1,fontWeight:"bold",color:Ve},{tag:[m.heading2,m.heading3,m.heading4],fontWeight:"bold",color:Ve},{tag:[m.heading5,m.heading6],color:Ve},{tag:[m.atom,m.bool,m.special(m.variableName)],color:bs},{tag:[m.processingInstruction,m.inserted],color:bs},{tag:[m.contentSeparator],color:hi},{tag:m.invalid,color:ki,borderBottom:`1px dotted ${Sl}`}]),Mg=[Cg,Nr(Ag)],Ml="#2e3440",zr="#3b4252",Dl="#434c5e",sn="#4c566a",Tl="#e5e9f0",yr="#eceff4",ks="#8fbcbb",Ol="#88c0d0",Dg="#81a1c1",Pe="#5e81ac",Tg="#bf616a",_t="#d08770",vs="#ebcb8b",Bl="#a3be8c",Og="#b48ead",Pl="#d30102",qr=yr,xs=qr,Bg="#ffffff",Ss=zr,Pg=qr,El=zr,Eg=O.theme({"&":{color:Ml,backgroundColor:Bg},".cm-content":{caretColor:El},".cm-cursor, 
.cm-dropCursor":{borderLeftColor:El},"&.cm-focused .cm-selectionBackground, .cm-selectionBackground, .cm-content ::selection":{backgroundColor:Pg},".cm-panels":{backgroundColor:qr,color:sn},".cm-panels.cm-panels-top":{borderBottom:"2px solid black"},".cm-panels.cm-panels-bottom":{borderTop:"2px solid black"},".cm-searchMatch":{backgroundColor:"#72a1ff59",outline:`1px solid ${sn}`},".cm-searchMatch.cm-searchMatch-selected":{backgroundColor:Tl},".cm-activeLine":{backgroundColor:xs},".cm-selectionMatch":{backgroundColor:Tl},"&.cm-focused .cm-matchingBracket, &.cm-focused .cm-nonmatchingBracket":{outline:`1px solid ${sn}`},"&.cm-focused .cm-matchingBracket":{backgroundColor:yr},".cm-gutters":{backgroundColor:yr,color:Ml,border:"none"},".cm-activeLineGutter":{backgroundColor:xs},".cm-foldPlaceholder":{backgroundColor:"transparent",border:"none",color:"#ddd"},".cm-tooltip":{border:"none",backgroundColor:Ss},".cm-tooltip .cm-tooltip-arrow:before":{borderTopColor:"transparent",borderBottomColor:"transparent"},".cm-tooltip .cm-tooltip-arrow:after":{borderTopColor:Ss,borderBottomColor:Ss},".cm-tooltip-autocomplete":{"& > ul > 
li[aria-selected]":{backgroundColor:xs,color:sn}}},{dark:!1}),Rg=ei.define([{tag:m.keyword,color:Pe},{tag:[m.name,m.deleted,m.character,m.propertyName,m.macroName],color:_t},{tag:[m.variableName],color:_t},{tag:[m.function(m.variableName)],color:Pe},{tag:[m.labelName],color:Dg},{tag:[m.color,m.constant(m.name),m.standard(m.name)],color:Pe},{tag:[m.definition(m.name),m.separator],color:Bl},{tag:[m.brace],color:ks},{tag:[m.annotation],color:Pl},{tag:[m.number,m.changed,m.annotation,m.modifier,m.self,m.namespace],color:Ol},{tag:[m.typeName,m.className],color:vs},{tag:[m.operator,m.operatorKeyword],color:Bl},{tag:[m.tagName],color:Og},{tag:[m.squareBracket],color:Tg},{tag:[m.angleBracket],color:_t},{tag:[m.attributeName],color:vs},{tag:[m.regexp],color:Pe},{tag:[m.quote],color:zr},{tag:[m.string],color:_t},{tag:m.link,color:ks,textDecoration:"underline",textUnderlinePosition:"under"},{tag:[m.url,m.escape,m.special(m.string)],color:_t},{tag:[m.meta],color:Ol},{tag:[m.comment],color:Dl,fontStyle:"italic"},{tag:m.strong,fontWeight:"bold",color:Pe},{tag:m.emphasis,fontStyle:"italic",color:Pe},{tag:m.strikethrough,textDecoration:"line-through"},{tag:m.heading,fontWeight:"bold",color:Pe},{tag:m.special(m.heading1),fontWeight:"bold",color:Pe},{tag:m.heading1,fontWeight:"bold",color:Pe},{tag:[m.heading2,m.heading3,m.heading4],fontWeight:"bold",color:Pe},{tag:[m.heading5,m.heading6],color:Pe},{tag:[m.atom,m.bool,m.special(m.variableName)],color:_t},{tag:[m.processingInstruction,m.inserted],color:ks},{tag:[m.contentSeparator],color:vs},{tag:m.invalid,color:Dl,borderBottom:`1px dotted ${Pl}`}]),Lg=[Eg,Nr(Rg)];function Rl(n){let e=Object.keys(n).join(""),t=/\w/.test(e);return t&&(e=e.replace(/\w/g,"")),`[${t?"\\w":""}${e.replace(/[^\w\s]/g,"\\$&")}]`}function Ig(n){let e=Object.create(null),t=Object.create(null);for(let{label:s}of n){e[s[0]]=!0;for(let r=1;rtypeof s=="string"?{label:s}:s),[t,i]=e.every(s=>/^\w+$/.test(s.label))?[/\w*$/,/\w+$/]:Ig(e);return s=>{let 
r=s.matchBefore(i);return r||s.explicit?{from:r?r.from:s.pos,options:e,validFor:t}:null}}function vb(n,e){return t=>{for(let i=ae(t.state).resolveInner(t.pos,-1);i;i=i.parent)if(n.indexOf(i.name)>-1)return null;return e(t)}}class Ll{constructor(e,t,i){this.completion=e,this.source=t,this.match=i}}function br(n){return n.selection.main.head}function _g(n,e,t,i){return Object.assign(Object.assign({},n.changeByRange(s=>{if(s==n.selection.main)return{changes:{from:t,to:i,insert:e},range:w.cursor(t+e.length)};let r=i-t;return!s.empty||r&&n.sliceDoc(s.from-r,s.from)!=n.sliceDoc(t,i)?{range:s}:{changes:{from:s.from-r,to:s.from,insert:e},range:w.cursor(s.from-r+e.length)}})),{userEvent:"input.complete"})}function Pc(n,e){const t=e.completion.apply||e.completion.label;let i=e.source;typeof t=="string"?n.dispatch(_g(n.state,t,i.from,i.to)):t(n,e.completion,i.from,i.to)}const Il=new WeakMap;function Vg(n){if(!Array.isArray(n))return n;let e=Il.get(n);return e||Il.set(n,e=Ng(n)),e}class Fg{constructor(e){this.pattern=e,this.chars=[],this.folded=[],this.any=[],this.precise=[],this.byWord=[];for(let t=0;t=48&&C<=57||C>=97&&C<=122?2:C>=65&&C<=90?1:0:(T=fa(C))!=T.toLowerCase()?1:T!=T.toUpperCase()?2:0;(!v||B==1&&y||k==0&&B!=0)&&(t[f]==C||i[f]==C&&(u=!0)?o[f++]=v:o.length&&(b=!1)),k=B,v+=Ce(C)}return f==a&&o[0]==0&&b?this.result(-100+(u?-200:0),o,e):d==a&&p==0?[-200-e.length,0,g]:l>-1?[-700-e.length,l,l+this.pattern.length]:d==a?[-200+-700-e.length,p,g]:f==a?this.result(-100+(u?-200:0)+-700+(b?0:-1100),o,e):t.length==2?null:this.result((s[0]?-700:0)+-200+-1100,s,e)}result(e,t,i){let s=[e-i.length],r=1;for(let o of t){let l=o+(this.astral?Ce(ce(i,o)):1);r>1&&s[r-1]==o?s[r-1]=l:(s[r++]=o,s[r++]=l)}return s}}const Tt=D.define({combine(n){return 
Rt(n,{activateOnTyping:!0,selectOnOpen:!0,override:null,closeOnBlur:!0,maxRenderedOptions:100,defaultKeymap:!0,optionClass:()=>"",aboveCursor:!1,icons:!0,addToOptions:[],compareCompletions:(e,t)=>e.label.localeCompare(t.label),interactionDelay:75},{defaultKeymap:(e,t)=>e&&t,closeOnBlur:(e,t)=>e&&t,icons:(e,t)=>e&&t,optionClass:(e,t)=>i=>Hg(e(i),t(i)),addToOptions:(e,t)=>e.concat(t)})}});function Hg(n,e){return n?e?n+" "+e:n:e}function Wg(n){let e=n.addToOptions.slice();return n.icons&&e.push({render(t){let i=document.createElement("div");return i.classList.add("cm-completionIcon"),t.type&&i.classList.add(...t.type.split(/\s+/g).map(s=>"cm-completionIcon-"+s)),i.setAttribute("aria-hidden","true"),i},position:20}),e.push({render(t,i,s){let r=document.createElement("span");r.className="cm-completionLabel";let{label:o}=t,l=0;for(let a=1;al&&r.appendChild(document.createTextNode(o.slice(l,h)));let f=r.appendChild(document.createElement("span"));f.appendChild(document.createTextNode(o.slice(h,c))),f.className="cm-completionMatchedText",l=c}return lt.position-i.position).map(t=>t.render)}function Nl(n,e,t){if(n<=t)return{from:0,to:n};if(e<0&&(e=0),e<=n>>1){let s=Math.floor(e/t);return{from:s*t,to:(s+1)*t}}let i=Math.floor((n-e)/t);return{from:n-(i+1)*t,to:n-i*t}}class zg{constructor(e,t){this.view=e,this.stateField=t,this.info=null,this.placeInfo={read:()=>this.measureInfo(),write:l=>this.positionInfo(l),key:this};let i=e.state.field(t),{options:s,selected:r}=i.open,o=e.state.facet(Tt);this.optionContent=Wg(o),this.optionClass=o.optionClass,this.range=Nl(s.length,r,o.maxRenderedOptions),this.dom=document.createElement("div"),this.dom.className="cm-tooltip-autocomplete",this.dom.addEventListener("mousedown",l=>{for(let 
a=l.target,h;a&&a!=this.dom;a=a.parentNode)if(a.nodeName=="LI"&&(h=/-(\d+)$/.exec(a.id))&&+h[1]{this.info&&this.view.requestMeasure(this.placeInfo)})}mount(){this.updateSel()}update(e){e.state.field(this.stateField)!=e.startState.field(this.stateField)&&this.updateSel()}positioned(){this.info&&this.view.requestMeasure(this.placeInfo)}updateSel(){let e=this.view.state.field(this.stateField),t=e.open;if((t.selected>-1&&t.selected=this.range.to)&&(this.range=Nl(t.options.length,t.selected,this.view.state.facet(Tt).maxRenderedOptions),this.list.remove(),this.list=this.dom.appendChild(this.createListBox(t.options,e.id,this.range)),this.list.addEventListener("scroll",()=>{this.info&&this.view.requestMeasure(this.placeInfo)})),this.updateSelectedOption(t.selected)){this.info&&(this.info.remove(),this.info=null);let{completion:i}=t.options[t.selected],{info:s}=i;if(!s)return;let r=typeof s=="string"?document.createTextNode(s):s(i);if(!r)return;"then"in r?r.then(o=>{o&&this.view.state.field(this.stateField,!1)==e&&this.addInfoPane(o)}).catch(o=>Ee(this.view.state,o,"completion info")):this.addInfoPane(r)}}addInfoPane(e){let t=this.info=document.createElement("div");t.className="cm-tooltip cm-completionInfo",t.appendChild(e),this.dom.appendChild(t),this.view.requestMeasure(this.placeInfo)}updateSelectedOption(e){let t=null;for(let i=this.list.firstChild,s=this.range.from;i;i=i.nextSibling,s++)s==e?i.hasAttribute("aria-selected")||(i.setAttribute("aria-selected","true"),t=i):i.hasAttribute("aria-selected")&&i.removeAttribute("aria-selected");return t&&jg(this.list,t),t}measureInfo(){let e=this.dom.querySelector("[aria-selected]");if(!e||!this.info)return null;let 
t=this.dom.ownerDocument.defaultView||window,i=this.dom.getBoundingClientRect(),s=this.info.getBoundingClientRect(),r=e.getBoundingClientRect();if(r.top>Math.min(t.innerHeight,i.bottom)-10||r.bottom=s.height||p>i.top?c=r.bottom-i.top+"px":f=i.bottom-r.top+"px"}return{top:c,bottom:f,maxWidth:h,class:a?o?"left-narrow":"right-narrow":l?"left":"right"}}positionInfo(e){this.info&&(e?(this.info.style.top=e.top,this.info.style.bottom=e.bottom,this.info.style.maxWidth=e.maxWidth,this.info.className="cm-tooltip cm-completionInfo cm-completionInfo-"+e.class):this.info.style.top="-1e6px")}createListBox(e,t,i){const s=document.createElement("ul");s.id=t,s.setAttribute("role","listbox"),s.setAttribute("aria-expanded","true"),s.setAttribute("aria-label",this.view.state.phrase("Completions"));for(let r=i.from;rnew zg(e,n)}function jg(n,e){let t=n.getBoundingClientRect(),i=e.getBoundingClientRect();i.topt.bottom&&(n.scrollTop+=i.bottom-t.bottom)}function _l(n){return(n.boost||0)*100+(n.apply?10:0)+(n.info?5:0)+(n.type?1:0)}function Kg(n,e){let t=[],i=0;for(let l of n)if(l.hasResult())if(l.result.filter===!1){let a=l.result.getMatch;for(let h of l.result.options){let c=[1e9-i++];if(a)for(let f of a(h))c.push(f);t.push(new Ll(h,l,c))}}else{let a=new Fg(e.sliceDoc(l.from,l.to)),h;for(let c of l.result.options)(h=a.match(c.label))&&(c.boost!=null&&(h[0]+=c.boost),t.push(new Ll(c,l,h)))}let s=[],r=null,o=e.facet(Tt).compareCompletions;for(let l of t.sort((a,h)=>h.match[0]-a.match[0]||o(a.completion,h.completion)))!r||r.label!=l.completion.label||r.detail!=l.completion.detail||r.type!=null&&l.completion.type!=null&&r.type!=l.completion.type||r.apply!=l.completion.apply?s.push(l):_l(l.completion)>_l(r)&&(s[s.length-1]=l),r=l.completion;return s}class vi{constructor(e,t,i,s,r){this.options=e,this.attrs=t,this.tooltip=i,this.timestamp=s,this.selected=r}setSelected(e,t){return e==this.selected||e>=this.options.length?this:new vi(this.options,Vl(t,e),this.tooltip,this.timestamp,e)}static 
build(e,t,i,s,r){let o=Kg(e,t);if(!o.length)return null;let l=t.facet(Tt).selectOnOpen?0:-1;if(s&&s.selected!=l&&s.selected!=-1){let a=s.options[s.selected].completion;for(let h=0;hh.hasResult()?Math.min(a,h.from):a,1e8),create:qg(_i),above:r.aboveCursor},s?s.timestamp:Date.now(),l)}map(e){return new vi(this.options,this.attrs,Object.assign(Object.assign({},this.tooltip),{pos:e.mapPos(this.tooltip.pos)}),this.timestamp,this.selected)}}class Rn{constructor(e,t,i){this.active=e,this.id=t,this.open=i}static start(){return new Rn(Gg,"cm-ac-"+Math.floor(Math.random()*2e6).toString(36),null)}update(e){let{state:t}=e,i=t.facet(Tt),r=(i.override||t.languageDataAt("autocomplete",br(t)).map(Vg)).map(l=>(this.active.find(h=>h.source==l)||new Ye(l,this.active.some(h=>h.state!=0)?1:0)).update(e,i));r.length==this.active.length&&r.every((l,a)=>l==this.active[a])&&(r=this.active);let o=e.selection||r.some(l=>l.hasResult()&&e.changes.touchesRange(l.from,l.to))||!$g(r,this.active)?vi.build(r,t,this.id,this.open,i):this.open&&e.docChanged?this.open.map(e.changes):this.open;!o&&r.every(l=>l.state!=1)&&r.some(l=>l.hasResult())&&(r=r.map(l=>l.hasResult()?new Ye(l.source,0):l));for(let l of e.effects)l.is(Lc)&&(o=o&&o.setSelected(l.value,this.id));return r==this.active&&o==this.open?this:new Rn(r,this.id,o)}get tooltip(){return this.open?this.open.tooltip:null}get attrs(){return this.open?this.open.attrs:Ug}}function $g(n,e){if(n==e)return!0;for(let t=0,i=0;;){for(;t-1&&(t["aria-activedescendant"]=n+"-"+e),t}const Gg=[];function Jg(n){return n.isUserEvent("input.type")?"input":n.isUserEvent("delete.backward")?"delete":null}class Ye{constructor(e,t,i=-1){this.source=e,this.state=t,this.explicitPos=i}hasResult(){return!1}update(e,t){let i=Jg(e),s=this;i?s=s.handleUserEvent(e,i,t):e.docChanged?s=s.handleChange(e):e.selection&&s.state!=0&&(s=new Ye(s.source,0));for(let r of e.effects)if(r.is(Ec))s=new Ye(s.source,1,r.value?br(e.state):-1);else if(r.is(Rc))s=new Ye(s.source,0);else 
if(r.is(Yg))for(let o of r.value)o.source==s.source&&(s=o);return s}handleUserEvent(e,t,i){return t=="delete"||!i.activateOnTyping?this.map(e.changes):new Ye(this.source,1)}handleChange(e){return e.changes.touchesRange(br(e.startState))?new Ye(this.source,0):this.map(e.changes)}map(e){return e.empty||this.explicitPos<0?this:new Ye(this.source,this.state,e.mapPos(this.explicitPos))}}const Ec=R.define(),Rc=R.define(),Yg=R.define({map(n,e){return n.map(t=>t.map(e))}}),Lc=R.define(),_i=be.define({create(){return Rn.start()},update(n,e){return n.update(e)},provide:n=>[Tr.from(n,e=>e.tooltip),O.contentAttributes.from(n,e=>e.attrs)]});function rn(n,e="option"){return t=>{let i=t.state.field(_i,!1);if(!i||!i.open||Date.now()-i.open.timestamp-1?i.open.selected+s*(n?1:-1):n?0:o-1;return l<0?l=e=="page"?0:o-1:l>=o&&(l=e=="page"?o-1:0),t.dispatch({effects:Lc.of(l)}),!0}}const Xg=n=>{let e=n.state.field(_i,!1);return n.state.readOnly||!e||!e.open||e.open.selected<0||Date.now()-e.open.timestampn.state.field(_i,!1)?(n.dispatch({effects:Ec.of(!0)}),!0):!1,Qg=n=>{let e=n.state.field(_i,!1);return!e||!e.active.some(t=>t.state!=0)?!1:(n.dispatch({effects:Rc.of(null)}),!0)},e0=O.baseTheme({".cm-tooltip.cm-tooltip-autocomplete":{"& > ul":{fontFamily:"monospace",whiteSpace:"nowrap",overflow:"hidden auto",maxWidth_fallback:"700px",maxWidth:"min(700px, 95vw)",minWidth:"250px",maxHeight:"10em",listStyle:"none",margin:0,padding:0,"& > li":{overflowX:"hidden",textOverflow:"ellipsis",cursor:"pointer",padding:"1px 3px",lineHeight:1.2}}},"&light .cm-tooltip-autocomplete ul li[aria-selected]":{background:"#17c",color:"white"},"&dark .cm-tooltip-autocomplete ul li[aria-selected]":{background:"#347",color:"white"},".cm-completionListIncompleteTop:before, .cm-completionListIncompleteBottom:after":{content:'"···"',opacity:.5,display:"block",textAlign:"center"},".cm-tooltip.cm-completionInfo":{position:"absolute",padding:"3px 
9px",width:"max-content",maxWidth:"400px",boxSizing:"border-box"},".cm-completionInfo.cm-completionInfo-left":{right:"100%"},".cm-completionInfo.cm-completionInfo-right":{left:"100%"},".cm-completionInfo.cm-completionInfo-left-narrow":{right:"30px"},".cm-completionInfo.cm-completionInfo-right-narrow":{left:"30px"},"&light .cm-snippetField":{backgroundColor:"#00000022"},"&dark .cm-snippetField":{backgroundColor:"#ffffff22"},".cm-snippetFieldPosition":{verticalAlign:"text-top",width:0,height:"1.15em",display:"inline-block",margin:"0 -0.7px -.7em",borderLeft:"1.4px dotted #888"},".cm-completionMatchedText":{textDecoration:"underline"},".cm-completionDetail":{marginLeft:"0.5em",fontStyle:"italic"},".cm-completionIcon":{fontSize:"90%",width:".8em",display:"inline-block",textAlign:"center",paddingRight:".6em",opacity:"0.6"},".cm-completionIcon-function, .cm-completionIcon-method":{"&:after":{content:"'ƒ'"}},".cm-completionIcon-class":{"&:after":{content:"'○'"}},".cm-completionIcon-interface":{"&:after":{content:"'◌'"}},".cm-completionIcon-variable":{"&:after":{content:"'𝑥'"}},".cm-completionIcon-constant":{"&:after":{content:"'𝐶'"}},".cm-completionIcon-type":{"&:after":{content:"'𝑡'"}},".cm-completionIcon-enum":{"&:after":{content:"'∪'"}},".cm-completionIcon-property":{"&:after":{content:"'□'"}},".cm-completionIcon-keyword":{"&:after":{content:"'🔑︎'"}},".cm-completionIcon-namespace":{"&:after":{content:"'▢'"}},".cm-completionIcon-text":{"&:after":{content:"'abc'",fontSize:"50%",verticalAlign:"middle"}}});class t0{constructor(e,t,i,s){this.field=e,this.line=t,this.from=i,this.to=s}}class jr{constructor(e,t,i){this.field=e,this.from=t,this.to=i}map(e){let t=e.mapPos(this.from,-1,le.TrackDel),i=e.mapPos(this.to,1,le.TrackDel);return t==null||i==null?null:new jr(this.field,t,i)}}class Kr{constructor(e,t){this.lines=e,this.fieldPositions=t}instantiate(e,t){let i=[],s=[t],r=e.doc.lineAt(t),o=/^\s*/.exec(r.text)[0];for(let a of this.lines){if(i.length){let 
h=o,c=/^\t*/.exec(a)[0].length;for(let f=0;fnew jr(a.field,s[a.line]+a.from,s[a.line]+a.to));return{text:i,ranges:l}}static parse(e){let t=[],i=[],s=[],r;for(let o of e.split(/\r\n?|\n/)){for(;r=/[#$]\{(?:(\d+)(?::([^}]*))?|([^}]*))\}/.exec(o);){let l=r[1]?+r[1]:null,a=r[2]||r[3]||"",h=-1;for(let c=0;c=h&&f.field++}s.push(new t0(h,i.length,r.index,r.index+a.length)),o=o.slice(0,r.index)+a+o.slice(r.index+r[0].length)}for(let l;l=/([$#])\\{/.exec(o);){o=o.slice(0,l.index)+l[1]+"{"+o.slice(l.index+l[0].length);for(let a of s)a.line==i.length&&a.from>l.index&&(a.from--,a.to--)}i.push(o)}return new Kr(i,s)}}let i0=E.widget({widget:new class extends Ue{toDOM(){let n=document.createElement("span");return n.className="cm-snippetFieldPosition",n}ignoreEvent(){return!1}}}),n0=E.mark({class:"cm-snippetField"});class ii{constructor(e,t){this.ranges=e,this.active=t,this.deco=E.set(e.map(i=>(i.from==i.to?i0:n0).range(i.from,i.to)))}map(e){let t=[];for(let i of this.ranges){let s=i.map(e);if(!s)return null;t.push(s)}return new ii(t,this.active)}selectionInsideField(e){return e.ranges.every(t=>this.ranges.some(i=>i.field==this.active&&i.from<=t.from&&i.to>=t.to))}}const Vi=R.define({map(n,e){return n&&n.map(e)}}),s0=R.define(),Pi=be.define({create(){return null},update(n,e){for(let t of e.effects){if(t.is(Vi))return t.value;if(t.is(s0)&&n)return new ii(n.ranges,t.value)}return n&&e.docChanged&&(n=n.map(e.changes)),n&&e.selection&&!n.selectionInsideField(e.selection)&&(n=null),n},provide:n=>O.decorations.from(n,e=>e?e.deco:E.none)});function $r(n,e){return w.create(n.filter(t=>t.field==e).map(t=>w.range(t.from,t.to)))}function r0(n){let e=Kr.parse(n);return(t,i,s,r)=>{let{text:o,ranges:l}=e.instantiate(t.state,s),a={changes:{from:s,to:r,insert:_.of(o)},scrollIntoView:!0};if(l.length&&(a.selection=$r(l,0)),l.length>1){let h=new ii(l,0),c=a.effects=[Vi.of(h)];t.state.field(Pi,!1)===void 0&&c.push(R.appendConfig.of([Pi,c0,f0,e0]))}t.dispatch(t.state.update(a))}}function 
Ic(n){return({state:e,dispatch:t})=>{let i=e.field(Pi,!1);if(!i||n<0&&i.active==0)return!1;let s=i.active+n,r=n>0&&!i.ranges.some(o=>o.field==s+n);return t(e.update({selection:$r(i.ranges,s),effects:Vi.of(r?null:new ii(i.ranges,s))})),!0}}const o0=({state:n,dispatch:e})=>n.field(Pi,!1)?(e(n.update({effects:Vi.of(null)})),!0):!1,l0=Ic(1),a0=Ic(-1),h0=[{key:"Tab",run:l0,shift:a0},{key:"Escape",run:o0}],Fl=D.define({combine(n){return n.length?n[0]:h0}}),c0=Ri.highest(Vn.compute([Fl],n=>n.facet(Fl)));function xb(n,e){return Object.assign(Object.assign({},e),{apply:r0(n)})}const f0=O.domEventHandlers({mousedown(n,e){let t=e.state.field(Pi,!1),i;if(!t||(i=e.posAtCoords({x:n.clientX,y:n.clientY}))==null)return!1;let s=t.ranges.find(r=>r.from<=i&&r.to>=i);return!s||s.field==t.active?!1:(e.dispatch({selection:$r(t.ranges,s.field),effects:Vi.of(t.ranges.some(r=>r.field>s.field)?new ii(t.ranges,s.field):null)}),!0)}}),Ei={brackets:["(","[","{","'",'"'],before:")]}:;>",stringPrefixes:[]},kt=R.define({map(n,e){let t=e.mapPos(n,-1,le.TrackAfter);return t??void 0}}),Ur=R.define({map(n,e){return e.mapPos(n)}}),Gr=new class extends St{};Gr.startSide=1;Gr.endSide=-1;const Nc=be.define({create(){return F.empty},update(n,e){if(e.selection){let t=e.state.doc.lineAt(e.selection.main.head).from,i=e.startState.doc.lineAt(e.startState.selection.main.head).from;t!=e.changes.mapPos(i,-1)&&(n=F.empty)}n=n.map(e.changes);for(let t of e.effects)t.is(kt)?n=n.update({add:[Gr.range(t.value,t.value+1)]}):t.is(Ur)&&(n=n.update({filter:i=>i!=t.value}));return n}});function u0(){return[p0,Nc]}const Cs="()[]{}<>";function _c(n){for(let e=0;e{if((d0?n.composing:n.compositionStarted)||n.state.readOnly)return!1;let s=n.state.selection.main;if(i.length>2||i.length==2&&Ce(ce(i,0))==1||e!=s.from||t!=s.to)return!1;let r=y0(n.state,i);return r?(n.dispatch(r),!0):!1}),m0=({state:n,dispatch:e})=>{if(n.readOnly)return!1;let 
i=Vc(n,n.selection.main.head).brackets||Ei.brackets,s=null,r=n.changeByRange(o=>{if(o.empty){let l=b0(n.doc,o.head);for(let a of i)if(a==l&&Un(n.doc,o.head)==_c(ce(a,0)))return{changes:{from:o.head-a.length,to:o.head+a.length},range:w.cursor(o.head-a.length)}}return{range:s=o}});return s||e(n.update(r,{scrollIntoView:!0,userEvent:"delete.backward"})),!s},g0=[{key:"Backspace",run:m0}];function y0(n,e){let t=Vc(n,n.selection.main.head),i=t.brackets||Ei.brackets;for(let s of i){let r=_c(ce(s,0));if(e==s)return r==s?v0(n,s,i.indexOf(s+s+s)>-1,t):w0(n,s,r,t.before||Ei.before);if(e==r&&Fc(n,n.selection.main.from))return k0(n,s,r)}return null}function Fc(n,e){let t=!1;return n.field(Nc).between(0,n.doc.length,i=>{i==e&&(t=!0)}),t}function Un(n,e){let t=n.sliceString(e,e+2);return t.slice(0,Ce(ce(t,0)))}function b0(n,e){let t=n.sliceString(e-2,e);return Ce(ce(t,0))==t.length?t:t.slice(1)}function w0(n,e,t,i){let s=null,r=n.changeByRange(o=>{if(!o.empty)return{changes:[{insert:e,from:o.from},{insert:t,from:o.to}],effects:kt.of(o.to+e.length),range:w.range(o.anchor+e.length,o.head+e.length)};let l=Un(n.doc,o.head);return!l||/\s/.test(l)||i.indexOf(l)>-1?{changes:{insert:e+t,from:o.head},effects:kt.of(o.head+e.length),range:w.cursor(o.head+e.length)}:{range:s=o}});return s?null:n.update(r,{scrollIntoView:!0,userEvent:"input.type"})}function k0(n,e,t){let i=null,s=n.selection.ranges.map(r=>r.empty&&Un(n.doc,r.head)==t?w.cursor(r.head+t.length):i=r);return i?null:n.update({selection:w.create(s,n.selection.mainIndex),scrollIntoView:!0,effects:n.selection.ranges.map(({from:r})=>Ur.of(r))})}function v0(n,e,t,i){let s=i.stringPrefixes||Ei.stringPrefixes,r=null,o=n.changeByRange(l=>{if(!l.empty)return{changes:[{insert:e,from:l.from},{insert:e,from:l.to}],effects:kt.of(l.to+e.length),range:w.range(l.anchor+e.length,l.head+e.length)};let a=l.head,h=Un(n.doc,a),c;if(h==e){if(Hl(n,a))return{changes:{insert:e+e,from:a},effects:kt.of(a+e.length),range:w.cursor(a+e.length)};if(Fc(n,a)){let 
f=t&&n.sliceDoc(a,a+e.length*3)==e+e+e;return{range:w.cursor(a+e.length*(f?3:1)),effects:Ur.of(a)}}}else{if(t&&n.sliceDoc(a-2*e.length,a)==e+e&&(c=Wl(n,a-2*e.length,s))>-1&&Hl(n,c))return{changes:{insert:e+e+e+e,from:a},effects:kt.of(a+e.length),range:w.cursor(a+e.length)};if(n.charCategorizer(a)(h)!=Ae.Word&&Wl(n,a,s)>-1&&!x0(n,a,e,s))return{changes:{insert:e+e,from:a},effects:kt.of(a+e.length),range:w.cursor(a+e.length)}}return{range:r=l}});return r?null:n.update(o,{scrollIntoView:!0,userEvent:"input.type"})}function Hl(n,e){let t=ae(n).resolveInner(e+1);return t.parent&&t.from==e}function x0(n,e,t,i){let s=ae(n).resolveInner(e,-1),r=i.reduce((o,l)=>Math.max(o,l.length),0);for(let o=0;o<5;o++){let l=n.sliceDoc(s.from,Math.min(s.to,s.from+t.length+r)),a=l.indexOf(t);if(!a||a>-1&&i.indexOf(l.slice(0,a))>-1){let c=s.firstChild;for(;c&&c.from==s.from&&c.to-c.from>t.length+a;){if(n.sliceDoc(c.to-t.length,c.to)==t)return!1;c=c.firstChild}return!0}let h=s.to==e&&s.parent;if(!h)break;s=h}return!1}function Wl(n,e,t){let i=n.charCategorizer(e);if(i(n.sliceDoc(e-1,e))!=Ae.Word)return e;for(let s of t){let r=e-s.length;if(n.sliceDoc(r,e)==s&&i(n.sliceDoc(r-1,r))!=Ae.Word)return r}return-1}const S0=[{key:"Ctrl-Space",run:Zg},{key:"Escape",run:Qg},{key:"ArrowDown",run:rn(!0)},{key:"ArrowUp",run:rn(!1)},{key:"PageDown",run:rn(!0,"page")},{key:"PageUp",run:rn(!1,"page")},{key:"Enter",run:Xg}];function We(){var n=arguments[0];typeof n=="string"&&(n=document.createElement(n));var e=1,t=arguments[1];if(t&&typeof t=="object"&&t.nodeType==null&&!Array.isArray(t)){for(var i in t)if(Object.prototype.hasOwnProperty.call(t,i)){var s=t[i];typeof s=="string"?n.setAttribute(i,s):s!=null&&(n[i]=s)}e++}for(;el.from==l.to||l.from==l.to-1&&i.doc.lineAt(l.from).to==l.from?E.widget({widget:new L0(l),diagnostic:l}).range(l.from):E.mark({attributes:{class:"cm-lintRange cm-lintRange-"+l.severity},diagnostic:l}).range(l.from,l.to)),!0);return new yt(o,t,Qt(o))}}function Qt(n,e=null,t=0){let 
i=null;return n.between(t,1e9,(s,r,{spec:o})=>{if(!(e&&o.diagnostic!=e))return i=new C0(s,r,o.diagnostic),!1}),i}function A0(n,e){return!!(n.effects.some(t=>t.is(Jr))||n.changes.touchesRange(e.pos))}function Wc(n,e){return n.field(xe,!1)?e:e.concat(R.appendConfig.of([xe,O.decorations.compute([xe],t=>{let{selected:i,panel:s}=t.field(xe);return!i||!s||i.from==i.to?E.none:E.set([D0.range(i.from,i.to)])}),Wd(T0,{hideOn:A0}),N0]))}function M0(n,e){return{effects:Wc(n,[Jr.of(e)])}}const Jr=R.define(),Yr=R.define(),zc=R.define(),xe=be.define({create(){return new yt(E.none,null,null)},update(n,e){if(e.docChanged){let t=n.diagnostics.map(e.changes),i=null;if(n.selected){let s=e.changes.mapPos(n.selected.from,1);i=Qt(t,n.selected.diagnostic,s)||Qt(t,null,s)}n=new yt(t,n.panel,i)}for(let t of e.effects)t.is(Jr)?n=yt.init(t.value,n.panel,e.state):t.is(Yr)?n=new yt(n.diagnostics,t.value?Gn.open:null,n.selected):t.is(zc)&&(n=new yt(n.diagnostics,n.panel,t.value));return n},provide:n=>[rr.from(n,e=>e.panel),O.decorations.from(n,e=>e.diagnostics)]}),D0=E.mark({class:"cm-lintRange cm-lintRange-active"});function T0(n,e,t){let{diagnostics:i}=n.state.field(xe),s=[],r=2e8,o=0;i.between(e-(t<0?1:0),e+(t>0?1:0),(a,h,{spec:c})=>{e>=a&&e<=h&&(a==h||(e>a||t>0)&&(ejc(n,t,!1)))}const B0=n=>{let e=n.state.field(xe,!1);(!e||!e.panel)&&n.dispatch({effects:Wc(n.state,[Yr.of(!0)])});let t=jd(n,Gn.open);return t&&t.dom.querySelector(".cm-panel-lint ul").focus(),!0},zl=n=>{let e=n.state.field(xe,!1);return!e||!e.panel?!1:(n.dispatch({effects:Yr.of(!1)}),!0)},P0=n=>{let e=n.state.field(xe,!1);if(!e)return!1;let 
t=n.state.selection.main,i=e.diagnostics.iter(t.to+1);return!i.value&&(i=e.diagnostics.iter(0),!i.value||i.from==t.from&&i.to==t.to)?!1:(n.dispatch({selection:{anchor:i.from,head:i.to},scrollIntoView:!0}),!0)},E0=[{key:"Mod-Shift-m",run:B0},{key:"F8",run:P0}],R0=ue.fromClass(class{constructor(n){this.view=n,this.timeout=-1,this.set=!0;let{delay:e}=n.state.facet(zt);this.lintTime=Date.now()+e,this.run=this.run.bind(this),this.timeout=setTimeout(this.run,e)}run(){let n=Date.now();if(nPromise.resolve(i(this.view)))).then(i=>{let s=i.reduce((r,o)=>r.concat(o));this.view.state.doc==e.doc&&this.view.dispatch(M0(this.view.state,s))},i=>{Ee(this.view.state,i)})}}update(n){let e=n.state.facet(zt);(n.docChanged||e!=n.startState.facet(zt))&&(this.lintTime=Date.now()+e.delay,this.set||(this.set=!0,this.timeout=setTimeout(this.run,e.delay)))}force(){this.set&&(this.lintTime=Date.now(),this.run())}destroy(){clearTimeout(this.timeout)}}),zt=D.define({combine(n){return Object.assign({sources:n.map(e=>e.source)},Rt(n.map(e=>e.config),{delay:750,markerFilter:null,tooltipFilter:null}))},enables:R0});function qc(n){let e=[];if(n)e:for(let{name:t}of n){for(let i=0;ir.toLowerCase()==s.toLowerCase())){e.push(s);continue e}}e.push("")}return e}function jc(n,e,t){var i;let s=t?qc(e.actions):[];return We("li",{class:"cm-diagnostic cm-diagnostic-"+e.severity},We("span",{class:"cm-diagnosticText"},e.renderMessage?e.renderMessage():e.message),(i=e.actions)===null||i===void 0?void 0:i.map((r,o)=>{let l=f=>{f.preventDefault();let u=Qt(n.state.field(xe).diagnostics,e);u&&r.apply(n,u.from,u.to)},{name:a}=r,h=s[o]?a.indexOf(s[o]):-1,c=h<0?a:[a.slice(0,h),We("u",a.slice(h,h+1)),a.slice(h+1)];return We("button",{type:"button",class:"cm-diagnosticAction",onclick:l,onmousedown:l,"aria-label":` Action: ${a}${h<0?"":` (access key "${s[o]})"`}.`},c)}),e.source&&We("div",{class:"cm-diagnosticSource"},e.source))}class L0 extends Ue{constructor(e){super(),this.diagnostic=e}eq(e){return 
e.diagnostic==this.diagnostic}toDOM(){return We("span",{class:"cm-lintPoint cm-lintPoint-"+this.diagnostic.severity})}}class ql{constructor(e,t){this.diagnostic=t,this.id="item_"+Math.floor(Math.random()*4294967295).toString(16),this.dom=jc(e,t,!0),this.dom.id=this.id,this.dom.setAttribute("role","option")}}class Gn{constructor(e){this.view=e,this.items=[];let t=s=>{if(s.keyCode==27)zl(this.view),this.view.focus();else if(s.keyCode==38||s.keyCode==33)this.moveSelection((this.selectedIndex-1+this.items.length)%this.items.length);else if(s.keyCode==40||s.keyCode==34)this.moveSelection((this.selectedIndex+1)%this.items.length);else if(s.keyCode==36)this.moveSelection(0);else if(s.keyCode==35)this.moveSelection(this.items.length-1);else if(s.keyCode==13)this.view.focus();else if(s.keyCode>=65&&s.keyCode<=90&&this.selectedIndex>=0){let{diagnostic:r}=this.items[this.selectedIndex],o=qc(r.actions);for(let l=0;l{for(let r=0;rzl(this.view)},"×")),this.update()}get selectedIndex(){let e=this.view.state.field(xe).selected;if(!e)return-1;for(let t=0;t{let h=-1,c;for(let f=i;fi&&(this.items.splice(i,h-i),s=!0)),t&&c.diagnostic==t.diagnostic?c.dom.hasAttribute("aria-selected")||(c.dom.setAttribute("aria-selected","true"),r=c):c.dom.hasAttribute("aria-selected")&&c.dom.removeAttribute("aria-selected"),i++});i({sel:r.dom.getBoundingClientRect(),panel:this.list.getBoundingClientRect()}),write:({sel:o,panel:l})=>{o.topl.bottom&&(this.list.scrollTop+=o.bottom-l.bottom)}})):this.selectedIndex<0&&this.list.removeAttribute("aria-activedescendant"),s&&this.sync()}sync(){let e=this.list.firstChild;function t(){let i=e;e=i.nextSibling,i.remove()}for(let i of this.items)if(i.dom.parentNode==this.list){for(;e!=i.dom;)t();e=i.dom.nextSibling}else this.list.insertBefore(i.dom,e);for(;e;)t()}moveSelection(e){if(this.selectedIndex<0)return;let 
t=this.view.state.field(xe),i=Qt(t.diagnostics,this.items[e].diagnostic);i&&this.view.dispatch({selection:{anchor:i.from,head:i.to},scrollIntoView:!0,effects:zc.of(i)})}static open(e){return new Gn(e)}}function I0(n,e='viewBox="0 0 40 40"'){return`url('data:image/svg+xml,${encodeURIComponent(n)}')`}function As(n){return I0(``,'width="6" height="3"')}const N0=O.baseTheme({".cm-diagnostic":{padding:"3px 6px 3px 8px",marginLeft:"-1px",display:"block",whiteSpace:"pre-wrap"},".cm-diagnostic-error":{borderLeft:"5px solid #d11"},".cm-diagnostic-warning":{borderLeft:"5px solid orange"},".cm-diagnostic-info":{borderLeft:"5px solid #999"},".cm-diagnosticAction":{font:"inherit",border:"none",padding:"2px 4px",backgroundColor:"#444",color:"white",borderRadius:"3px",marginLeft:"8px"},".cm-diagnosticSource":{fontSize:"70%",opacity:.7},".cm-lintRange":{backgroundPosition:"left bottom",backgroundRepeat:"repeat-x",paddingBottom:"0.7px"},".cm-lintRange-error":{backgroundImage:As("#d11")},".cm-lintRange-warning":{backgroundImage:As("orange")},".cm-lintRange-info":{backgroundImage:As("#999")},".cm-lintRange-active":{backgroundColor:"#ffdd9980"},".cm-tooltip-lint":{padding:0,margin:0},".cm-lintPoint":{position:"relative","&:after":{content:'""',position:"absolute",bottom:0,left:"-2px",borderLeft:"3px solid transparent",borderRight:"3px solid transparent",borderBottom:"4px solid #d11"}},".cm-lintPoint-warning":{"&:after":{borderBottomColor:"orange"}},".cm-lintPoint-info":{"&:after":{borderBottomColor:"#999"}},".cm-panel.cm-panel-lint":{position:"relative","& ul":{maxHeight:"100px",overflowY:"auto","& [aria-selected]":{backgroundColor:"#ddd","& u":{textDecoration:"underline"}},"&:focus [aria-selected]":{background_fallback:"#bdf",backgroundColor:"Highlight",color_fallback:"white",color:"HighlightText"},"& u":{textDecoration:"none"},padding:0,margin:0},"& 
[name=close]":{position:"absolute",top:"0",right:"2px",background:"inherit",border:"none",font:"inherit",padding:0,margin:0}}}),_0=(()=>[Zd(),wd(),bm(),Kp(),hd(),N.allowMultipleSelections.of(!0),Ep(),Nr(Jp,{fallback:!0}),u0(),Bd(),Rd(),Vn.of([...g0,...gg,...Dm,...zp,...S0,...E0])])(),jl={python:()=>Se(()=>import("./index-fe5a6d0b.js"),["assets/index-fe5a6d0b.js","assets/index-043aba05.js","assets/index-0526d562.js","assets/index-02e0d00d.css","assets/Button-89057c03.js","assets/Index-37584f50.js","assets/Index-5cf1892e.css","assets/Button-8a6aeb2c.css","assets/Copy-1b5c0932.js","assets/Download-696bd40c.js","assets/BlockLabel-e3b0d1c3.js","assets/Empty-937365d8.js","assets/Example-e03fb3b4.js","assets/Example-f75cba10.css"]).then(n=>n.python()),markdown:async()=>{const[n,e]=await Promise.all([Se(()=>import("./index-b4c39f65.js"),["assets/index-b4c39f65.js","assets/index-c9080bb1.js","assets/index-043aba05.js","assets/index-485ddedd.js","assets/index-0526d562.js","assets/index-02e0d00d.css","assets/Button-89057c03.js","assets/Index-37584f50.js","assets/Index-5cf1892e.css","assets/Button-8a6aeb2c.css","assets/Copy-1b5c0932.js","assets/Download-696bd40c.js","assets/BlockLabel-e3b0d1c3.js","assets/Empty-937365d8.js","assets/Example-e03fb3b4.js","assets/Example-f75cba10.css","assets/index-e50b5d95.js"]),Se(()=>import("./frontmatter-11de9c32.js"),["assets/frontmatter-11de9c32.js","assets/yaml-95012b83.js","assets/index-0526d562.js","assets/index-02e0d00d.css","assets/Button-89057c03.js","assets/Index-37584f50.js","assets/Index-5cf1892e.css","assets/Button-8a6aeb2c.css","assets/Copy-1b5c0932.js","assets/Download-696bd40c.js","assets/BlockLabel-e3b0d1c3.js","assets/Empty-937365d8.js","assets/Example-e03fb3b4.js","assets/Example-f75cba10.css"])]);return 
n.markdown({extensions:[e.frontmatter]})},json:()=>Se(()=>import("./index-d09913b2.js"),["assets/index-d09913b2.js","assets/index-043aba05.js","assets/index-0526d562.js","assets/index-02e0d00d.css","assets/Button-89057c03.js","assets/Index-37584f50.js","assets/Index-5cf1892e.css","assets/Button-8a6aeb2c.css","assets/Copy-1b5c0932.js","assets/Download-696bd40c.js","assets/BlockLabel-e3b0d1c3.js","assets/Empty-937365d8.js","assets/Example-e03fb3b4.js","assets/Example-f75cba10.css"]).then(n=>n.json()),html:()=>Se(()=>import("./index-c9080bb1.js"),["assets/index-c9080bb1.js","assets/index-043aba05.js","assets/index-485ddedd.js","assets/index-0526d562.js","assets/index-02e0d00d.css","assets/Button-89057c03.js","assets/Index-37584f50.js","assets/Index-5cf1892e.css","assets/Button-8a6aeb2c.css","assets/Copy-1b5c0932.js","assets/Download-696bd40c.js","assets/BlockLabel-e3b0d1c3.js","assets/Empty-937365d8.js","assets/Example-e03fb3b4.js","assets/Example-f75cba10.css","assets/index-e50b5d95.js"]).then(n=>n.html()),css:()=>Se(()=>import("./index-485ddedd.js"),["assets/index-485ddedd.js","assets/index-043aba05.js","assets/index-0526d562.js","assets/index-02e0d00d.css","assets/Button-89057c03.js","assets/Index-37584f50.js","assets/Index-5cf1892e.css","assets/Button-8a6aeb2c.css","assets/Copy-1b5c0932.js","assets/Download-696bd40c.js","assets/BlockLabel-e3b0d1c3.js","assets/Empty-937365d8.js","assets/Example-e03fb3b4.js","assets/Example-f75cba10.css"]).then(n=>n.css()),javascript:()=>Se(()=>import("./index-e50b5d95.js"),["assets/index-e50b5d95.js","assets/index-043aba05.js","assets/index-0526d562.js","assets/index-02e0d00d.css","assets/Button-89057c03.js","assets/Index-37584f50.js","assets/Index-5cf1892e.css","assets/Button-8a6aeb2c.css","assets/Copy-1b5c0932.js","assets/Download-696bd40c.js","assets/BlockLabel-e3b0d1c3.js","assets/Empty-937365d8.js","assets/Example-e03fb3b4.js","assets/Example-f75cba10.css"]).then(n=>n.javascript()),typescript:()=>Se(()=>import("./index-e50b5d95
.js"),["assets/index-e50b5d95.js","assets/index-043aba05.js","assets/index-0526d562.js","assets/index-02e0d00d.css","assets/Button-89057c03.js","assets/Index-37584f50.js","assets/Index-5cf1892e.css","assets/Button-8a6aeb2c.css","assets/Copy-1b5c0932.js","assets/Download-696bd40c.js","assets/BlockLabel-e3b0d1c3.js","assets/Empty-937365d8.js","assets/Example-e03fb3b4.js","assets/Example-f75cba10.css"]).then(n=>n.javascript({typescript:!0})),yaml:()=>Se(()=>import("./yaml-95012b83.js"),[]).then(n=>Wt.define(n.yaml)),dockerfile:()=>Se(()=>import("./dockerfile-d67bbd50.js"),[]).then(n=>Wt.define(n.dockerFile)),shell:()=>Se(()=>import("./shell-86dd1d99.js"),[]).then(n=>Wt.define(n.shell)),r:()=>Se(()=>import("./r-3ca97919.js"),[]).then(n=>Wt.define(n.r))},V0={py:"python",md:"markdown",js:"javascript",ts:"typescript",sh:"shell"};async function F0(n){const e=jl[n]||jl[V0[n]]||void 0;if(e)return e()}const{SvelteComponent:H0,append:W0,attr:Ms,binding_callbacks:z0,detach:q0,element:Kl,init:j0,insert:K0,noop:$l,safe_not_equal:$0}=window.__gradio__svelte__internal,{createEventDispatcher:U0,onMount:G0}=window.__gradio__svelte__internal;function J0(n){let e,t,i;return{c(){e=Kl("div"),t=Kl("div"),Ms(t,"class",i="codemirror-wrapper "+n[0]+" svelte-1sc8eck"),Ms(e,"class","wrap svelte-1sc8eck")},m(s,r){K0(s,e,r),W0(e,t),n[12](t)},p(s,[r]){r&1&&i!==(i="codemirror-wrapper "+s[0]+" svelte-1sc8eck")&&Ms(t,"class",i)},i:$l,o:$l,d(s){s&&q0(e),n[12](null)}}}function Y0(n){let e=n.dom.querySelectorAll(".cm-gutterElement");if(e.length===0)return null;for(var t=0;t(y=k(),()=>y?.destroy()));function X(M){z0[M?"unshift":"push"](()=>{g=M,t(1,g)})}return n.$$set=M=>{"classNames"in M&&t(0,i=M.classNames),"value"in M&&t(2,s=M.value),"dark_mode"in M&&t(3,r=M.dark_mode),"basic"in M&&t(4,o=M.basic),"language"in M&&t(5,l=M.language),"lines"in M&&t(6,a=M.lines),"extensions"in M&&t(7,h=M.extensions),"useTab"in M&&t(8,c=M.useTab),"readonly"in M&&t(9,f=M.readonly),"placeholder"in 
M&&t(10,u=M.placeholder)},n.$$.update=()=>{n.$$.dirty&32&&b(l),n.$$.dirty&2048&&K(),n.$$.dirty&4&&v(s)},S(),[i,g,s,r,o,l,a,h,c,f,u,p,X]}class Z0 extends H0{constructor(e){super(),j0(this,e,X0,J0,$0,{classNames:0,value:2,dark_mode:3,basic:4,language:5,lines:6,extensions:7,useTab:8,readonly:9,placeholder:10})}}const Kc=Z0;const{SvelteComponent:Q0,add_render_callback:ey,append:ty,attr:vt,check_outros:iy,create_bidirectional_transition:Ul,create_component:$c,destroy_component:Uc,detach:Gc,element:Jc,group_outros:ny,init:sy,insert:Yc,listen:ry,mount_component:Xc,safe_not_equal:oy,space:ly,toggle_class:Gl,transition_in:mi,transition_out:un}=window.__gradio__svelte__internal,{onDestroy:ay}=window.__gradio__svelte__internal;function Jl(n){let e,t,i,s;return t=new sa({}),{c(){e=Jc("span"),$c(t.$$.fragment),vt(e,"class","check svelte-qi7jcw"),vt(e,"aria-roledescription","Value copied"),vt(e,"aria-label","Copied")},m(r,o){Yc(r,e,o),Xc(t,e,null),s=!0},i(r){s||(mi(t.$$.fragment,r),r&&ey(()=>{s&&(i||(i=Ul(e,gn,{},!0)),i.run(1))}),s=!0)},o(r){un(t.$$.fragment,r),r&&(i||(i=Ul(e,gn,{},!1)),i.run(0)),s=!1},d(r){r&&Gc(e),Uc(t),r&&i&&i.end()}}}function hy(n){let e,t,i,s,r,o;t=new hf({});let l=n[0]&&Jl();return{c(){e=Jc("button"),$c(t.$$.fragment),i=ly(),l&&l.c(),vt(e,"title","copy"),vt(e,"aria-roledescription","Copy value"),vt(e,"aria-label","Copy"),vt(e,"class","svelte-qi7jcw"),Gl(e,"copied",n[0])},m(a,h){Yc(a,e,h),Xc(t,e,null),ty(e,i),l&&l.m(e,null),s=!0,r||(o=ry(e,"click",n[1]),r=!0)},p(a,[h]){a[0]?l?h&1&&mi(l,1):(l=Jl(),l.c(),mi(l,1),l.m(e,null)):l&&(ny(),un(l,1,1,()=>{l=null}),iy()),(!s||h&1)&&Gl(e,"copied",a[0])},i(a){s||(mi(t.$$.fragment,a),mi(l),s=!0)},o(a){un(t.$$.fragment,a),un(l),s=!1},d(a){a&&Gc(e),Uc(t),l&&l.d(),r=!1,o()}}}function cy(n,e,t){let i=!1,{value:s}=e,r;function o(){t(0,i=!0),r&&clearTimeout(r),r=setTimeout(()=>{t(0,i=!1)},2e3)}async function l(){"clipboard"in navigator&&(await navigator.clipboard.writeText(s),o())}return 
ay(()=>{r&&clearTimeout(r)}),n.$$set=a=>{"value"in a&&t(2,s=a.value)},[i,l,s]}class fy extends Q0{constructor(e){super(),sy(this,e,cy,hy,oy,{value:2})}}const Zc=fy;const{SvelteComponent:uy,add_render_callback:dy,append:py,attr:Vt,check_outros:my,create_bidirectional_transition:Yl,create_component:Qc,destroy_component:ef,detach:tf,element:nf,group_outros:gy,init:yy,insert:sf,listen:by,mount_component:rf,safe_not_equal:wy,space:ky,toggle_class:Xl,transition_in:gi,transition_out:dn}=window.__gradio__svelte__internal,{onDestroy:vy}=window.__gradio__svelte__internal;function Zl(n){let e,t,i,s;return t=new sa({}),{c(){e=nf("span"),Qc(t.$$.fragment),Vt(e,"class","check svelte-14d303a")},m(r,o){sf(r,e,o),rf(t,e,null),s=!0},i(r){s||(gi(t.$$.fragment,r),r&&dy(()=>{s&&(i||(i=Yl(e,gn,{},!0)),i.run(1))}),s=!0)},o(r){dn(t.$$.fragment,r),r&&(i||(i=Yl(e,gn,{},!1)),i.run(0)),s=!1},d(r){r&&tf(e),ef(t),r&&i&&i.end()}}}function xy(n){let e,t,i,s,r,o,l;t=new ff({});let a=n[0]&&Zl();return{c(){e=nf("a"),Qc(t.$$.fragment),i=ky(),a&&a.c(),Vt(e,"download",s="file."+n[2]),Vt(e,"href",n[1]),Vt(e,"class","svelte-14d303a"),Xl(e,"copied",n[0])},m(h,c){sf(h,e,c),rf(t,e,null),py(e,i),a&&a.m(e,null),r=!0,o||(l=by(e,"click",n[3]),o=!0)},p(h,[c]){h[0]?a?c&1&&gi(a,1):(a=Zl(),a.c(),gi(a,1),a.m(e,null)):a&&(gy(),dn(a,1,1,()=>{a=null}),my()),(!r||c&4&&s!==(s="file."+h[2]))&&Vt(e,"download",s),(!r||c&2)&&Vt(e,"href",h[1]),(!r||c&1)&&Xl(e,"copied",h[0])},i(h){r||(gi(t.$$.fragment,h),gi(a),r=!0)},o(h){dn(t.$$.fragment,h),dn(a),r=!1},d(h){h&&tf(e),ef(t),a&&a.d(),o=!1,l()}}}function Sy(n){return{py:"py",python:"py",md:"md",markdown:"md",json:"json",html:"html",css:"css",js:"js",javascript:"js",ts:"ts",typescript:"ts",yaml:"yaml",yml:"yml",dockerfile:"dockerfile",sh:"sh",shell:"sh",r:"r"}[n]||"txt"}function Cy(n,e,t){let i,s,{value:r}=e,{language:o}=e,l=!1,a;function h(){t(0,l=!0),a&&clearTimeout(a),a=setTimeout(()=>{t(0,l=!1)},2e3)}return vy(()=>{a&&clearTimeout(a)}),n.$$set=c=>{"value"in 
c&&t(4,r=c.value),"language"in c&&t(5,o=c.language)},n.$$.update=()=>{n.$$.dirty&32&&t(2,i=Sy(o)),n.$$.dirty&16&&t(1,s=URL.createObjectURL(new Blob([r])))},[l,s,i,h,r,o]}class Ay extends uy{constructor(e){super(),yy(this,e,Cy,xy,wy,{value:4,language:5})}}const of=Ay;const{SvelteComponent:My,append:Dy,attr:Ty,create_component:Ql,destroy_component:ea,detach:Oy,element:By,init:Py,insert:Ey,mount_component:ta,safe_not_equal:Ry,space:Ly,transition_in:ia,transition_out:na}=window.__gradio__svelte__internal;function Iy(n){let e,t,i,s,r;return t=new of({props:{value:n[0],language:n[1]}}),s=new Zc({props:{value:n[0]}}),{c(){e=By("div"),Ql(t.$$.fragment),i=Ly(),Ql(s.$$.fragment),Ty(e,"class","svelte-1yin446")},m(o,l){Ey(o,e,l),ta(t,e,null),Dy(e,i),ta(s,e,null),r=!0},p(o,[l]){const a={};l&1&&(a.value=o[0]),l&2&&(a.language=o[1]),t.$set(a);const h={};l&1&&(h.value=o[0]),s.$set(h)},i(o){r||(ia(t.$$.fragment,o),ia(s.$$.fragment,o),r=!0)},o(o){na(t.$$.fragment,o),na(s.$$.fragment,o),r=!1},d(o){o&&Oy(e),ea(t),ea(s)}}}function Ny(n,e,t){let{value:i}=e,{language:s}=e;return n.$$set=r=>{"value"in r&&t(0,i=r.value),"language"in r&&t(1,s=r.language)},[i,s]}class _y extends My{constructor(e){super(),Py(this,e,Ny,Iy,Ry,{value:0,language:1})}}const lf=_y,{SvelteComponent:Vy,add_flush_callback:Fy,assign:Hy,bind:Wy,binding_callbacks:zy,check_outros:qy,create_component:Ot,destroy_component:Bt,detach:pn,empty:jy,get_spread_object:Ky,get_spread_update:$y,group_outros:Uy,init:Gy,insert:mn,mount_component:Pt,safe_not_equal:Jy,space:wr,transition_in:Xe,transition_out:Ze}=window.__gradio__svelte__internal,{afterUpdate:Yy}=window.__gradio__svelte__internal;function Xy(n){let e,t,i,s,r;e=new lf({props:{language:n[2],value:n[0]}});function o(a){n[15](a)}let l={language:n[2],lines:n[3],dark_mode:n[12],readonly:!n[11]};return n[0]!==void 0&&(l.value=n[0]),i=new 
Kc({props:l}),zy.push(()=>Wy(i,"value",o)),{c(){Ot(e.$$.fragment),t=wr(),Ot(i.$$.fragment)},m(a,h){Pt(e,a,h),mn(a,t,h),Pt(i,a,h),r=!0},p(a,h){const c={};h&4&&(c.language=a[2]),h&1&&(c.value=a[0]),e.$set(c);const f={};h&4&&(f.language=a[2]),h&8&&(f.lines=a[3]),h&2048&&(f.readonly=!a[11]),!s&&h&1&&(s=!0,f.value=a[0],Fy(()=>s=!1)),i.$set(f)},i(a){r||(Xe(e.$$.fragment,a),Xe(i.$$.fragment,a),r=!0)},o(a){Ze(e.$$.fragment,a),Ze(i.$$.fragment,a),r=!1},d(a){a&&pn(t),Bt(e,a),Bt(i,a)}}}function Zy(n){let e,t;return e=new df({props:{unpadded_box:!0,size:"large",$$slots:{default:[Qy]},$$scope:{ctx:n}}}),{c(){Ot(e.$$.fragment)},m(i,s){Pt(e,i,s),t=!0},p(i,s){const r={};s&131072&&(r.$$scope={dirty:s,ctx:i}),e.$set(r)},i(i){t||(Xe(e.$$.fragment,i),t=!0)},o(i){Ze(e.$$.fragment,i),t=!1},d(i){Bt(e,i)}}}function Qy(n){let e,t;return e=new ra({}),{c(){Ot(e.$$.fragment)},m(i,s){Pt(e,i,s),t=!0},i(i){t||(Xe(e.$$.fragment,i),t=!0)},o(i){Ze(e.$$.fragment,i),t=!1},d(i){Bt(e,i)}}}function eb(n){let e,t,i,s,r,o,l,a;const h=[{autoscroll:n[1].autoscroll},{i18n:n[1].i18n},n[9]];let c={};for(let p=0;p{u[v]=null}),qy(),o=u[r],o?o.p(p,g):(o=u[r]=f[r](p),o.c()),Xe(o,1),o.m(l.parentNode,l))},i(p){a||(Xe(e.$$.fragment,p),Xe(i.$$.fragment,p),Xe(o),a=!0)},o(p){Ze(e.$$.fragment,p),Ze(i.$$.fragment,p),Ze(o),a=!1},d(p){p&&(pn(t),pn(s),pn(l)),Bt(e,p),Bt(i,p),u[r].d(p)}}}function tb(n){let e,t;return e=new af({props:{variant:"solid",padding:!1,elem_id:n[4],elem_classes:n[5],visible:n[6],scale:n[10],$$slots:{default:[eb]},$$scope:{ctx:n}}}),{c(){Ot(e.$$.fragment)},m(i,s){Pt(e,i,s),t=!0},p(i,[s]){const r={};s&16&&(r.elem_id=i[4]),s&32&&(r.elem_classes=i[5]),s&64&&(r.visible=i[6]),s&1024&&(r.scale=i[10]),s&134031&&(r.$$scope={dirty:s,ctx:i}),e.$set(r)},i(i){t||(Xe(e.$$.fragment,i),t=!0)},o(i){Ze(e.$$.fragment,i),t=!1},d(i){Bt(e,i)}}}function 
ib(n,e,t){let{gradio:i}=e,{value:s=""}=e,{value_is_output:r=!1}=e,{language:o=""}=e,{lines:l=5}=e,{target:a}=e,{elem_id:h=""}=e,{elem_classes:c=[]}=e,{visible:f=!0}=e,{label:u=i.i18n("code.code")}=e,{show_label:d=!0}=e,{loading_status:p}=e,{scale:g=null}=e,{interactive:y}=e,b=a.classList.contains("dark");function v(){i.dispatch("change",s),r||i.dispatch("input")}Yy(()=>{t(13,r=!1)});function S(k){s=k,t(0,s)}return n.$$set=k=>{"gradio"in k&&t(1,i=k.gradio),"value"in k&&t(0,s=k.value),"value_is_output"in k&&t(13,r=k.value_is_output),"language"in k&&t(2,o=k.language),"lines"in k&&t(3,l=k.lines),"target"in k&&t(14,a=k.target),"elem_id"in k&&t(4,h=k.elem_id),"elem_classes"in k&&t(5,c=k.elem_classes),"visible"in k&&t(6,f=k.visible),"label"in k&&t(7,u=k.label),"show_label"in k&&t(8,d=k.show_label),"loading_status"in k&&t(9,p=k.loading_status),"scale"in k&&t(10,g=k.scale),"interactive"in k&&t(11,y=k.interactive)},n.$$.update=()=>{n.$$.dirty&1&&v()},[s,i,o,l,h,c,f,u,d,p,g,y,b,r,a,S]}class nb extends Vy{constructor(e){super(),Gy(this,e,ib,tb,Jy,{gradio:1,value:0,value_is_output:13,language:2,lines:3,target:14,elem_id:4,elem_classes:5,visible:6,label:7,show_label:8,loading_status:9,scale:10,interactive:11})}}const Sb=Object.freeze(Object.defineProperty({__proto__:null,BaseCode:Kc,BaseCopy:Zc,BaseDownload:of,BaseExample:pf,BaseWidget:lf,default:nb},Symbol.toStringTag,{value:"Module"}));export{vp as A,vb as B,Ng as C,Qd as D,w as E,Sb as F,Z as I,hr as L,Br as N,Th as P,Wt as S,W as T,ge as a,L as b,kb as c,yb as d,gb as e,Lp as f,Fe as g,ae as h,Cp as i,Ri as j,Vn as k,De as l,Ph as m,wt as n,Rp as o,pb as p,Rh as q,Xt as r,gp as s,m as t,Zp as u,O as v,wb as w,db as x,xb as y,bb as z}; -//# sourceMappingURL=Index-7b3f6002.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/_state.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/_state.py deleted file mode 100644 index 
3593430a74f21f6e0c2faf495e1627551eebfc30..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/_state.py +++ /dev/null @@ -1,367 +0,0 @@ -################################################################ -# The core state machine -################################################################ -# -# Rule 1: everything that affects the state machine and state transitions must -# live here in this file. As much as possible goes into the table-based -# representation, but for the bits that don't quite fit, the actual code and -# state must nonetheless live here. -# -# Rule 2: this file does not know about what role we're playing; it only knows -# about HTTP request/response cycles in the abstract. This ensures that we -# don't cheat and apply different rules to local and remote parties. -# -# -# Theory of operation -# =================== -# -# Possibly the simplest way to think about this is that we actually have 5 -# different state machines here. Yes, 5. These are: -# -# 1) The client state, with its complicated automaton (see the docs) -# 2) The server state, with its complicated automaton (see the docs) -# 3) The keep-alive state, with possible states {True, False} -# 4) The SWITCH_CONNECT state, with possible states {False, True} -# 5) The SWITCH_UPGRADE state, with possible states {False, True} -# -# For (3)-(5), the first state listed is the initial state. -# -# (1)-(3) are stored explicitly in member variables. The last -# two are stored implicitly in the pending_switch_proposals set as: -# (state of 4) == (_SWITCH_CONNECT in pending_switch_proposals) -# (state of 5) == (_SWITCH_UPGRADE in pending_switch_proposals) -# -# And each of these machines has two different kinds of transitions: -# -# a) Event-triggered -# b) State-triggered -# -# Event triggered is the obvious thing that you'd think it is: some event -# happens, and if it's the right event at the right time then a transition -# happens. 
But there are somewhat complicated rules for which machines can -# "see" which events. (As a rule of thumb, if a machine "sees" an event, this -# means two things: the event can affect the machine, and if the machine is -# not in a state where it expects that event then it's an error.) These rules -# are: -# -# 1) The client machine sees all h11.events objects emitted by the client. -# -# 2) The server machine sees all h11.events objects emitted by the server. -# -# It also sees the client's Request event. -# -# And sometimes, server events are annotated with a _SWITCH_* event. For -# example, we can have a (Response, _SWITCH_CONNECT) event, which is -# different from a regular Response event. -# -# 3) The keep-alive machine sees the process_keep_alive_disabled() event -# (which is derived from Request/Response events), and this event -# transitions it from True -> False, or from False -> False. There's no way -# to transition back. -# -# 4&5) The _SWITCH_* machines transition from False->True when we get a -# Request that proposes the relevant type of switch (via -# process_client_switch_proposals), and they go from True->False when we -# get a Response that has no _SWITCH_* annotation. -# -# So that's event-triggered transitions. -# -# State-triggered transitions are less standard. What they do here is couple -# the machines together. The way this works is, when certain *joint* -# configurations of states are achieved, then we automatically transition to a -# new *joint* state. So, for example, if we're ever in a joint state with -# -# client: DONE -# keep-alive: False -# -# then the client state immediately transitions to: -# -# client: MUST_CLOSE -# -# This is fundamentally different from an event-based transition, because it -# doesn't matter how we arrived at the {client: DONE, keep-alive: False} state -# -- maybe the client transitioned SEND_BODY -> DONE, or keep-alive -# transitioned True -> False. 
Either way, once this precondition is satisfied, -# this transition is immediately triggered. -# -# What if two conflicting state-based transitions get enabled at the same -# time? In practice there's only one case where this arises (client DONE -> -# MIGHT_SWITCH_PROTOCOL versus DONE -> MUST_CLOSE), and we resolve it by -# explicitly prioritizing the DONE -> MIGHT_SWITCH_PROTOCOL transition. -# -# Implementation -# -------------- -# -# The event-triggered transitions for the server and client machines are all -# stored explicitly in a table. Ditto for the state-triggered transitions that -# involve just the server and client state. -# -# The transitions for the other machines, and the state-triggered transitions -# that involve the other machines, are written out as explicit Python code. -# -# It'd be nice if there were some cleaner way to do all this. This isn't -# *too* terrible, but I feel like it could probably be better. -# -# WARNING -# ------- -# -# The script that generates the state machine diagrams for the docs knows how -# to read out the EVENT_TRIGGERED_TRANSITIONS and STATE_TRIGGERED_TRANSITIONS -# tables. But it can't automatically read the transitions that are written -# directly in Python code. So if you touch those, you need to also update the -# script to keep it in sync! -from typing import cast, Dict, Optional, Set, Tuple, Type, Union - -from ._events import * -from ._util import LocalProtocolError, Sentinel - -# Everything in __all__ gets re-exported as part of the h11 public API. 
-__all__ = [ - "CLIENT", - "SERVER", - "IDLE", - "SEND_RESPONSE", - "SEND_BODY", - "DONE", - "MUST_CLOSE", - "CLOSED", - "MIGHT_SWITCH_PROTOCOL", - "SWITCHED_PROTOCOL", - "ERROR", -] - - -class CLIENT(Sentinel, metaclass=Sentinel): - pass - - -class SERVER(Sentinel, metaclass=Sentinel): - pass - - -# States -class IDLE(Sentinel, metaclass=Sentinel): - pass - - -class SEND_RESPONSE(Sentinel, metaclass=Sentinel): - pass - - -class SEND_BODY(Sentinel, metaclass=Sentinel): - pass - - -class DONE(Sentinel, metaclass=Sentinel): - pass - - -class MUST_CLOSE(Sentinel, metaclass=Sentinel): - pass - - -class CLOSED(Sentinel, metaclass=Sentinel): - pass - - -class ERROR(Sentinel, metaclass=Sentinel): - pass - - -# Switch types -class MIGHT_SWITCH_PROTOCOL(Sentinel, metaclass=Sentinel): - pass - - -class SWITCHED_PROTOCOL(Sentinel, metaclass=Sentinel): - pass - - -class _SWITCH_UPGRADE(Sentinel, metaclass=Sentinel): - pass - - -class _SWITCH_CONNECT(Sentinel, metaclass=Sentinel): - pass - - -EventTransitionType = Dict[ - Type[Sentinel], - Dict[ - Type[Sentinel], - Dict[Union[Type[Event], Tuple[Type[Event], Type[Sentinel]]], Type[Sentinel]], - ], -] - -EVENT_TRIGGERED_TRANSITIONS: EventTransitionType = { - CLIENT: { - IDLE: {Request: SEND_BODY, ConnectionClosed: CLOSED}, - SEND_BODY: {Data: SEND_BODY, EndOfMessage: DONE}, - DONE: {ConnectionClosed: CLOSED}, - MUST_CLOSE: {ConnectionClosed: CLOSED}, - CLOSED: {ConnectionClosed: CLOSED}, - MIGHT_SWITCH_PROTOCOL: {}, - SWITCHED_PROTOCOL: {}, - ERROR: {}, - }, - SERVER: { - IDLE: { - ConnectionClosed: CLOSED, - Response: SEND_BODY, - # Special case: server sees client Request events, in this form - (Request, CLIENT): SEND_RESPONSE, - }, - SEND_RESPONSE: { - InformationalResponse: SEND_RESPONSE, - Response: SEND_BODY, - (InformationalResponse, _SWITCH_UPGRADE): SWITCHED_PROTOCOL, - (Response, _SWITCH_CONNECT): SWITCHED_PROTOCOL, - }, - SEND_BODY: {Data: SEND_BODY, EndOfMessage: DONE}, - DONE: {ConnectionClosed: CLOSED}, - 
MUST_CLOSE: {ConnectionClosed: CLOSED}, - CLOSED: {ConnectionClosed: CLOSED}, - SWITCHED_PROTOCOL: {}, - ERROR: {}, - }, -} - -StateTransitionType = Dict[ - Tuple[Type[Sentinel], Type[Sentinel]], Dict[Type[Sentinel], Type[Sentinel]] -] - -# NB: there are also some special-case state-triggered transitions hard-coded -# into _fire_state_triggered_transitions below. -STATE_TRIGGERED_TRANSITIONS: StateTransitionType = { - # (Client state, Server state) -> new states - # Protocol negotiation - (MIGHT_SWITCH_PROTOCOL, SWITCHED_PROTOCOL): {CLIENT: SWITCHED_PROTOCOL}, - # Socket shutdown - (CLOSED, DONE): {SERVER: MUST_CLOSE}, - (CLOSED, IDLE): {SERVER: MUST_CLOSE}, - (ERROR, DONE): {SERVER: MUST_CLOSE}, - (DONE, CLOSED): {CLIENT: MUST_CLOSE}, - (IDLE, CLOSED): {CLIENT: MUST_CLOSE}, - (DONE, ERROR): {CLIENT: MUST_CLOSE}, -} - - -class ConnectionState: - def __init__(self) -> None: - # Extra bits of state that don't quite fit into the state model. - - # If this is False then it enables the automatic DONE -> MUST_CLOSE - # transition. Don't set this directly; call .keep_alive_disabled() - self.keep_alive = True - - # This is a subset of {UPGRADE, CONNECT}, containing the proposals - # made by the client for switching protocols. 
- self.pending_switch_proposals: Set[Type[Sentinel]] = set() - - self.states: Dict[Type[Sentinel], Type[Sentinel]] = {CLIENT: IDLE, SERVER: IDLE} - - def process_error(self, role: Type[Sentinel]) -> None: - self.states[role] = ERROR - self._fire_state_triggered_transitions() - - def process_keep_alive_disabled(self) -> None: - self.keep_alive = False - self._fire_state_triggered_transitions() - - def process_client_switch_proposal(self, switch_event: Type[Sentinel]) -> None: - self.pending_switch_proposals.add(switch_event) - self._fire_state_triggered_transitions() - - def process_event( - self, - role: Type[Sentinel], - event_type: Type[Event], - server_switch_event: Optional[Type[Sentinel]] = None, - ) -> None: - _event_type: Union[Type[Event], Tuple[Type[Event], Type[Sentinel]]] = event_type - if server_switch_event is not None: - assert role is SERVER - if server_switch_event not in self.pending_switch_proposals: - raise LocalProtocolError( - "Received server {} event without a pending proposal".format( - server_switch_event - ) - ) - _event_type = (event_type, server_switch_event) - if server_switch_event is None and _event_type is Response: - self.pending_switch_proposals = set() - self._fire_event_triggered_transitions(role, _event_type) - # Special case: the server state does get to see Request - # events. 
- if _event_type is Request: - assert role is CLIENT - self._fire_event_triggered_transitions(SERVER, (Request, CLIENT)) - self._fire_state_triggered_transitions() - - def _fire_event_triggered_transitions( - self, - role: Type[Sentinel], - event_type: Union[Type[Event], Tuple[Type[Event], Type[Sentinel]]], - ) -> None: - state = self.states[role] - try: - new_state = EVENT_TRIGGERED_TRANSITIONS[role][state][event_type] - except KeyError: - event_type = cast(Type[Event], event_type) - raise LocalProtocolError( - "can't handle event type {} when role={} and state={}".format( - event_type.__name__, role, self.states[role] - ) - ) from None - self.states[role] = new_state - - def _fire_state_triggered_transitions(self) -> None: - # We apply these rules repeatedly until converging on a fixed point - while True: - start_states = dict(self.states) - - # It could happen that both these special-case transitions are - # enabled at the same time: - # - # DONE -> MIGHT_SWITCH_PROTOCOL - # DONE -> MUST_CLOSE - # - # For example, this will always be true of a HTTP/1.0 client - # requesting CONNECT. If this happens, the protocol switch takes - # priority. From there the client will either go to - # SWITCHED_PROTOCOL, in which case it's none of our business when - # they close the connection, or else the server will deny the - # request, in which case the client will go back to DONE and then - # from there to MUST_CLOSE. 
- if self.pending_switch_proposals: - if self.states[CLIENT] is DONE: - self.states[CLIENT] = MIGHT_SWITCH_PROTOCOL - - if not self.pending_switch_proposals: - if self.states[CLIENT] is MIGHT_SWITCH_PROTOCOL: - self.states[CLIENT] = DONE - - if not self.keep_alive: - for role in (CLIENT, SERVER): - if self.states[role] is DONE: - self.states[role] = MUST_CLOSE - - # Tabular state-triggered transitions - joint_state = (self.states[CLIENT], self.states[SERVER]) - changes = STATE_TRIGGERED_TRANSITIONS.get(joint_state, {}) - self.states.update(changes) - - if self.states == start_states: - # Fixed point reached - return - - def start_next_cycle(self) -> None: - if self.states != {CLIENT: DONE, SERVER: DONE}: - raise LocalProtocolError( - "not in a reusable state. self.states={}".format(self.states) - ) - # Can't reach DONE/DONE with any of these active, but still, let's be - # sure. - assert self.keep_alive - assert not self.pending_switch_proposals - self.states = {CLIENT: IDLE, SERVER: IDLE} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/_iotools.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/_iotools.py deleted file mode 100644 index 534d1b3eea636d4f68151531945ea9132d304872..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/_iotools.py +++ /dev/null @@ -1,897 +0,0 @@ -"""A collection of functions designed to help I/O with ascii files. - -""" -__docformat__ = "restructuredtext en" - -import numpy as np -import numpy.core.numeric as nx -from numpy.compat import asbytes, asunicode - - -def _decode_line(line, encoding=None): - """Decode bytes from binary input streams. - - Defaults to decoding from 'latin1'. That differs from the behavior of - np.compat.asunicode that decodes from 'ascii'. - - Parameters - ---------- - line : str or bytes - Line to be decoded. - encoding : str - Encoding used to decode `line`. 
- - Returns - ------- - decoded_line : str - - """ - if type(line) is bytes: - if encoding is None: - encoding = "latin1" - line = line.decode(encoding) - - return line - - -def _is_string_like(obj): - """ - Check whether obj behaves like a string. - """ - try: - obj + '' - except (TypeError, ValueError): - return False - return True - - -def _is_bytes_like(obj): - """ - Check whether obj behaves like a bytes object. - """ - try: - obj + b'' - except (TypeError, ValueError): - return False - return True - - -def has_nested_fields(ndtype): - """ - Returns whether one or several fields of a dtype are nested. - - Parameters - ---------- - ndtype : dtype - Data-type of a structured array. - - Raises - ------ - AttributeError - If `ndtype` does not have a `names` attribute. - - Examples - -------- - >>> dt = np.dtype([('name', 'S4'), ('x', float), ('y', float)]) - >>> np.lib._iotools.has_nested_fields(dt) - False - - """ - for name in ndtype.names or (): - if ndtype[name].names is not None: - return True - return False - - -def flatten_dtype(ndtype, flatten_base=False): - """ - Unpack a structured data-type by collapsing nested fields and/or fields - with a shape. - - Note that the field names are lost. - - Parameters - ---------- - ndtype : dtype - The datatype to collapse - flatten_base : bool, optional - If True, transform a field with a shape into several fields. Default is - False. - - Examples - -------- - >>> dt = np.dtype([('name', 'S4'), ('x', float), ('y', float), - ... 
('block', int, (2, 3))]) - >>> np.lib._iotools.flatten_dtype(dt) - [dtype('S4'), dtype('float64'), dtype('float64'), dtype('int64')] - >>> np.lib._iotools.flatten_dtype(dt, flatten_base=True) - [dtype('S4'), - dtype('float64'), - dtype('float64'), - dtype('int64'), - dtype('int64'), - dtype('int64'), - dtype('int64'), - dtype('int64'), - dtype('int64')] - - """ - names = ndtype.names - if names is None: - if flatten_base: - return [ndtype.base] * int(np.prod(ndtype.shape)) - return [ndtype.base] - else: - types = [] - for field in names: - info = ndtype.fields[field] - flat_dt = flatten_dtype(info[0], flatten_base) - types.extend(flat_dt) - return types - - -class LineSplitter: - """ - Object to split a string at a given delimiter or at given places. - - Parameters - ---------- - delimiter : str, int, or sequence of ints, optional - If a string, character used to delimit consecutive fields. - If an integer or a sequence of integers, width(s) of each field. - comments : str, optional - Character used to mark the beginning of a comment. Default is '#'. - autostrip : bool, optional - Whether to strip each individual field. Default is True. - - """ - - def autostrip(self, method): - """ - Wrapper to strip each member of the output of `method`. - - Parameters - ---------- - method : function - Function that takes a single argument and returns a sequence of - strings. - - Returns - ------- - wrapped : function - The result of wrapping `method`. `wrapped` takes a single input - argument and returns a list of strings that are stripped of - white-space. 
- - """ - return lambda input: [_.strip() for _ in method(input)] - - def __init__(self, delimiter=None, comments='#', autostrip=True, - encoding=None): - delimiter = _decode_line(delimiter) - comments = _decode_line(comments) - - self.comments = comments - - # Delimiter is a character - if (delimiter is None) or isinstance(delimiter, str): - delimiter = delimiter or None - _handyman = self._delimited_splitter - # Delimiter is a list of field widths - elif hasattr(delimiter, '__iter__'): - _handyman = self._variablewidth_splitter - idx = np.cumsum([0] + list(delimiter)) - delimiter = [slice(i, j) for (i, j) in zip(idx[:-1], idx[1:])] - # Delimiter is a single integer - elif int(delimiter): - (_handyman, delimiter) = ( - self._fixedwidth_splitter, int(delimiter)) - else: - (_handyman, delimiter) = (self._delimited_splitter, None) - self.delimiter = delimiter - if autostrip: - self._handyman = self.autostrip(_handyman) - else: - self._handyman = _handyman - self.encoding = encoding - - def _delimited_splitter(self, line): - """Chop off comments, strip, and split at delimiter. """ - if self.comments is not None: - line = line.split(self.comments)[0] - line = line.strip(" \r\n") - if not line: - return [] - return line.split(self.delimiter) - - def _fixedwidth_splitter(self, line): - if self.comments is not None: - line = line.split(self.comments)[0] - line = line.strip("\r\n") - if not line: - return [] - fixed = self.delimiter - slices = [slice(i, i + fixed) for i in range(0, len(line), fixed)] - return [line[s] for s in slices] - - def _variablewidth_splitter(self, line): - if self.comments is not None: - line = line.split(self.comments)[0] - if not line: - return [] - slices = self.delimiter - return [line[s] for s in slices] - - def __call__(self, line): - return self._handyman(_decode_line(line, self.encoding)) - - -class NameValidator: - """ - Object to validate a list of strings to use as field names. 
- - The strings are stripped of any non alphanumeric character, and spaces - are replaced by '_'. During instantiation, the user can define a list - of names to exclude, as well as a list of invalid characters. Names in - the exclusion list are appended a '_' character. - - Once an instance has been created, it can be called with a list of - names, and a list of valid names will be created. The `__call__` - method accepts an optional keyword "default" that sets the default name - in case of ambiguity. By default this is 'f', so that names will - default to `f0`, `f1`, etc. - - Parameters - ---------- - excludelist : sequence, optional - A list of names to exclude. This list is appended to the default - list ['return', 'file', 'print']. Excluded names are appended an - underscore: for example, `file` becomes `file_` if supplied. - deletechars : str, optional - A string combining invalid characters that must be deleted from the - names. - case_sensitive : {True, False, 'upper', 'lower'}, optional - * If True, field names are case-sensitive. - * If False or 'upper', field names are converted to upper case. - * If 'lower', field names are converted to lower case. - - The default value is True. - replace_space : '_', optional - Character(s) used in replacement of white spaces. - - Notes - ----- - Calling an instance of `NameValidator` is the same as calling its - method `validate`. - - Examples - -------- - >>> validator = np.lib._iotools.NameValidator() - >>> validator(['file', 'field2', 'with space', 'CaSe']) - ('file_', 'field2', 'with_space', 'CaSe') - - >>> validator = np.lib._iotools.NameValidator(excludelist=['excl'], - ... deletechars='q', - ... 
case_sensitive=False) - >>> validator(['excl', 'field2', 'no_q', 'with space', 'CaSe']) - ('EXCL', 'FIELD2', 'NO_Q', 'WITH_SPACE', 'CASE') - - """ - - defaultexcludelist = ['return', 'file', 'print'] - defaultdeletechars = set(r"""~!@#$%^&*()-=+~\|]}[{';: /?.>,<""") - - def __init__(self, excludelist=None, deletechars=None, - case_sensitive=None, replace_space='_'): - # Process the exclusion list .. - if excludelist is None: - excludelist = [] - excludelist.extend(self.defaultexcludelist) - self.excludelist = excludelist - # Process the list of characters to delete - if deletechars is None: - delete = self.defaultdeletechars - else: - delete = set(deletechars) - delete.add('"') - self.deletechars = delete - # Process the case option ..... - if (case_sensitive is None) or (case_sensitive is True): - self.case_converter = lambda x: x - elif (case_sensitive is False) or case_sensitive.startswith('u'): - self.case_converter = lambda x: x.upper() - elif case_sensitive.startswith('l'): - self.case_converter = lambda x: x.lower() - else: - msg = 'unrecognized case_sensitive value %s.' % case_sensitive - raise ValueError(msg) - - self.replace_space = replace_space - - def validate(self, names, defaultfmt="f%i", nbfields=None): - """ - Validate a list of strings as field names for a structured array. - - Parameters - ---------- - names : sequence of str - Strings to be validated. - defaultfmt : str, optional - Default format string, used if validating a given string - reduces its length to zero. - nbfields : integer, optional - Final number of validated names, used to expand or shrink the - initial list of names. - - Returns - ------- - validatednames : list of str - The list of validated field names. - - Notes - ----- - A `NameValidator` instance can be called directly, which is the - same as calling `validate`. For examples, see `NameValidator`. - - """ - # Initial checks .............. 
- if (names is None): - if (nbfields is None): - return None - names = [] - if isinstance(names, str): - names = [names, ] - if nbfields is not None: - nbnames = len(names) - if (nbnames < nbfields): - names = list(names) + [''] * (nbfields - nbnames) - elif (nbnames > nbfields): - names = names[:nbfields] - # Set some shortcuts ........... - deletechars = self.deletechars - excludelist = self.excludelist - case_converter = self.case_converter - replace_space = self.replace_space - # Initializes some variables ... - validatednames = [] - seen = dict() - nbempty = 0 - - for item in names: - item = case_converter(item).strip() - if replace_space: - item = item.replace(' ', replace_space) - item = ''.join([c for c in item if c not in deletechars]) - if item == '': - item = defaultfmt % nbempty - while item in names: - nbempty += 1 - item = defaultfmt % nbempty - nbempty += 1 - elif item in excludelist: - item += '_' - cnt = seen.get(item, 0) - if cnt > 0: - validatednames.append(item + '_%d' % cnt) - else: - validatednames.append(item) - seen[item] = cnt + 1 - return tuple(validatednames) - - def __call__(self, names, defaultfmt="f%i", nbfields=None): - return self.validate(names, defaultfmt=defaultfmt, nbfields=nbfields) - - -def str2bool(value): - """ - Tries to transform a string supposed to represent a boolean to a boolean. - - Parameters - ---------- - value : str - The string that is transformed to a boolean. - - Returns - ------- - boolval : bool - The boolean representation of `value`. - - Raises - ------ - ValueError - If the string is not 'True' or 'False' (case independent) - - Examples - -------- - >>> np.lib._iotools.str2bool('TRUE') - True - >>> np.lib._iotools.str2bool('false') - False - - """ - value = value.upper() - if value == 'TRUE': - return True - elif value == 'FALSE': - return False - else: - raise ValueError("Invalid boolean") - - -class ConverterError(Exception): - """ - Exception raised when an error occurs in a converter for string values. 
- - """ - pass - - -class ConverterLockError(ConverterError): - """ - Exception raised when an attempt is made to upgrade a locked converter. - - """ - pass - - -class ConversionWarning(UserWarning): - """ - Warning issued when a string converter has a problem. - - Notes - ----- - In `genfromtxt` a `ConversionWarning` is issued if raising exceptions - is explicitly suppressed with the "invalid_raise" keyword. - - """ - pass - - -class StringConverter: - """ - Factory class for function transforming a string into another object - (int, float). - - After initialization, an instance can be called to transform a string - into another object. If the string is recognized as representing a - missing value, a default value is returned. - - Attributes - ---------- - func : function - Function used for the conversion. - default : any - Default value to return when the input corresponds to a missing - value. - type : type - Type of the output. - _status : int - Integer representing the order of the conversion. - _mapper : sequence of tuples - Sequence of tuples (dtype, function, default value) to evaluate in - order. - _locked : bool - Holds `locked` parameter. - - Parameters - ---------- - dtype_or_func : {None, dtype, function}, optional - If a `dtype`, specifies the input data type, used to define a basic - function and a default value for missing data. For example, when - `dtype` is float, the `func` attribute is set to `float` and the - default value to `np.nan`. If a function, this function is used to - convert a string to another object. In this case, it is recommended - to give an associated default value as input. - default : any, optional - Value to return by default, that is, when the string to be - converted is flagged as missing. If not given, `StringConverter` - tries to supply a reasonable default value. - missing_values : {None, sequence of str}, optional - ``None`` or sequence of strings indicating a missing value. 
If ``None`` - then missing values are indicated by empty entries. The default is - ``None``. - locked : bool, optional - Whether the StringConverter should be locked to prevent automatic - upgrade or not. Default is False. - - """ - _mapper = [(nx.bool_, str2bool, False), - (nx.int_, int, -1),] - - # On 32-bit systems, we need to make sure that we explicitly include - # nx.int64 since nx.int_ is nx.int32. - if nx.dtype(nx.int_).itemsize < nx.dtype(nx.int64).itemsize: - _mapper.append((nx.int64, int, -1)) - - _mapper.extend([(nx.float64, float, nx.nan), - (nx.complex128, complex, nx.nan + 0j), - (nx.longdouble, nx.longdouble, nx.nan), - # If a non-default dtype is passed, fall back to generic - # ones (should only be used for the converter) - (nx.integer, int, -1), - (nx.floating, float, nx.nan), - (nx.complexfloating, complex, nx.nan + 0j), - # Last, try with the string types (must be last, because - # `_mapper[-1]` is used as default in some cases) - (nx.str_, asunicode, '???'), - (nx.bytes_, asbytes, '???'), - ]) - - @classmethod - def _getdtype(cls, val): - """Returns the dtype of the input variable.""" - return np.array(val).dtype - - @classmethod - def _getsubdtype(cls, val): - """Returns the type of the dtype of the input variable.""" - return np.array(val).dtype.type - - @classmethod - def _dtypeortype(cls, dtype): - """Returns dtype for datetime64 and type of dtype otherwise.""" - - # This is a bit annoying. We want to return the "general" type in most - # cases (ie. "string" rather than "S10"), but we want to return the - # specific type for datetime64 (ie. "datetime64[us]" rather than - # "datetime64"). - if dtype.type == np.datetime64: - return dtype - return dtype.type - - @classmethod - def upgrade_mapper(cls, func, default=None): - """ - Upgrade the mapper of a StringConverter by adding a new function and - its corresponding default.
 - - The input function (or sequence of functions) and its associated - default value (if any) is inserted in penultimate position of the - mapper. The corresponding type is estimated from the dtype of the - default value. - - Parameters - ---------- - func : var - Function, or sequence of functions - - Examples - -------- - >>> import dateutil.parser - >>> import datetime - >>> dateparser = dateutil.parser.parse - >>> defaultdate = datetime.date(2000, 1, 1) - >>> StringConverter.upgrade_mapper(dateparser, default=defaultdate) - """ - # Func is a single function - if hasattr(func, '__call__'): - cls._mapper.insert(-1, (cls._getsubdtype(default), func, default)) - return - elif hasattr(func, '__iter__'): - if isinstance(func[0], (tuple, list)): - for _ in func: - cls._mapper.insert(-1, _) - return - if default is None: - default = [None] * len(func) - else: - default = list(default) - default.extend([None] * (len(func) - len(default))) - for fct, dft in zip(func, default): - cls._mapper.insert(-1, (cls._getsubdtype(dft), fct, dft)) - - @classmethod - def _find_map_entry(cls, dtype): - # if a converter for the specific dtype is available use that - for i, (deftype, func, default_def) in enumerate(cls._mapper): - if dtype.type == deftype: - return i, (deftype, func, default_def) - - # otherwise find an inexact match - for i, (deftype, func, default_def) in enumerate(cls._mapper): - if np.issubdtype(dtype.type, deftype): - return i, (deftype, func, default_def) - - raise LookupError - - def __init__(self, dtype_or_func=None, default=None, missing_values=None, - locked=False): - # Defines a lock for upgrade - self._locked = bool(locked) - # No input dtype: minimal initialization - if dtype_or_func is None: - self.func = str2bool - self._status = 0 - self.default = default or False - dtype = np.dtype('bool') - else: - # Is the input a np.dtype ?
- try: - self.func = None - dtype = np.dtype(dtype_or_func) - except TypeError: - # dtype_or_func must be a function, then - if not hasattr(dtype_or_func, '__call__'): - errmsg = ("The input argument `dtype` is neither a" - " function nor a dtype (got '%s' instead)") - raise TypeError(errmsg % type(dtype_or_func)) - # Set the function - self.func = dtype_or_func - # If we don't have a default, try to guess it or set it to - # None - if default is None: - try: - default = self.func('0') - except ValueError: - default = None - dtype = self._getdtype(default) - - # find the best match in our mapper - try: - self._status, (_, func, default_def) = self._find_map_entry(dtype) - except LookupError: - # no match - self.default = default - _, func, _ = self._mapper[-1] - self._status = 0 - else: - # use the found default only if we did not already have one - if default is None: - self.default = default_def - else: - self.default = default - - # If the input was a dtype, set the function to the last we saw - if self.func is None: - self.func = func - - # If the status is 1 (int), change the function to - # something more robust. - if self.func == self._mapper[1][1]: - if issubclass(dtype.type, np.uint64): - self.func = np.uint64 - elif issubclass(dtype.type, np.int64): - self.func = np.int64 - else: - self.func = lambda x: int(float(x)) - # Store the list of strings corresponding to missing values. 
- if missing_values is None: - self.missing_values = {''} - else: - if isinstance(missing_values, str): - missing_values = missing_values.split(",") - self.missing_values = set(list(missing_values) + ['']) - - self._callingfunction = self._strict_call - self.type = self._dtypeortype(dtype) - self._checked = False - self._initial_default = default - - def _loose_call(self, value): - try: - return self.func(value) - except ValueError: - return self.default - - def _strict_call(self, value): - try: - - # We check if we can convert the value using the current function - new_value = self.func(value) - - # In addition to having to check whether func can convert the - # value, we also have to make sure that we don't get overflow - # errors for integers. - if self.func is int: - try: - np.array(value, dtype=self.type) - except OverflowError: - raise ValueError - - # We're still here so we can now return the new value - return new_value - - except ValueError: - if value.strip() in self.missing_values: - if not self._status: - self._checked = False - return self.default - raise ValueError("Cannot convert string '%s'" % value) - - def __call__(self, value): - return self._callingfunction(value) - - def _do_upgrade(self): - # Raise an exception if we locked the converter... - if self._locked: - errmsg = "Converter is locked and cannot be upgraded" - raise ConverterLockError(errmsg) - _statusmax = len(self._mapper) - # Complains if we try to upgrade by the maximum - _status = self._status - if _status == _statusmax: - errmsg = "Could not find a valid conversion function" - raise ConverterError(errmsg) - elif _status < _statusmax - 1: - _status += 1 - self.type, self.func, default = self._mapper[_status] - self._status = _status - if self._initial_default is not None: - self.default = self._initial_default - else: - self.default = default - - def upgrade(self, value): - """ - Find the best converter for a given string, and return the result. 
- - The supplied string `value` is converted by testing different - converters in order. First the `func` method of the - `StringConverter` instance is tried, if this fails other available - converters are tried. The order in which these other converters - are tried is determined by the `_status` attribute of the instance. - - Parameters - ---------- - value : str - The string to convert. - - Returns - ------- - out : any - The result of converting `value` with the appropriate converter. - - """ - self._checked = True - try: - return self._strict_call(value) - except ValueError: - self._do_upgrade() - return self.upgrade(value) - - def iterupgrade(self, value): - self._checked = True - if not hasattr(value, '__iter__'): - value = (value,) - _strict_call = self._strict_call - try: - for _m in value: - _strict_call(_m) - except ValueError: - self._do_upgrade() - self.iterupgrade(value) - - def update(self, func, default=None, testing_value=None, - missing_values='', locked=False): - """ - Set StringConverter attributes directly. - - Parameters - ---------- - func : function - Conversion function. - default : any, optional - Value to return by default, that is, when the string to be - converted is flagged as missing. If not given, - `StringConverter` tries to supply a reasonable default value. - testing_value : str, optional - A string representing a standard input value of the converter. - This string is used to help defining a reasonable default - value. - missing_values : {sequence of str, None}, optional - Sequence of strings indicating a missing value. If ``None``, then - the existing `missing_values` are cleared. The default is `''`. - locked : bool, optional - Whether the StringConverter should be locked to prevent - automatic upgrade or not. Default is False. - - Notes - ----- - `update` takes the same parameters as the constructor of - `StringConverter`, except that `func` does not accept a `dtype` - whereas `dtype_or_func` in the constructor does. 
- - """ - self.func = func - self._locked = locked - - # Don't reset the default to None if we can avoid it - if default is not None: - self.default = default - self.type = self._dtypeortype(self._getdtype(default)) - else: - try: - tester = func(testing_value or '1') - except (TypeError, ValueError): - tester = None - self.type = self._dtypeortype(self._getdtype(tester)) - - # Add the missing values to the existing set or clear it. - if missing_values is None: - # Clear all missing values even though the ctor initializes it to - # set(['']) when the argument is None. - self.missing_values = set() - else: - if not np.iterable(missing_values): - missing_values = [missing_values] - if not all(isinstance(v, str) for v in missing_values): - raise TypeError("missing_values must be strings or unicode") - self.missing_values.update(missing_values) - - -def easy_dtype(ndtype, names=None, defaultfmt="f%i", **validationargs): - """ - Convenience function to create a `np.dtype` object. - - The function processes the input `dtype` and matches it with the given - names. - - Parameters - ---------- - ndtype : var - Definition of the dtype. Can be any string or dictionary recognized - by the `np.dtype` function, or a sequence of types. - names : str or sequence, optional - Sequence of strings to use as field names for a structured dtype. - For convenience, `names` can be a string of a comma-separated list - of names. - defaultfmt : str, optional - Format string used to define missing names, such as ``"f%i"`` - (default) or ``"fields_%02i"``. - validationargs : optional - A series of optional arguments used to initialize a - `NameValidator`. 
 - - Examples - -------- - >>> np.lib._iotools.easy_dtype(float) - dtype('float64') - >>> np.lib._iotools.easy_dtype("i4, f8") - dtype([('f0', '<i4'), ('f1', '<f8')]) - >>> np.lib._iotools.easy_dtype("i4, f8", defaultfmt="field_%03i") - dtype([('field_000', '<i4'), ('field_001', '<f8')]) - >>> np.lib._iotools.easy_dtype((int, float, float), names="a,b,c") - dtype([('a', '<i8'), ('b', '<f8'), ('c', '<f8')]) - >>> np.lib._iotools.easy_dtype(float, names="a,b,c") - dtype([('a', '<f8'), ('b', '<f8'), ('c', '<f8')]) - """ -def quantile_compat( - values: ArrayLike, qs: npt.NDArray[np.float64], interpolation: str -) -> ArrayLike: - """ - Compute the quantiles of the given values for each quantile in `qs`. - - Parameters - ---------- - values : np.ndarray or ExtensionArray - qs : np.ndarray[float64] - interpolation : str - - Returns - ------- - np.ndarray or ExtensionArray - """ - if isinstance(values, np.ndarray): - fill_value = na_value_for_dtype(values.dtype, compat=False) - mask = isna(values) - return quantile_with_mask(values, mask, fill_value, qs, interpolation) - else: - return values._quantile(qs, interpolation) - - -def quantile_with_mask( - values: np.ndarray, - mask: npt.NDArray[np.bool_], - fill_value, - qs: npt.NDArray[np.float64], - interpolation: str, -) -> np.ndarray: - """ - Compute the quantiles of the given values for each quantile in `qs`. - - Parameters - ---------- - values : np.ndarray - For ExtensionArray, this is _values_for_factorize()[0] - mask : np.ndarray[bool] - mask = isna(values) - For ExtensionArray, this is computed before calling _value_for_factorize - fill_value : Scalar - The value to interpret fill NA entries with - For ExtensionArray, this is _values_for_factorize()[1] - qs : np.ndarray[float64] - interpolation : str - Type of interpolation - - Returns - ------- - np.ndarray - - Notes - ----- - Assumes values is already 2D. For ExtensionArray this means np.atleast_2d - has been called on _values_for_factorize()[0] - - Quantile is computed along axis=1.
- """ - assert values.shape == mask.shape - if values.ndim == 1: - # unsqueeze, operate, re-squeeze - values = np.atleast_2d(values) - mask = np.atleast_2d(mask) - res_values = quantile_with_mask(values, mask, fill_value, qs, interpolation) - return res_values[0] - - assert values.ndim == 2 - - is_empty = values.shape[1] == 0 - - if is_empty: - # create the array of na_values - # 2d len(values) * len(qs) - flat = np.array([fill_value] * len(qs)) - result = np.repeat(flat, len(values)).reshape(len(values), len(qs)) - else: - result = _nanpercentile( - values, - qs * 100.0, - na_value=fill_value, - mask=mask, - interpolation=interpolation, - ) - - result = np.array(result, copy=False) - result = result.T - - return result - - -def _nanpercentile_1d( - values: np.ndarray, - mask: npt.NDArray[np.bool_], - qs: npt.NDArray[np.float64], - na_value: Scalar, - interpolation: str, -) -> Scalar | np.ndarray: - """ - Wrapper for np.percentile that skips missing values, specialized to - 1-dimensional case. 
- - Parameters - ---------- - values : array over which to find quantiles - mask : ndarray[bool] - locations in values that should be considered missing - qs : np.ndarray[float64] of quantile indices to find - na_value : scalar - value to return for empty or all-null values - interpolation : str - - Returns - ------- - quantiles : scalar or array - """ - # mask is Union[ExtensionArray, ndarray] - values = values[~mask] - - if len(values) == 0: - # Can't pass dtype=values.dtype here bc we might have na_value=np.nan - # with values.dtype=int64 see test_quantile_empty - # equiv: 'np.array([na_value] * len(qs))' but much faster - return np.full(len(qs), na_value) - - return np.percentile( - values, - qs, - # error: No overload variant of "percentile" matches argument - # types "ndarray[Any, Any]", "ndarray[Any, dtype[floating[_64Bit]]]" - # , "Dict[str, str]" [call-overload] - method=interpolation, # type: ignore[call-overload] - ) - - -def _nanpercentile( - values: np.ndarray, - qs: npt.NDArray[np.float64], - *, - na_value, - mask: npt.NDArray[np.bool_], - interpolation: str, -): - """ - Wrapper for np.percentile that skips missing values. 
 - - Parameters - ---------- - values : np.ndarray[ndim=2] over which to find quantiles - qs : np.ndarray[float64] of quantile indices to find - na_value : scalar - value to return for empty or all-null values - mask : np.ndarray[bool] - locations in values that should be considered missing - interpolation : str - - Returns - ------- - quantiles : scalar or array - """ - - if values.dtype.kind in "mM": - # need to cast to integer to avoid rounding errors in numpy - result = _nanpercentile( - values.view("i8"), - qs=qs, - na_value=na_value.view("i8"), - mask=mask, - interpolation=interpolation, - ) - - # Note: we have to do `astype` and not view because in general we - # have float result at this point, not i8 - return result.astype(values.dtype) - - if mask.any(): - # Caller is responsible for ensuring mask shape matches - assert mask.shape == values.shape - result = [ - _nanpercentile_1d(val, m, qs, na_value, interpolation=interpolation) - for (val, m) in zip(list(values), list(mask)) - ] - if values.dtype.kind == "f": - # preserve itemsize - result = np.array(result, dtype=values.dtype, copy=False).T - else: - result = np.array(result, copy=False).T - if ( - result.dtype != values.dtype - and not mask.all() - and (result == result.astype(values.dtype, copy=False)).all() - ): - # mask.all() will never get cast back to int - # e.g. values is integer dtype and result is floating dtype, - # only cast back to integer dtype if result values are all-integer.
- result = result.astype(values.dtype, copy=False) - return result - else: - return np.percentile( - values, - qs, - axis=1, - # error: No overload variant of "percentile" matches argument types - # "ndarray[Any, Any]", "ndarray[Any, dtype[floating[_64Bit]]]", - # "int", "Dict[str, str]" [call-overload] - method=interpolation, # type: ignore[call-overload] - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/methods/selectn.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/methods/selectn.py deleted file mode 100644 index 894791cb46371b6a6a3ddc0266dc463b74921924..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/methods/selectn.py +++ /dev/null @@ -1,265 +0,0 @@ -""" -Implementation of nlargest and nsmallest. -""" - -from __future__ import annotations - -from collections.abc import ( - Hashable, - Sequence, -) -from typing import ( - TYPE_CHECKING, - cast, - final, -) - -import numpy as np - -from pandas._libs import algos as libalgos - -from pandas.core.dtypes.common import ( - is_bool_dtype, - is_complex_dtype, - is_integer_dtype, - is_list_like, - is_numeric_dtype, - needs_i8_conversion, -) -from pandas.core.dtypes.dtypes import BaseMaskedDtype - -if TYPE_CHECKING: - from pandas._typing import ( - DtypeObj, - IndexLabel, - ) - - from pandas import ( - DataFrame, - Series, - ) - - -class SelectN: - def __init__(self, obj, n: int, keep: str) -> None: - self.obj = obj - self.n = n - self.keep = keep - - if self.keep not in ("first", "last", "all"): - raise ValueError('keep must be either "first", "last" or "all"') - - def compute(self, method: str) -> DataFrame | Series: - raise NotImplementedError - - @final - def nlargest(self): - return self.compute("nlargest") - - @final - def nsmallest(self): - return self.compute("nsmallest") - - @final - @staticmethod - def is_valid_dtype_n_method(dtype: DtypeObj) -> bool: - """ - 
Helper function to determine if dtype is valid for - nsmallest/nlargest methods - """ - if is_numeric_dtype(dtype): - return not is_complex_dtype(dtype) - return needs_i8_conversion(dtype) - - -class SelectNSeries(SelectN): - """ - Implement n largest/smallest for Series - - Parameters - ---------- - obj : Series - n : int - keep : {'first', 'last'}, default 'first' - - Returns - ------- - nordered : Series - """ - - def compute(self, method: str) -> Series: - from pandas.core.reshape.concat import concat - - n = self.n - dtype = self.obj.dtype - if not self.is_valid_dtype_n_method(dtype): - raise TypeError(f"Cannot use method '{method}' with dtype {dtype}") - - if n <= 0: - return self.obj[[]] - - dropped = self.obj.dropna() - nan_index = self.obj.drop(dropped.index) - - # slow method - if n >= len(self.obj): - ascending = method == "nsmallest" - return self.obj.sort_values(ascending=ascending).head(n) - - # fast method - new_dtype = dropped.dtype - - # Similar to algorithms._ensure_data - arr = dropped._values - if needs_i8_conversion(arr.dtype): - arr = arr.view("i8") - elif isinstance(arr.dtype, BaseMaskedDtype): - arr = arr._data - else: - arr = np.asarray(arr) - if arr.dtype.kind == "b": - arr = arr.view(np.uint8) - - if method == "nlargest": - arr = -arr - if is_integer_dtype(new_dtype): - # GH 21426: ensure reverse ordering at boundaries - arr -= 1 - - elif is_bool_dtype(new_dtype): - # GH 26154: ensure False is smaller than True - arr = 1 - (-arr) - - if self.keep == "last": - arr = arr[::-1] - - nbase = n - narr = len(arr) - n = min(n, narr) - - # arr passed into kth_smallest must be contiguous. 
We copy - # here because kth_smallest will modify its input - kth_val = libalgos.kth_smallest(arr.copy(order="C"), n - 1) - (ns,) = np.nonzero(arr <= kth_val) - inds = ns[arr[ns].argsort(kind="mergesort")] - - if self.keep != "all": - inds = inds[:n] - findex = nbase - else: - if len(inds) < nbase <= len(nan_index) + len(inds): - findex = len(nan_index) + len(inds) - else: - findex = len(inds) - - if self.keep == "last": - # reverse indices - inds = narr - 1 - inds - - return concat([dropped.iloc[inds], nan_index]).iloc[:findex] - - -class SelectNFrame(SelectN): - """ - Implement n largest/smallest for DataFrame - - Parameters - ---------- - obj : DataFrame - n : int - keep : {'first', 'last'}, default 'first' - columns : list or str - - Returns - ------- - nordered : DataFrame - """ - - def __init__(self, obj: DataFrame, n: int, keep: str, columns: IndexLabel) -> None: - super().__init__(obj, n, keep) - if not is_list_like(columns) or isinstance(columns, tuple): - columns = [columns] - - columns = cast(Sequence[Hashable], columns) - columns = list(columns) - self.columns = columns - - def compute(self, method: str) -> DataFrame: - from pandas.core.api import Index - - n = self.n - frame = self.obj - columns = self.columns - - for column in columns: - dtype = frame[column].dtype - if not self.is_valid_dtype_n_method(dtype): - raise TypeError( - f"Column {repr(column)} has dtype {dtype}, " - f"cannot use method {repr(method)} with this dtype" - ) - - def get_indexer(current_indexer, other_indexer): - """ - Helper function to concat `current_indexer` and `other_indexer` - depending on `method` - """ - if method == "nsmallest": - return current_indexer.append(other_indexer) - else: - return other_indexer.append(current_indexer) - - # Below we save and reset the index in case index contains duplicates - original_index = frame.index - cur_frame = frame = frame.reset_index(drop=True) - cur_n = n - indexer = Index([], dtype=np.int64) - - for i, column in 
enumerate(columns): - # For each column we apply method to cur_frame[column]. - # If it's the last column or if we have the number of - # results desired we are done. - # Otherwise there are duplicates of the largest/smallest - # value and we need to look at the rest of the columns - # to determine which of the rows with the largest/smallest - # value in the column to keep. - series = cur_frame[column] - is_last_column = len(columns) - 1 == i - values = getattr(series, method)( - cur_n, keep=self.keep if is_last_column else "all" - ) - - if is_last_column or len(values) <= cur_n: - indexer = get_indexer(indexer, values.index) - break - - # Now find all values which are equal to - # the (nsmallest: largest)/(nlargest: smallest) - # from our series. - border_value = values == values[values.index[-1]] - - # Some of these values are among the top-n - # some aren't. - unsafe_values = values[border_value] - - # These values are definitely among the top-n - safe_values = values[~border_value] - indexer = get_indexer(indexer, safe_values.index) - - # Go on and separate the unsafe_values on the remaining - # columns. - cur_frame = cur_frame.loc[unsafe_values.index] - cur_n = n - len(indexer) - - frame = frame.take(indexer) - - # Restore the index on frame - frame.index = original_index.take(indexer) - - # If there is only one column, the frame is already sorted. 
- if len(columns) == 1: - return frame - - ascending = method == "nsmallest" - - return frame.sort_values(columns, ascending=ascending, kind="mergesort") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_quantile.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_quantile.py deleted file mode 100644 index 61b253b24a7ecc6dfe43008a484b8adfb49ec8ce..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_quantile.py +++ /dev/null @@ -1,981 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -from pandas import ( - DataFrame, - Index, - Series, - Timestamp, -) -import pandas._testing as tm - - -@pytest.fixture( - params=[["linear", "single"], ["nearest", "table"]], ids=lambda x: "-".join(x) -) -def interp_method(request): - """(interpolation, method) arguments for quantile""" - return request.param - - -class TestDataFrameQuantile: - @pytest.mark.parametrize( - "df,expected", - [ - [ - DataFrame( - { - 0: Series(pd.arrays.SparseArray([1, 2])), - 1: Series(pd.arrays.SparseArray([3, 4])), - } - ), - Series([1.5, 3.5], name=0.5), - ], - [ - DataFrame(Series([0.0, None, 1.0, 2.0], dtype="Sparse[float]")), - Series([1.0], name=0.5), - ], - ], - ) - def test_quantile_sparse(self, df, expected): - # GH#17198 - # GH#24600 - result = df.quantile() - expected = expected.astype("Sparse[float]") - tm.assert_series_equal(result, expected) - - def test_quantile( - self, datetime_frame, interp_method, using_array_manager, request - ): - interpolation, method = interp_method - df = datetime_frame - result = df.quantile( - 0.1, axis=0, numeric_only=True, interpolation=interpolation, method=method - ) - expected = Series( - [np.percentile(df[col], 10) for col in df.columns], - index=df.columns, - name=0.1, - ) - if interpolation == "linear": - # np.percentile values only comparable 
to linear interpolation - tm.assert_series_equal(result, expected) - else: - tm.assert_index_equal(result.index, expected.index) - request.node.add_marker( - pytest.mark.xfail( - using_array_manager, reason="Name set incorrectly for arraymanager" - ) - ) - assert result.name == expected.name - - result = df.quantile( - 0.9, axis=1, numeric_only=True, interpolation=interpolation, method=method - ) - expected = Series( - [np.percentile(df.loc[date], 90) for date in df.index], - index=df.index, - name=0.9, - ) - if interpolation == "linear": - # np.percentile values only comparable to linear interpolation - tm.assert_series_equal(result, expected) - else: - tm.assert_index_equal(result.index, expected.index) - request.node.add_marker( - pytest.mark.xfail( - using_array_manager, reason="Name set incorrectly for arraymanager" - ) - ) - assert result.name == expected.name - - def test_empty(self, interp_method): - interpolation, method = interp_method - q = DataFrame({"x": [], "y": []}).quantile( - 0.1, axis=0, numeric_only=True, interpolation=interpolation, method=method - ) - assert np.isnan(q["x"]) and np.isnan(q["y"]) - - def test_non_numeric_exclusion(self, interp_method, request, using_array_manager): - interpolation, method = interp_method - df = DataFrame({"col1": ["A", "A", "B", "B"], "col2": [1, 2, 3, 4]}) - rs = df.quantile( - 0.5, numeric_only=True, interpolation=interpolation, method=method - ) - xp = df.median(numeric_only=True).rename(0.5) - if interpolation == "nearest": - xp = (xp + 0.5).astype(np.int64) - if method == "table" and using_array_manager: - request.node.add_marker( - pytest.mark.xfail(reason="Axis name incorrectly set.") - ) - tm.assert_series_equal(rs, xp) - - def test_axis(self, interp_method, request, using_array_manager): - # axis - interpolation, method = interp_method - df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) - result = df.quantile(0.5, axis=1, interpolation=interpolation, method=method) - expected = 
Series([1.5, 2.5, 3.5], index=[1, 2, 3], name=0.5) - if interpolation == "nearest": - expected = expected.astype(np.int64) - if method == "table" and using_array_manager: - request.node.add_marker( - pytest.mark.xfail(reason="Axis name incorrectly set.") - ) - tm.assert_series_equal(result, expected) - - result = df.quantile( - [0.5, 0.75], axis=1, interpolation=interpolation, method=method - ) - expected = DataFrame( - {1: [1.5, 1.75], 2: [2.5, 2.75], 3: [3.5, 3.75]}, index=[0.5, 0.75] - ) - if interpolation == "nearest": - expected.iloc[0, :] -= 0.5 - expected.iloc[1, :] += 0.25 - expected = expected.astype(np.int64) - tm.assert_frame_equal(result, expected, check_index_type=True) - - def test_axis_numeric_only_true(self, interp_method, request, using_array_manager): - # We may want to break API in the future to change this - # so that we exclude non-numeric along the same axis - # See GH #7312 - interpolation, method = interp_method - df = DataFrame([[1, 2, 3], ["a", "b", 4]]) - result = df.quantile( - 0.5, axis=1, numeric_only=True, interpolation=interpolation, method=method - ) - expected = Series([3.0, 4.0], index=[0, 1], name=0.5) - if interpolation == "nearest": - expected = expected.astype(np.int64) - if method == "table" and using_array_manager: - request.node.add_marker( - pytest.mark.xfail(reason="Axis name incorrectly set.") - ) - tm.assert_series_equal(result, expected) - - def test_quantile_date_range(self, interp_method, request, using_array_manager): - # GH 2460 - interpolation, method = interp_method - dti = pd.date_range("2016-01-01", periods=3, tz="US/Pacific") - ser = Series(dti) - df = DataFrame(ser) - - result = df.quantile( - numeric_only=False, interpolation=interpolation, method=method - ) - expected = Series( - ["2016-01-02 00:00:00"], name=0.5, dtype="datetime64[ns, US/Pacific]" - ) - if method == "table" and using_array_manager: - request.node.add_marker( - pytest.mark.xfail(reason="Axis name incorrectly set.") - ) - - 
tm.assert_series_equal(result, expected) - - def test_quantile_axis_mixed(self, interp_method, request, using_array_manager): - # mixed on axis=1 - interpolation, method = interp_method - df = DataFrame( - { - "A": [1, 2, 3], - "B": [2.0, 3.0, 4.0], - "C": pd.date_range("20130101", periods=3), - "D": ["foo", "bar", "baz"], - } - ) - result = df.quantile( - 0.5, axis=1, numeric_only=True, interpolation=interpolation, method=method - ) - expected = Series([1.5, 2.5, 3.5], name=0.5) - if interpolation == "nearest": - expected -= 0.5 - if method == "table" and using_array_manager: - request.node.add_marker( - pytest.mark.xfail(reason="Axis name incorrectly set.") - ) - tm.assert_series_equal(result, expected) - - # must raise - msg = "'<' not supported between instances of 'Timestamp' and 'float'" - with pytest.raises(TypeError, match=msg): - df.quantile(0.5, axis=1, numeric_only=False) - - def test_quantile_axis_parameter(self, interp_method, request, using_array_manager): - # GH 9543/9544 - interpolation, method = interp_method - if method == "table" and using_array_manager: - request.node.add_marker( - pytest.mark.xfail(reason="Axis name incorrectly set.") - ) - df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) - - result = df.quantile(0.5, axis=0, interpolation=interpolation, method=method) - - expected = Series([2.0, 3.0], index=["A", "B"], name=0.5) - if interpolation == "nearest": - expected = expected.astype(np.int64) - tm.assert_series_equal(result, expected) - - expected = df.quantile( - 0.5, axis="index", interpolation=interpolation, method=method - ) - if interpolation == "nearest": - expected = expected.astype(np.int64) - tm.assert_series_equal(result, expected) - - result = df.quantile(0.5, axis=1, interpolation=interpolation, method=method) - - expected = Series([1.5, 2.5, 3.5], index=[1, 2, 3], name=0.5) - if interpolation == "nearest": - expected = expected.astype(np.int64) - tm.assert_series_equal(result, expected) - - result = 
df.quantile( - 0.5, axis="columns", interpolation=interpolation, method=method - ) - tm.assert_series_equal(result, expected) - - msg = "No axis named -1 for object type DataFrame" - with pytest.raises(ValueError, match=msg): - df.quantile(0.1, axis=-1, interpolation=interpolation, method=method) - msg = "No axis named column for object type DataFrame" - with pytest.raises(ValueError, match=msg): - df.quantile(0.1, axis="column") - - def test_quantile_interpolation(self): - # see gh-10174 - - # interpolation method other than default linear - df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) - result = df.quantile(0.5, axis=1, interpolation="nearest") - expected = Series([1, 2, 3], index=[1, 2, 3], name=0.5) - tm.assert_series_equal(result, expected) - - # cross-check interpolation=nearest results in original dtype - exp = np.percentile( - np.array([[1, 2, 3], [2, 3, 4]]), - 0.5, - axis=0, - method="nearest", - ) - expected = Series(exp, index=[1, 2, 3], name=0.5, dtype="int64") - tm.assert_series_equal(result, expected) - - # float - df = DataFrame({"A": [1.0, 2.0, 3.0], "B": [2.0, 3.0, 4.0]}, index=[1, 2, 3]) - result = df.quantile(0.5, axis=1, interpolation="nearest") - expected = Series([1.0, 2.0, 3.0], index=[1, 2, 3], name=0.5) - tm.assert_series_equal(result, expected) - exp = np.percentile( - np.array([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]]), - 0.5, - axis=0, - method="nearest", - ) - expected = Series(exp, index=[1, 2, 3], name=0.5, dtype="float64") - tm.assert_series_equal(result, expected) - - # axis - result = df.quantile([0.5, 0.75], axis=1, interpolation="lower") - expected = DataFrame( - {1: [1.0, 1.0], 2: [2.0, 2.0], 3: [3.0, 3.0]}, index=[0.5, 0.75] - ) - tm.assert_frame_equal(result, expected) - - # test degenerate case - df = DataFrame({"x": [], "y": []}) - q = df.quantile(0.1, axis=0, interpolation="higher") - assert np.isnan(q["x"]) and np.isnan(q["y"]) - - # multi - df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], columns=["a", 
"b", "c"]) - result = df.quantile([0.25, 0.5], interpolation="midpoint") - - # https://github.com/numpy/numpy/issues/7163 - expected = DataFrame( - [[1.5, 1.5, 1.5], [2.0, 2.0, 2.0]], - index=[0.25, 0.5], - columns=["a", "b", "c"], - ) - tm.assert_frame_equal(result, expected) - - def test_quantile_interpolation_datetime(self, datetime_frame): - # see gh-10174 - - # interpolation = linear (default case) - df = datetime_frame - q = df.quantile(0.1, axis=0, numeric_only=True, interpolation="linear") - assert q["A"] == np.percentile(df["A"], 10) - - def test_quantile_interpolation_int(self, int_frame): - # see gh-10174 - - df = int_frame - # interpolation = linear (default case) - q = df.quantile(0.1) - assert q["A"] == np.percentile(df["A"], 10) - - # test with and without interpolation keyword - q1 = df.quantile(0.1, axis=0, interpolation="linear") - assert q1["A"] == np.percentile(df["A"], 10) - tm.assert_series_equal(q, q1) - - def test_quantile_multi(self, interp_method, request, using_array_manager): - interpolation, method = interp_method - df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], columns=["a", "b", "c"]) - result = df.quantile([0.25, 0.5], interpolation=interpolation, method=method) - expected = DataFrame( - [[1.5, 1.5, 1.5], [2.0, 2.0, 2.0]], - index=[0.25, 0.5], - columns=["a", "b", "c"], - ) - if interpolation == "nearest": - expected = expected.astype(np.int64) - if method == "table" and using_array_manager: - request.node.add_marker( - pytest.mark.xfail(reason="Axis name incorrectly set.") - ) - tm.assert_frame_equal(result, expected) - - def test_quantile_multi_axis_1(self, interp_method, request, using_array_manager): - interpolation, method = interp_method - df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], columns=["a", "b", "c"]) - result = df.quantile( - [0.25, 0.5], axis=1, interpolation=interpolation, method=method - ) - expected = DataFrame( - [[1.0, 2.0, 3.0]] * 2, index=[0.25, 0.5], columns=[0, 1, 2] - ) - if interpolation == 
"nearest": - expected = expected.astype(np.int64) - if method == "table" and using_array_manager: - request.node.add_marker( - pytest.mark.xfail(reason="Axis name incorrectly set.") - ) - tm.assert_frame_equal(result, expected) - - def test_quantile_multi_empty(self, interp_method): - interpolation, method = interp_method - result = DataFrame({"x": [], "y": []}).quantile( - [0.1, 0.9], axis=0, interpolation=interpolation, method=method - ) - expected = DataFrame( - {"x": [np.nan, np.nan], "y": [np.nan, np.nan]}, index=[0.1, 0.9] - ) - tm.assert_frame_equal(result, expected) - - def test_quantile_datetime(self): - df = DataFrame({"a": pd.to_datetime(["2010", "2011"]), "b": [0, 5]}) - - # exclude datetime - result = df.quantile(0.5, numeric_only=True) - expected = Series([2.5], index=["b"], name=0.5) - tm.assert_series_equal(result, expected) - - # datetime - result = df.quantile(0.5, numeric_only=False) - expected = Series( - [Timestamp("2010-07-02 12:00:00"), 2.5], index=["a", "b"], name=0.5 - ) - tm.assert_series_equal(result, expected) - - # datetime w/ multi - result = df.quantile([0.5], numeric_only=False) - expected = DataFrame( - [[Timestamp("2010-07-02 12:00:00"), 2.5]], index=[0.5], columns=["a", "b"] - ) - tm.assert_frame_equal(result, expected) - - # axis = 1 - df["c"] = pd.to_datetime(["2011", "2012"]) - result = df[["a", "c"]].quantile(0.5, axis=1, numeric_only=False) - expected = Series( - [Timestamp("2010-07-02 12:00:00"), Timestamp("2011-07-02 12:00:00")], - index=[0, 1], - name=0.5, - ) - tm.assert_series_equal(result, expected) - - result = df[["a", "c"]].quantile([0.5], axis=1, numeric_only=False) - expected = DataFrame( - [[Timestamp("2010-07-02 12:00:00"), Timestamp("2011-07-02 12:00:00")]], - index=[0.5], - columns=[0, 1], - ) - tm.assert_frame_equal(result, expected) - - # empty when numeric_only=True - result = df[["a", "c"]].quantile(0.5, numeric_only=True) - expected = Series([], index=[], dtype=np.float64, name=0.5) - 
tm.assert_series_equal(result, expected) - - result = df[["a", "c"]].quantile([0.5], numeric_only=True) - expected = DataFrame(index=[0.5], columns=[]) - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize( - "dtype", - [ - "datetime64[ns]", - "datetime64[ns, US/Pacific]", - "timedelta64[ns]", - "Period[D]", - ], - ) - def test_quantile_dt64_empty(self, dtype, interp_method): - # GH#41544 - interpolation, method = interp_method - df = DataFrame(columns=["a", "b"], dtype=dtype) - - res = df.quantile( - 0.5, axis=1, numeric_only=False, interpolation=interpolation, method=method - ) - expected = Series([], index=[], name=0.5, dtype=dtype) - tm.assert_series_equal(res, expected) - - # no columns in result, so no dtype preservation - res = df.quantile( - [0.5], - axis=1, - numeric_only=False, - interpolation=interpolation, - method=method, - ) - expected = DataFrame(index=[0.5], columns=[]) - tm.assert_frame_equal(res, expected) - - @pytest.mark.parametrize("invalid", [-1, 2, [0.5, -1], [0.5, 2]]) - def test_quantile_invalid(self, invalid, datetime_frame, interp_method): - msg = "percentiles should all be in the interval \\[0, 1\\]" - interpolation, method = interp_method - with pytest.raises(ValueError, match=msg): - datetime_frame.quantile(invalid, interpolation=interpolation, method=method) - - def test_quantile_box(self, interp_method, request, using_array_manager): - interpolation, method = interp_method - if method == "table" and using_array_manager: - request.node.add_marker( - pytest.mark.xfail(reason="Axis name incorrectly set.") - ) - df = DataFrame( - { - "A": [ - Timestamp("2011-01-01"), - Timestamp("2011-01-02"), - Timestamp("2011-01-03"), - ], - "B": [ - Timestamp("2011-01-01", tz="US/Eastern"), - Timestamp("2011-01-02", tz="US/Eastern"), - Timestamp("2011-01-03", tz="US/Eastern"), - ], - "C": [ - pd.Timedelta("1 days"), - pd.Timedelta("2 days"), - pd.Timedelta("3 days"), - ], - } - ) - - res = df.quantile( - 0.5, numeric_only=False, 
interpolation=interpolation, method=method - ) - - exp = Series( - [ - Timestamp("2011-01-02"), - Timestamp("2011-01-02", tz="US/Eastern"), - pd.Timedelta("2 days"), - ], - name=0.5, - index=["A", "B", "C"], - ) - tm.assert_series_equal(res, exp) - - res = df.quantile( - [0.5], numeric_only=False, interpolation=interpolation, method=method - ) - exp = DataFrame( - [ - [ - Timestamp("2011-01-02"), - Timestamp("2011-01-02", tz="US/Eastern"), - pd.Timedelta("2 days"), - ] - ], - index=[0.5], - columns=["A", "B", "C"], - ) - tm.assert_frame_equal(res, exp) - - def test_quantile_box_nat(self): - # DatetimeLikeBlock may be consolidated and contain NaT in different loc - df = DataFrame( - { - "A": [ - Timestamp("2011-01-01"), - pd.NaT, - Timestamp("2011-01-02"), - Timestamp("2011-01-03"), - ], - "a": [ - Timestamp("2011-01-01"), - Timestamp("2011-01-02"), - pd.NaT, - Timestamp("2011-01-03"), - ], - "B": [ - Timestamp("2011-01-01", tz="US/Eastern"), - pd.NaT, - Timestamp("2011-01-02", tz="US/Eastern"), - Timestamp("2011-01-03", tz="US/Eastern"), - ], - "b": [ - Timestamp("2011-01-01", tz="US/Eastern"), - Timestamp("2011-01-02", tz="US/Eastern"), - pd.NaT, - Timestamp("2011-01-03", tz="US/Eastern"), - ], - "C": [ - pd.Timedelta("1 days"), - pd.Timedelta("2 days"), - pd.Timedelta("3 days"), - pd.NaT, - ], - "c": [ - pd.NaT, - pd.Timedelta("1 days"), - pd.Timedelta("2 days"), - pd.Timedelta("3 days"), - ], - }, - columns=list("AaBbCc"), - ) - - res = df.quantile(0.5, numeric_only=False) - exp = Series( - [ - Timestamp("2011-01-02"), - Timestamp("2011-01-02"), - Timestamp("2011-01-02", tz="US/Eastern"), - Timestamp("2011-01-02", tz="US/Eastern"), - pd.Timedelta("2 days"), - pd.Timedelta("2 days"), - ], - name=0.5, - index=list("AaBbCc"), - ) - tm.assert_series_equal(res, exp) - - res = df.quantile([0.5], numeric_only=False) - exp = DataFrame( - [ - [ - Timestamp("2011-01-02"), - Timestamp("2011-01-02"), - Timestamp("2011-01-02", tz="US/Eastern"), - Timestamp("2011-01-02", 
tz="US/Eastern"), - pd.Timedelta("2 days"), - pd.Timedelta("2 days"), - ] - ], - index=[0.5], - columns=list("AaBbCc"), - ) - tm.assert_frame_equal(res, exp) - - def test_quantile_nan(self, interp_method, request, using_array_manager): - interpolation, method = interp_method - if method == "table" and using_array_manager: - request.node.add_marker( - pytest.mark.xfail(reason="Axis name incorrectly set.") - ) - # GH 14357 - float block where some cols have missing values - df = DataFrame({"a": np.arange(1, 6.0), "b": np.arange(1, 6.0)}) - df.iloc[-1, 1] = np.nan - - res = df.quantile(0.5, interpolation=interpolation, method=method) - exp = Series( - [3.0, 2.5 if interpolation == "linear" else 3.0], index=["a", "b"], name=0.5 - ) - tm.assert_series_equal(res, exp) - - res = df.quantile([0.5, 0.75], interpolation=interpolation, method=method) - exp = DataFrame( - { - "a": [3.0, 4.0], - "b": [2.5, 3.25] if interpolation == "linear" else [3.0, 4.0], - }, - index=[0.5, 0.75], - ) - tm.assert_frame_equal(res, exp) - - res = df.quantile(0.5, axis=1, interpolation=interpolation, method=method) - exp = Series(np.arange(1.0, 6.0), name=0.5) - tm.assert_series_equal(res, exp) - - res = df.quantile( - [0.5, 0.75], axis=1, interpolation=interpolation, method=method - ) - exp = DataFrame([np.arange(1.0, 6.0)] * 2, index=[0.5, 0.75]) - if interpolation == "nearest": - exp.iloc[1, -1] = np.nan - tm.assert_frame_equal(res, exp) - - # full-nan column - df["b"] = np.nan - - res = df.quantile(0.5, interpolation=interpolation, method=method) - exp = Series([3.0, np.nan], index=["a", "b"], name=0.5) - tm.assert_series_equal(res, exp) - - res = df.quantile([0.5, 0.75], interpolation=interpolation, method=method) - exp = DataFrame({"a": [3.0, 4.0], "b": [np.nan, np.nan]}, index=[0.5, 0.75]) - tm.assert_frame_equal(res, exp) - - def test_quantile_nat(self, interp_method, request, using_array_manager): - interpolation, method = interp_method - if method == "table" and using_array_manager: - 
request.node.add_marker( - pytest.mark.xfail(reason="Axis name incorrectly set.") - ) - # full NaT column - df = DataFrame({"a": [pd.NaT, pd.NaT, pd.NaT]}) - - res = df.quantile( - 0.5, numeric_only=False, interpolation=interpolation, method=method - ) - exp = Series([pd.NaT], index=["a"], name=0.5) - tm.assert_series_equal(res, exp) - - res = df.quantile( - [0.5], numeric_only=False, interpolation=interpolation, method=method - ) - exp = DataFrame({"a": [pd.NaT]}, index=[0.5]) - tm.assert_frame_equal(res, exp) - - # mixed non-null / full null column - df = DataFrame( - { - "a": [ - Timestamp("2012-01-01"), - Timestamp("2012-01-02"), - Timestamp("2012-01-03"), - ], - "b": [pd.NaT, pd.NaT, pd.NaT], - } - ) - - res = df.quantile( - 0.5, numeric_only=False, interpolation=interpolation, method=method - ) - exp = Series([Timestamp("2012-01-02"), pd.NaT], index=["a", "b"], name=0.5) - tm.assert_series_equal(res, exp) - - res = df.quantile( - [0.5], numeric_only=False, interpolation=interpolation, method=method - ) - exp = DataFrame( - [[Timestamp("2012-01-02"), pd.NaT]], index=[0.5], columns=["a", "b"] - ) - tm.assert_frame_equal(res, exp) - - def test_quantile_empty_no_rows_floats(self, interp_method): - interpolation, method = interp_method - - df = DataFrame(columns=["a", "b"], dtype="float64") - - res = df.quantile(0.5, interpolation=interpolation, method=method) - exp = Series([np.nan, np.nan], index=["a", "b"], name=0.5) - tm.assert_series_equal(res, exp) - - res = df.quantile([0.5], interpolation=interpolation, method=method) - exp = DataFrame([[np.nan, np.nan]], columns=["a", "b"], index=[0.5]) - tm.assert_frame_equal(res, exp) - - res = df.quantile(0.5, axis=1, interpolation=interpolation, method=method) - exp = Series([], index=[], dtype="float64", name=0.5) - tm.assert_series_equal(res, exp) - - res = df.quantile([0.5], axis=1, interpolation=interpolation, method=method) - exp = DataFrame(columns=[], index=[0.5]) - tm.assert_frame_equal(res, exp) - - def 
test_quantile_empty_no_rows_ints(self, interp_method): - interpolation, method = interp_method - df = DataFrame(columns=["a", "b"], dtype="int64") - - res = df.quantile(0.5, interpolation=interpolation, method=method) - exp = Series([np.nan, np.nan], index=["a", "b"], name=0.5) - tm.assert_series_equal(res, exp) - - def test_quantile_empty_no_rows_dt64(self, interp_method): - interpolation, method = interp_method - # datetimes - df = DataFrame(columns=["a", "b"], dtype="datetime64[ns]") - - res = df.quantile( - 0.5, numeric_only=False, interpolation=interpolation, method=method - ) - exp = Series( - [pd.NaT, pd.NaT], index=["a", "b"], dtype="datetime64[ns]", name=0.5 - ) - tm.assert_series_equal(res, exp) - - # Mixed dt64/dt64tz - df["a"] = df["a"].dt.tz_localize("US/Central") - res = df.quantile( - 0.5, numeric_only=False, interpolation=interpolation, method=method - ) - exp = exp.astype(object) - if interpolation == "nearest": - # GH#18463 TODO: would we prefer NaTs here? - msg = "The 'downcast' keyword in fillna is deprecated" - with tm.assert_produces_warning(FutureWarning, match=msg): - exp = exp.fillna(np.nan, downcast=False) - tm.assert_series_equal(res, exp) - - # both dt64tz - df["b"] = df["b"].dt.tz_localize("US/Central") - res = df.quantile( - 0.5, numeric_only=False, interpolation=interpolation, method=method - ) - exp = exp.astype(df["b"].dtype) - tm.assert_series_equal(res, exp) - - def test_quantile_empty_no_columns(self, interp_method): - # GH#23925 _get_numeric_data may drop all columns - interpolation, method = interp_method - df = DataFrame(pd.date_range("1/1/18", periods=5)) - df.columns.name = "captain tightpants" - result = df.quantile( - 0.5, numeric_only=True, interpolation=interpolation, method=method - ) - expected = Series([], index=[], name=0.5, dtype=np.float64) - expected.index.name = "captain tightpants" - tm.assert_series_equal(result, expected) - - result = df.quantile( - [0.5], numeric_only=True, interpolation=interpolation, 
method=method - ) - expected = DataFrame([], index=[0.5], columns=[]) - expected.columns.name = "captain tightpants" - tm.assert_frame_equal(result, expected) - - def test_quantile_item_cache( - self, using_array_manager, interp_method, using_copy_on_write - ): - # previous behavior incorrect retained an invalid _item_cache entry - interpolation, method = interp_method - df = DataFrame( - np.random.default_rng(2).standard_normal((4, 3)), columns=["A", "B", "C"] - ) - df["D"] = df["A"] * 2 - ser = df["A"] - if not using_array_manager: - assert len(df._mgr.blocks) == 2 - - df.quantile(numeric_only=False, interpolation=interpolation, method=method) - - if using_copy_on_write: - ser.iloc[0] = 99 - assert df.iloc[0, 0] == df["A"][0] - assert df.iloc[0, 0] != 99 - else: - ser.values[0] = 99 - assert df.iloc[0, 0] == df["A"][0] - assert df.iloc[0, 0] == 99 - - def test_invalid_method(self): - with pytest.raises(ValueError, match="Invalid method: foo"): - DataFrame(range(1)).quantile(0.5, method="foo") - - def test_table_invalid_interpolation(self): - with pytest.raises(ValueError, match="Invalid interpolation: foo"): - DataFrame(range(1)).quantile(0.5, method="table", interpolation="foo") - - -class TestQuantileExtensionDtype: - # TODO: tests for axis=1? - # TODO: empty case? 
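For context on the casting behavior the extension-dtype tests in this deleted file assert: `quantile` on a nullable `Int64` series comes back as `Float64`, mirroring how non-nullable `int64` quantiles come back as `float64` (interpolation can produce fractional values). A minimal sketch, assuming a pandas 2.x install with nullable dtypes:

```python
import numpy as np
import pandas as pd

# Quantiles interpolate, so integer inputs yield floating results;
# for the nullable "Int64" dtype the result dtype is "Float64".
ser = pd.Series(np.arange(9), dtype="Int64")

res = ser.quantile([0.5, 0.0, 1.0])
print(res.dtype)     # Float64
print(res.tolist())  # [4.0, 0.0, 8.0]
```

This matches the `exp_dtype = "Float64"` branch in `test_quantile_ea` below, which the tests describe as "match non-nullable casting behavior".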
- - @pytest.fixture( - params=[ - pytest.param( - pd.IntervalIndex.from_breaks(range(10)), - marks=pytest.mark.xfail(reason="raises when trying to add Intervals"), - ), - pd.period_range("2016-01-01", periods=9, freq="D"), - pd.date_range("2016-01-01", periods=9, tz="US/Pacific"), - pd.timedelta_range("1 Day", periods=9), - pd.array(np.arange(9), dtype="Int64"), - pd.array(np.arange(9), dtype="Float64"), - ], - ids=lambda x: str(x.dtype), - ) - def index(self, request): - # NB: not actually an Index object - idx = request.param - idx.name = "A" - return idx - - @pytest.fixture - def obj(self, index, frame_or_series): - # bc index is not always an Index (yet), we need to re-patch .name - obj = frame_or_series(index).copy() - - if frame_or_series is Series: - obj.name = "A" - else: - obj.columns = ["A"] - return obj - - def compute_quantile(self, obj, qs): - if isinstance(obj, Series): - result = obj.quantile(qs) - else: - result = obj.quantile(qs, numeric_only=False) - return result - - def test_quantile_ea(self, request, obj, index): - # result should be invariant to shuffling - indexer = np.arange(len(index), dtype=np.intp) - np.random.default_rng(2).shuffle(indexer) - obj = obj.iloc[indexer] - - qs = [0.5, 0, 1] - result = self.compute_quantile(obj, qs) - - exp_dtype = index.dtype - if index.dtype == "Int64": - # match non-nullable casting behavior - exp_dtype = "Float64" - - # expected here assumes len(index) == 9 - expected = Series( - [index[4], index[0], index[-1]], dtype=exp_dtype, index=qs, name="A" - ) - expected = type(obj)(expected) - - tm.assert_equal(result, expected) - - def test_quantile_ea_with_na(self, obj, index): - obj.iloc[0] = index._na_value - obj.iloc[-1] = index._na_value - - # result should be invariant to shuffling - indexer = np.arange(len(index), dtype=np.intp) - np.random.default_rng(2).shuffle(indexer) - obj = obj.iloc[indexer] - - qs = [0.5, 0, 1] - result = self.compute_quantile(obj, qs) - - # expected here assumes len(index) == 9 - 
expected = Series( - [index[4], index[1], index[-2]], dtype=index.dtype, index=qs, name="A" - ) - expected = type(obj)(expected) - tm.assert_equal(result, expected) - - def test_quantile_ea_all_na(self, request, obj, index): - obj.iloc[:] = index._na_value - # Check dtypes were preserved; this was once a problem see GH#39763 - assert np.all(obj.dtypes == index.dtype) - - # result should be invariant to shuffling - indexer = np.arange(len(index), dtype=np.intp) - np.random.default_rng(2).shuffle(indexer) - obj = obj.iloc[indexer] - - qs = [0.5, 0, 1] - result = self.compute_quantile(obj, qs) - - expected = index.take([-1, -1, -1], allow_fill=True, fill_value=index._na_value) - expected = Series(expected, index=qs, name="A") - expected = type(obj)(expected) - tm.assert_equal(result, expected) - - def test_quantile_ea_scalar(self, request, obj, index): - # scalar qs - - # result should be invariant to shuffling - indexer = np.arange(len(index), dtype=np.intp) - np.random.default_rng(2).shuffle(indexer) - obj = obj.iloc[indexer] - - qs = 0.5 - result = self.compute_quantile(obj, qs) - - exp_dtype = index.dtype - if index.dtype == "Int64": - exp_dtype = "Float64" - - expected = Series({"A": index[4]}, dtype=exp_dtype, name=0.5) - if isinstance(obj, Series): - expected = expected["A"] - assert result == expected - else: - tm.assert_series_equal(result, expected) - - @pytest.mark.parametrize( - "dtype, expected_data, expected_index, axis", - [ - ["float64", [], [], 1], - ["int64", [], [], 1], - ["float64", [np.nan, np.nan], ["a", "b"], 0], - ["int64", [np.nan, np.nan], ["a", "b"], 0], - ], - ) - def test_empty_numeric(self, dtype, expected_data, expected_index, axis): - # GH 14564 - df = DataFrame(columns=["a", "b"], dtype=dtype) - result = df.quantile(0.5, axis=axis) - expected = Series( - expected_data, name=0.5, index=Index(expected_index), dtype="float64" - ) - tm.assert_series_equal(result, expected) - - @pytest.mark.parametrize( - "dtype, expected_data, 
expected_index, axis, expected_dtype", - [ - ["datetime64[ns]", [], [], 1, "datetime64[ns]"], - ["datetime64[ns]", [pd.NaT, pd.NaT], ["a", "b"], 0, "datetime64[ns]"], - ], - ) - def test_empty_datelike( - self, dtype, expected_data, expected_index, axis, expected_dtype - ): - # GH 14564 - df = DataFrame(columns=["a", "b"], dtype=dtype) - result = df.quantile(0.5, axis=axis, numeric_only=False) - expected = Series( - expected_data, name=0.5, index=Index(expected_index), dtype=expected_dtype - ) - tm.assert_series_equal(result, expected) - - @pytest.mark.parametrize( - "expected_data, expected_index, axis", - [ - [[np.nan, np.nan], range(2), 1], - [[], [], 0], - ], - ) - def test_datelike_numeric_only(self, expected_data, expected_index, axis): - # GH 14564 - df = DataFrame( - { - "a": pd.to_datetime(["2010", "2011"]), - "b": [0, 5], - "c": pd.to_datetime(["2011", "2012"]), - } - ) - result = df[["a", "c"]].quantile(0.5, axis=axis, numeric_only=True) - expected = Series( - expected_data, name=0.5, index=Index(expected_index), dtype=np.float64 - ) - tm.assert_series_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_arithmetic.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_arithmetic.py deleted file mode 100644 index e5a8feb7a89d31b8f4c20f3d21aca946547e739a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_arithmetic.py +++ /dev/null @@ -1,2129 +0,0 @@ -from collections import deque -from datetime import ( - datetime, - timezone, -) -from enum import Enum -import functools -import operator -import re - -import numpy as np -import pytest - -import pandas.util._test_decorators as td - -import pandas as pd -from pandas import ( - DataFrame, - Index, - MultiIndex, - Series, -) -import pandas._testing as tm -from pandas.core.computation import expressions as expr -from 
pandas.core.computation.expressions import _MIN_ELEMENTS -from pandas.tests.frame.common import ( - _check_mixed_float, - _check_mixed_int, -) -from pandas.util.version import Version - - -@pytest.fixture(autouse=True, params=[0, 1000000], ids=["numexpr", "python"]) -def switch_numexpr_min_elements(request): - _MIN_ELEMENTS = expr._MIN_ELEMENTS - expr._MIN_ELEMENTS = request.param - yield request.param - expr._MIN_ELEMENTS = _MIN_ELEMENTS - - -class DummyElement: - def __init__(self, value, dtype) -> None: - self.value = value - self.dtype = np.dtype(dtype) - - def __array__(self): - return np.array(self.value, dtype=self.dtype) - - def __str__(self) -> str: - return f"DummyElement({self.value}, {self.dtype})" - - def __repr__(self) -> str: - return str(self) - - def astype(self, dtype, copy=False): - self.dtype = dtype - return self - - def view(self, dtype): - return type(self)(self.value.view(dtype), dtype) - - def any(self, axis=None): - return bool(self.value) - - -# ------------------------------------------------------------------- -# Comparisons - - -class TestFrameComparisons: - # Specifically _not_ flex-comparisons - - def test_comparison_with_categorical_dtype(self): - # GH#12564 - - df = DataFrame({"A": ["foo", "bar", "baz"]}) - exp = DataFrame({"A": [True, False, False]}) - - res = df == "foo" - tm.assert_frame_equal(res, exp) - - # casting to categorical shouldn't affect the result - df["A"] = df["A"].astype("category") - - res = df == "foo" - tm.assert_frame_equal(res, exp) - - def test_frame_in_list(self): - # GH#12689 this should raise at the DataFrame level, not blocks - df = DataFrame( - np.random.default_rng(2).standard_normal((6, 4)), columns=list("ABCD") - ) - msg = "The truth value of a DataFrame is ambiguous" - with pytest.raises(ValueError, match=msg): - df in [None] - - @pytest.mark.parametrize( - "arg, arg2", - [ - [ - { - "a": np.random.default_rng(2).integers(10, size=10), - "b": pd.date_range("20010101", periods=10), - }, - { - "a": 
np.random.default_rng(2).integers(10, size=10),
-                    "b": np.random.default_rng(2).integers(10, size=10),
-                },
-            ],
-            [
-                {
-                    "a": np.random.default_rng(2).integers(10, size=10),
-                    "b": np.random.default_rng(2).integers(10, size=10),
-                },
-                {
-                    "a": np.random.default_rng(2).integers(10, size=10),
-                    "b": pd.date_range("20010101", periods=10),
-                },
-            ],
-            [
-                {
-                    "a": pd.date_range("20010101", periods=10),
-                    "b": pd.date_range("20010101", periods=10),
-                },
-                {
-                    "a": np.random.default_rng(2).integers(10, size=10),
-                    "b": np.random.default_rng(2).integers(10, size=10),
-                },
-            ],
-            [
-                {
-                    "a": np.random.default_rng(2).integers(10, size=10),
-                    "b": pd.date_range("20010101", periods=10),
-                },
-                {
-                    "a": pd.date_range("20010101", periods=10),
-                    "b": pd.date_range("20010101", periods=10),
-                },
-            ],
-        ],
-    )
-    def test_comparison_invalid(self, arg, arg2):
-        # GH4968
-        # invalid date/int comparisons
-        x = DataFrame(arg)
-        y = DataFrame(arg2)
-        # we expect the result to match Series comparisons for
-        # == and !=, inequalities should raise
-        result = x == y
-        expected = DataFrame(
-            {col: x[col] == y[col] for col in x.columns},
-            index=x.index,
-            columns=x.columns,
-        )
-        tm.assert_frame_equal(result, expected)
-
-        result = x != y
-        expected = DataFrame(
-            {col: x[col] != y[col] for col in x.columns},
-            index=x.index,
-            columns=x.columns,
-        )
-        tm.assert_frame_equal(result, expected)
-
-        msgs = [
-            r"Invalid comparison between dtype=datetime64\[ns\] and ndarray",
-            "invalid type promotion",
-            (
-                # npdev 1.20.0
-                r"The DTypes <class 'numpy.dtype\[datetime64\]'> and "
-                r"<class 'numpy.dtype\[int64\]'> do not have a common DType."
- ), - ] - msg = "|".join(msgs) - with pytest.raises(TypeError, match=msg): - x >= y - with pytest.raises(TypeError, match=msg): - x > y - with pytest.raises(TypeError, match=msg): - x < y - with pytest.raises(TypeError, match=msg): - x <= y - - @pytest.mark.parametrize( - "left, right", - [ - ("gt", "lt"), - ("lt", "gt"), - ("ge", "le"), - ("le", "ge"), - ("eq", "eq"), - ("ne", "ne"), - ], - ) - def test_timestamp_compare(self, left, right): - # make sure we can compare Timestamps on the right AND left hand side - # GH#4982 - df = DataFrame( - { - "dates1": pd.date_range("20010101", periods=10), - "dates2": pd.date_range("20010102", periods=10), - "intcol": np.random.default_rng(2).integers(1000000000, size=10), - "floatcol": np.random.default_rng(2).standard_normal(10), - "stringcol": [chr(100 + i) for i in range(10)], - } - ) - df.loc[np.random.default_rng(2).random(len(df)) > 0.5, "dates2"] = pd.NaT - left_f = getattr(operator, left) - right_f = getattr(operator, right) - - # no nats - if left in ["eq", "ne"]: - expected = left_f(df, pd.Timestamp("20010109")) - result = right_f(pd.Timestamp("20010109"), df) - tm.assert_frame_equal(result, expected) - else: - msg = ( - "'(<|>)=?' not supported between " - "instances of 'numpy.ndarray' and 'Timestamp'" - ) - with pytest.raises(TypeError, match=msg): - left_f(df, pd.Timestamp("20010109")) - with pytest.raises(TypeError, match=msg): - right_f(pd.Timestamp("20010109"), df) - # nats - if left in ["eq", "ne"]: - expected = left_f(df, pd.Timestamp("nat")) - result = right_f(pd.Timestamp("nat"), df) - tm.assert_frame_equal(result, expected) - else: - msg = ( - "'(<|>)=?' 
not supported between " - "instances of 'numpy.ndarray' and 'NaTType'" - ) - with pytest.raises(TypeError, match=msg): - left_f(df, pd.Timestamp("nat")) - with pytest.raises(TypeError, match=msg): - right_f(pd.Timestamp("nat"), df) - - def test_mixed_comparison(self): - # GH#13128, GH#22163 != datetime64 vs non-dt64 should be False, - # not raise TypeError - # (this appears to be fixed before GH#22163, not sure when) - df = DataFrame([["1989-08-01", 1], ["1989-08-01", 2]]) - other = DataFrame([["a", "b"], ["c", "d"]]) - - result = df == other - assert not result.any().any() - - result = df != other - assert result.all().all() - - def test_df_boolean_comparison_error(self): - # GH#4576, GH#22880 - # comparing DataFrame against list/tuple with len(obj) matching - # len(df.columns) is supported as of GH#22800 - df = DataFrame(np.arange(6).reshape((3, 2))) - - expected = DataFrame([[False, False], [True, False], [False, False]]) - - result = df == (2, 2) - tm.assert_frame_equal(result, expected) - - result = df == [2, 2] - tm.assert_frame_equal(result, expected) - - def test_df_float_none_comparison(self): - df = DataFrame( - np.random.default_rng(2).standard_normal((8, 3)), - index=range(8), - columns=["A", "B", "C"], - ) - - result = df.__eq__(None) - assert not result.any().any() - - def test_df_string_comparison(self): - df = DataFrame([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}]) - mask_a = df.a > 1 - tm.assert_frame_equal(df[mask_a], df.loc[1:1, :]) - tm.assert_frame_equal(df[-mask_a], df.loc[0:0, :]) - - mask_b = df.b == "foo" - tm.assert_frame_equal(df[mask_b], df.loc[0:0, :]) - tm.assert_frame_equal(df[-mask_b], df.loc[1:1, :]) - - -class TestFrameFlexComparisons: - # TODO: test_bool_flex_frame needs a better name - @pytest.mark.parametrize("op", ["eq", "ne", "gt", "lt", "ge", "le"]) - def test_bool_flex_frame(self, op): - data = np.random.default_rng(2).standard_normal((5, 3)) - other_data = np.random.default_rng(2).standard_normal((5, 3)) - df = 
DataFrame(data) - other = DataFrame(other_data) - ndim_5 = np.ones(df.shape + (1, 3)) - - # DataFrame - assert df.eq(df).values.all() - assert not df.ne(df).values.any() - f = getattr(df, op) - o = getattr(operator, op) - # No NAs - tm.assert_frame_equal(f(other), o(df, other)) - # Unaligned - part_o = other.loc[3:, 1:].copy() - rs = f(part_o) - xp = o(df, part_o.reindex(index=df.index, columns=df.columns)) - tm.assert_frame_equal(rs, xp) - # ndarray - tm.assert_frame_equal(f(other.values), o(df, other.values)) - # scalar - tm.assert_frame_equal(f(0), o(df, 0)) - # NAs - msg = "Unable to coerce to Series/DataFrame" - tm.assert_frame_equal(f(np.nan), o(df, np.nan)) - with pytest.raises(ValueError, match=msg): - f(ndim_5) - - @pytest.mark.parametrize("box", [np.array, Series]) - def test_bool_flex_series(self, box): - # Series - # list/tuple - data = np.random.default_rng(2).standard_normal((5, 3)) - df = DataFrame(data) - idx_ser = box(np.random.default_rng(2).standard_normal(5)) - col_ser = box(np.random.default_rng(2).standard_normal(3)) - - idx_eq = df.eq(idx_ser, axis=0) - col_eq = df.eq(col_ser) - idx_ne = df.ne(idx_ser, axis=0) - col_ne = df.ne(col_ser) - tm.assert_frame_equal(col_eq, df == Series(col_ser)) - tm.assert_frame_equal(col_eq, -col_ne) - tm.assert_frame_equal(idx_eq, -idx_ne) - tm.assert_frame_equal(idx_eq, df.T.eq(idx_ser).T) - tm.assert_frame_equal(col_eq, df.eq(list(col_ser))) - tm.assert_frame_equal(idx_eq, df.eq(Series(idx_ser), axis=0)) - tm.assert_frame_equal(idx_eq, df.eq(list(idx_ser), axis=0)) - - idx_gt = df.gt(idx_ser, axis=0) - col_gt = df.gt(col_ser) - idx_le = df.le(idx_ser, axis=0) - col_le = df.le(col_ser) - - tm.assert_frame_equal(col_gt, df > Series(col_ser)) - tm.assert_frame_equal(col_gt, -col_le) - tm.assert_frame_equal(idx_gt, -idx_le) - tm.assert_frame_equal(idx_gt, df.T.gt(idx_ser).T) - - idx_ge = df.ge(idx_ser, axis=0) - col_ge = df.ge(col_ser) - idx_lt = df.lt(idx_ser, axis=0) - col_lt = df.lt(col_ser) - 
tm.assert_frame_equal(col_ge, df >= Series(col_ser)) - tm.assert_frame_equal(col_ge, -col_lt) - tm.assert_frame_equal(idx_ge, -idx_lt) - tm.assert_frame_equal(idx_ge, df.T.ge(idx_ser).T) - - idx_ser = Series(np.random.default_rng(2).standard_normal(5)) - col_ser = Series(np.random.default_rng(2).standard_normal(3)) - - def test_bool_flex_frame_na(self): - df = DataFrame(np.random.default_rng(2).standard_normal((5, 3))) - # NA - df.loc[0, 0] = np.nan - rs = df.eq(df) - assert not rs.loc[0, 0] - rs = df.ne(df) - assert rs.loc[0, 0] - rs = df.gt(df) - assert not rs.loc[0, 0] - rs = df.lt(df) - assert not rs.loc[0, 0] - rs = df.ge(df) - assert not rs.loc[0, 0] - rs = df.le(df) - assert not rs.loc[0, 0] - - def test_bool_flex_frame_complex_dtype(self): - # complex - arr = np.array([np.nan, 1, 6, np.nan]) - arr2 = np.array([2j, np.nan, 7, None]) - df = DataFrame({"a": arr}) - df2 = DataFrame({"a": arr2}) - - msg = "|".join( - [ - "'>' not supported between instances of '.*' and 'complex'", - r"unorderable types: .*complex\(\)", # PY35 - ] - ) - with pytest.raises(TypeError, match=msg): - # inequalities are not well-defined for complex numbers - df.gt(df2) - with pytest.raises(TypeError, match=msg): - # regression test that we get the same behavior for Series - df["a"].gt(df2["a"]) - with pytest.raises(TypeError, match=msg): - # Check that we match numpy behavior here - df.values > df2.values - - rs = df.ne(df2) - assert rs.values.all() - - arr3 = np.array([2j, np.nan, None]) - df3 = DataFrame({"a": arr3}) - - with pytest.raises(TypeError, match=msg): - # inequalities are not well-defined for complex numbers - df3.gt(2j) - with pytest.raises(TypeError, match=msg): - # regression test that we get the same behavior for Series - df3["a"].gt(2j) - with pytest.raises(TypeError, match=msg): - # Check that we match numpy behavior here - df3.values > 2j - - def test_bool_flex_frame_object_dtype(self): - # corner, dtype=object - df1 = DataFrame({"col": ["foo", np.nan, "bar"]}) - 
df2 = DataFrame({"col": ["foo", datetime.now(), "bar"]}) - result = df1.ne(df2) - exp = DataFrame({"col": [False, True, False]}) - tm.assert_frame_equal(result, exp) - - def test_flex_comparison_nat(self): - # GH 15697, GH 22163 df.eq(pd.NaT) should behave like df == pd.NaT, - # and _definitely_ not be NaN - df = DataFrame([pd.NaT]) - - result = df == pd.NaT - # result.iloc[0, 0] is a np.bool_ object - assert result.iloc[0, 0].item() is False - - result = df.eq(pd.NaT) - assert result.iloc[0, 0].item() is False - - result = df != pd.NaT - assert result.iloc[0, 0].item() is True - - result = df.ne(pd.NaT) - assert result.iloc[0, 0].item() is True - - @pytest.mark.parametrize("opname", ["eq", "ne", "gt", "lt", "ge", "le"]) - def test_df_flex_cmp_constant_return_types(self, opname): - # GH 15077, non-empty DataFrame - df = DataFrame({"x": [1, 2, 3], "y": [1.0, 2.0, 3.0]}) - const = 2 - - result = getattr(df, opname)(const).dtypes.value_counts() - tm.assert_series_equal( - result, Series([2], index=[np.dtype(bool)], name="count") - ) - - @pytest.mark.parametrize("opname", ["eq", "ne", "gt", "lt", "ge", "le"]) - def test_df_flex_cmp_constant_return_types_empty(self, opname): - # GH 15077 empty DataFrame - df = DataFrame({"x": [1, 2, 3], "y": [1.0, 2.0, 3.0]}) - const = 2 - - empty = df.iloc[:0] - result = getattr(empty, opname)(const).dtypes.value_counts() - tm.assert_series_equal( - result, Series([2], index=[np.dtype(bool)], name="count") - ) - - def test_df_flex_cmp_ea_dtype_with_ndarray_series(self): - ii = pd.IntervalIndex.from_breaks([1, 2, 3]) - df = DataFrame({"A": ii, "B": ii}) - - ser = Series([0, 0]) - res = df.eq(ser, axis=0) - - expected = DataFrame({"A": [False, False], "B": [False, False]}) - tm.assert_frame_equal(res, expected) - - ser2 = Series([1, 2], index=["A", "B"]) - res2 = df.eq(ser2, axis=1) - tm.assert_frame_equal(res2, expected) - - -# ------------------------------------------------------------------- -# Arithmetic - - -class 
TestFrameFlexArithmetic: - def test_floordiv_axis0(self): - # make sure we df.floordiv(ser, axis=0) matches column-wise result - arr = np.arange(3) - ser = Series(arr) - df = DataFrame({"A": ser, "B": ser}) - - result = df.floordiv(ser, axis=0) - - expected = DataFrame({col: df[col] // ser for col in df.columns}) - - tm.assert_frame_equal(result, expected) - - result2 = df.floordiv(ser.values, axis=0) - tm.assert_frame_equal(result2, expected) - - @pytest.mark.parametrize("opname", ["floordiv", "pow"]) - def test_floordiv_axis0_numexpr_path(self, opname, request): - # case that goes through numexpr and has to fall back to masked_arith_op - ne = pytest.importorskip("numexpr") - if ( - Version(ne.__version__) >= Version("2.8.7") - and opname == "pow" - and "python" in request.node.callspec.id - ): - request.node.add_marker( - pytest.mark.xfail(reason="https://github.com/pydata/numexpr/issues/454") - ) - - op = getattr(operator, opname) - - arr = np.arange(_MIN_ELEMENTS + 100).reshape(_MIN_ELEMENTS // 100 + 1, -1) * 100 - df = DataFrame(arr) - df["C"] = 1.0 - - ser = df[0] - result = getattr(df, opname)(ser, axis=0) - - expected = DataFrame({col: op(df[col], ser) for col in df.columns}) - tm.assert_frame_equal(result, expected) - - result2 = getattr(df, opname)(ser.values, axis=0) - tm.assert_frame_equal(result2, expected) - - def test_df_add_td64_columnwise(self): - # GH 22534 Check that column-wise addition broadcasts correctly - dti = pd.date_range("2016-01-01", periods=10) - tdi = pd.timedelta_range("1", periods=10) - tser = Series(tdi) - df = DataFrame({0: dti, 1: tdi}) - - result = df.add(tser, axis=0) - expected = DataFrame({0: dti + tdi, 1: tdi + tdi}) - tm.assert_frame_equal(result, expected) - - def test_df_add_flex_filled_mixed_dtypes(self): - # GH 19611 - dti = pd.date_range("2016-01-01", periods=3) - ser = Series(["1 Day", "NaT", "2 Days"], dtype="timedelta64[ns]") - df = DataFrame({"A": dti, "B": ser}) - other = DataFrame({"A": ser, "B": ser}) - fill = 
pd.Timedelta(days=1).to_timedelta64() - result = df.add(other, fill_value=fill) - - expected = DataFrame( - { - "A": Series( - ["2016-01-02", "2016-01-03", "2016-01-05"], dtype="datetime64[ns]" - ), - "B": ser * 2, - } - ) - tm.assert_frame_equal(result, expected) - - def test_arith_flex_frame( - self, all_arithmetic_operators, float_frame, mixed_float_frame - ): - # one instance of parametrized fixture - op = all_arithmetic_operators - - def f(x, y): - # r-versions not in operator-stdlib; get op without "r" and invert - if op.startswith("__r"): - return getattr(operator, op.replace("__r", "__"))(y, x) - return getattr(operator, op)(x, y) - - result = getattr(float_frame, op)(2 * float_frame) - expected = f(float_frame, 2 * float_frame) - tm.assert_frame_equal(result, expected) - - # vs mix float - result = getattr(mixed_float_frame, op)(2 * mixed_float_frame) - expected = f(mixed_float_frame, 2 * mixed_float_frame) - tm.assert_frame_equal(result, expected) - _check_mixed_float(result, dtype={"C": None}) - - @pytest.mark.parametrize("op", ["__add__", "__sub__", "__mul__"]) - def test_arith_flex_frame_mixed( - self, - op, - int_frame, - mixed_int_frame, - mixed_float_frame, - switch_numexpr_min_elements, - ): - f = getattr(operator, op) - - # vs mix int - result = getattr(mixed_int_frame, op)(2 + mixed_int_frame) - expected = f(mixed_int_frame, 2 + mixed_int_frame) - - # no overflow in the uint - dtype = None - if op in ["__sub__"]: - dtype = {"B": "uint64", "C": None} - elif op in ["__add__", "__mul__"]: - dtype = {"C": None} - if expr.USE_NUMEXPR and switch_numexpr_min_elements == 0: - # when using numexpr, the casting rules are slightly different: - # in the `2 + mixed_int_frame` operation, int32 column becomes - # and int64 column (not preserving dtype in operation with Python - # scalar), and then the int32/int64 combo results in int64 result - dtype["A"] = (2 + mixed_int_frame)["A"].dtype - tm.assert_frame_equal(result, expected) - _check_mixed_int(result, 
dtype=dtype) - - # vs mix float - result = getattr(mixed_float_frame, op)(2 * mixed_float_frame) - expected = f(mixed_float_frame, 2 * mixed_float_frame) - tm.assert_frame_equal(result, expected) - _check_mixed_float(result, dtype={"C": None}) - - # vs plain int - result = getattr(int_frame, op)(2 * int_frame) - expected = f(int_frame, 2 * int_frame) - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize("dim", range(3, 6)) - def test_arith_flex_frame_raise(self, all_arithmetic_operators, float_frame, dim): - # one instance of parametrized fixture - op = all_arithmetic_operators - - # Check that arrays with dim >= 3 raise - arr = np.ones((1,) * dim) - msg = "Unable to coerce to Series/DataFrame" - with pytest.raises(ValueError, match=msg): - getattr(float_frame, op)(arr) - - def test_arith_flex_frame_corner(self, float_frame): - const_add = float_frame.add(1) - tm.assert_frame_equal(const_add, float_frame + 1) - - # corner cases - result = float_frame.add(float_frame[:0]) - tm.assert_frame_equal(result, float_frame * np.nan) - - result = float_frame[:0].add(float_frame) - tm.assert_frame_equal(result, float_frame * np.nan) - - with pytest.raises(NotImplementedError, match="fill_value"): - float_frame.add(float_frame.iloc[0], fill_value=3) - - with pytest.raises(NotImplementedError, match="fill_value"): - float_frame.add(float_frame.iloc[0], axis="index", fill_value=3) - - @pytest.mark.parametrize("op", ["add", "sub", "mul", "mod"]) - def test_arith_flex_series_ops(self, simple_frame, op): - # after arithmetic refactor, add truediv here - df = simple_frame - - row = df.xs("a") - col = df["two"] - f = getattr(df, op) - op = getattr(operator, op) - tm.assert_frame_equal(f(row), op(df, row)) - tm.assert_frame_equal(f(col, axis=0), op(df.T, col).T) - - def test_arith_flex_series(self, simple_frame): - df = simple_frame - - row = df.xs("a") - col = df["two"] - # special case for some reason - tm.assert_frame_equal(df.add(row, axis=None), df + row) - - # 
cases which will be refactored after big arithmetic refactor - tm.assert_frame_equal(df.div(row), df / row) - tm.assert_frame_equal(df.div(col, axis=0), (df.T / col).T) - - @pytest.mark.parametrize("dtype", ["int64", "float64"]) - def test_arith_flex_series_broadcasting(self, dtype): - # broadcasting issue in GH 7325 - df = DataFrame(np.arange(3 * 2).reshape((3, 2)), dtype=dtype) - expected = DataFrame([[np.nan, np.inf], [1.0, 1.5], [1.0, 1.25]]) - result = df.div(df[0], axis="index") - tm.assert_frame_equal(result, expected) - - def test_arith_flex_zero_len_raises(self): - # GH 19522 passing fill_value to frame flex arith methods should - # raise even in the zero-length special cases - ser_len0 = Series([], dtype=object) - df_len0 = DataFrame(columns=["A", "B"]) - df = DataFrame([[1, 2], [3, 4]], columns=["A", "B"]) - - with pytest.raises(NotImplementedError, match="fill_value"): - df.add(ser_len0, fill_value="E") - - with pytest.raises(NotImplementedError, match="fill_value"): - df_len0.sub(df["A"], axis=None, fill_value=3) - - def test_flex_add_scalar_fill_value(self): - # GH#12723 - dat = np.array([0, 1, np.nan, 3, 4, 5], dtype="float") - df = DataFrame({"foo": dat}, index=range(6)) - - exp = df.fillna(0).add(2) - res = df.add(2, fill_value=0) - tm.assert_frame_equal(res, exp) - - def test_sub_alignment_with_duplicate_index(self): - # GH#5185 dup aligning operations should work - df1 = DataFrame([1, 2, 3, 4, 5], index=[1, 2, 1, 2, 3]) - df2 = DataFrame([1, 2, 3], index=[1, 2, 3]) - expected = DataFrame([0, 2, 0, 2, 2], index=[1, 1, 2, 2, 3]) - result = df1.sub(df2) - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize("op", ["__add__", "__mul__", "__sub__", "__truediv__"]) - def test_arithmetic_with_duplicate_columns(self, op): - # operations - df = DataFrame({"A": np.arange(10), "B": np.random.default_rng(2).random(10)}) - expected = getattr(df, op)(df) - expected.columns = ["A", "A"] - df.columns = ["A", "A"] - result = getattr(df, op)(df) - 
tm.assert_frame_equal(result, expected) - str(result) - result.dtypes - - @pytest.mark.parametrize("level", [0, None]) - def test_broadcast_multiindex(self, level): - # GH34388 - df1 = DataFrame({"A": [0, 1, 2], "B": [1, 2, 3]}) - df1.columns = df1.columns.set_names("L1") - - df2 = DataFrame({("A", "C"): [0, 0, 0], ("A", "D"): [0, 0, 0]}) - df2.columns = df2.columns.set_names(["L1", "L2"]) - - result = df1.add(df2, level=level) - expected = DataFrame({("A", "C"): [0, 1, 2], ("A", "D"): [0, 1, 2]}) - expected.columns = expected.columns.set_names(["L1", "L2"]) - - tm.assert_frame_equal(result, expected) - - def test_frame_multiindex_operations(self): - # GH 43321 - df = DataFrame( - {2010: [1, 2, 3], 2020: [3, 4, 5]}, - index=MultiIndex.from_product( - [["a"], ["b"], [0, 1, 2]], names=["scen", "mod", "id"] - ), - ) - - series = Series( - [0.4], - index=MultiIndex.from_product([["b"], ["a"]], names=["mod", "scen"]), - ) - - expected = DataFrame( - {2010: [1.4, 2.4, 3.4], 2020: [3.4, 4.4, 5.4]}, - index=MultiIndex.from_product( - [["a"], ["b"], [0, 1, 2]], names=["scen", "mod", "id"] - ), - ) - result = df.add(series, axis=0) - - tm.assert_frame_equal(result, expected) - - def test_frame_multiindex_operations_series_index_to_frame_index(self): - # GH 43321 - df = DataFrame( - {2010: [1], 2020: [3]}, - index=MultiIndex.from_product([["a"], ["b"]], names=["scen", "mod"]), - ) - - series = Series( - [10.0, 20.0, 30.0], - index=MultiIndex.from_product( - [["a"], ["b"], [0, 1, 2]], names=["scen", "mod", "id"] - ), - ) - - expected = DataFrame( - {2010: [11.0, 21, 31.0], 2020: [13.0, 23.0, 33.0]}, - index=MultiIndex.from_product( - [["a"], ["b"], [0, 1, 2]], names=["scen", "mod", "id"] - ), - ) - result = df.add(series, axis=0) - - tm.assert_frame_equal(result, expected) - - def test_frame_multiindex_operations_no_align(self): - df = DataFrame( - {2010: [1, 2, 3], 2020: [3, 4, 5]}, - index=MultiIndex.from_product( - [["a"], ["b"], [0, 1, 2]], names=["scen", "mod", "id"] - ), 
- ) - - series = Series( - [0.4], - index=MultiIndex.from_product([["c"], ["a"]], names=["mod", "scen"]), - ) - - expected = DataFrame( - {2010: np.nan, 2020: np.nan}, - index=MultiIndex.from_tuples( - [ - ("a", "b", 0), - ("a", "b", 1), - ("a", "b", 2), - ("a", "c", np.nan), - ], - names=["scen", "mod", "id"], - ), - ) - result = df.add(series, axis=0) - - tm.assert_frame_equal(result, expected) - - def test_frame_multiindex_operations_part_align(self): - df = DataFrame( - {2010: [1, 2, 3], 2020: [3, 4, 5]}, - index=MultiIndex.from_tuples( - [ - ("a", "b", 0), - ("a", "b", 1), - ("a", "c", 2), - ], - names=["scen", "mod", "id"], - ), - ) - - series = Series( - [0.4], - index=MultiIndex.from_product([["b"], ["a"]], names=["mod", "scen"]), - ) - - expected = DataFrame( - {2010: [1.4, 2.4, np.nan], 2020: [3.4, 4.4, np.nan]}, - index=MultiIndex.from_tuples( - [ - ("a", "b", 0), - ("a", "b", 1), - ("a", "c", 2), - ], - names=["scen", "mod", "id"], - ), - ) - result = df.add(series, axis=0) - - tm.assert_frame_equal(result, expected) - - -class TestFrameArithmetic: - def test_td64_op_nat_casting(self): - # Make sure we don't accidentally treat timedelta64(NaT) as datetime64 - # when calling dispatch_to_series in DataFrame arithmetic - ser = Series(["NaT", "NaT"], dtype="timedelta64[ns]") - df = DataFrame([[1, 2], [3, 4]]) - - result = df * ser - expected = DataFrame({0: ser, 1: ser}) - tm.assert_frame_equal(result, expected) - - def test_df_add_2d_array_rowlike_broadcasts(self): - # GH#23000 - arr = np.arange(6).reshape(3, 2) - df = DataFrame(arr, columns=[True, False], index=["A", "B", "C"]) - - rowlike = arr[[1], :] # shape --> (1, ncols) - assert rowlike.shape == (1, df.shape[1]) - - expected = DataFrame( - [[2, 4], [4, 6], [6, 8]], - columns=df.columns, - index=df.index, - # specify dtype explicitly to avoid failing - # on 32bit builds - dtype=arr.dtype, - ) - result = df + rowlike - tm.assert_frame_equal(result, expected) - result = rowlike + df - 
tm.assert_frame_equal(result, expected) - - def test_df_add_2d_array_collike_broadcasts(self): - # GH#23000 - arr = np.arange(6).reshape(3, 2) - df = DataFrame(arr, columns=[True, False], index=["A", "B", "C"]) - - collike = arr[:, [1]] # shape --> (nrows, 1) - assert collike.shape == (df.shape[0], 1) - - expected = DataFrame( - [[1, 2], [5, 6], [9, 10]], - columns=df.columns, - index=df.index, - # specify dtype explicitly to avoid failing - # on 32bit builds - dtype=arr.dtype, - ) - result = df + collike - tm.assert_frame_equal(result, expected) - result = collike + df - tm.assert_frame_equal(result, expected) - - def test_df_arith_2d_array_rowlike_broadcasts( - self, request, all_arithmetic_operators, using_array_manager - ): - # GH#23000 - opname = all_arithmetic_operators - - if using_array_manager and opname in ("__rmod__", "__rfloordiv__"): - # TODO(ArrayManager) decide on dtypes - td.mark_array_manager_not_yet_implemented(request) - - arr = np.arange(6).reshape(3, 2) - df = DataFrame(arr, columns=[True, False], index=["A", "B", "C"]) - - rowlike = arr[[1], :] # shape --> (1, ncols) - assert rowlike.shape == (1, df.shape[1]) - - exvals = [ - getattr(df.loc["A"], opname)(rowlike.squeeze()), - getattr(df.loc["B"], opname)(rowlike.squeeze()), - getattr(df.loc["C"], opname)(rowlike.squeeze()), - ] - - expected = DataFrame(exvals, columns=df.columns, index=df.index) - - result = getattr(df, opname)(rowlike) - tm.assert_frame_equal(result, expected) - - def test_df_arith_2d_array_collike_broadcasts( - self, request, all_arithmetic_operators, using_array_manager - ): - # GH#23000 - opname = all_arithmetic_operators - - if using_array_manager and opname in ("__rmod__", "__rfloordiv__"): - # TODO(ArrayManager) decide on dtypes - td.mark_array_manager_not_yet_implemented(request) - - arr = np.arange(6).reshape(3, 2) - df = DataFrame(arr, columns=[True, False], index=["A", "B", "C"]) - - collike = arr[:, [1]] # shape --> (nrows, 1) - assert collike.shape == 
(df.shape[0], 1) - - exvals = { - True: getattr(df[True], opname)(collike.squeeze()), - False: getattr(df[False], opname)(collike.squeeze()), - } - - dtype = None - if opname in ["__rmod__", "__rfloordiv__"]: - # Series ops may return mixed int/float dtypes in cases where - # DataFrame op will return all-float. So we upcast `expected` - dtype = np.common_type(*(x.values for x in exvals.values())) - - expected = DataFrame(exvals, columns=df.columns, index=df.index, dtype=dtype) - - result = getattr(df, opname)(collike) - tm.assert_frame_equal(result, expected) - - def test_df_bool_mul_int(self): - # GH 22047, GH 22163 multiplication by 1 should result in int dtype, - # not object dtype - df = DataFrame([[False, True], [False, False]]) - result = df * 1 - - # On appveyor this comes back as np.int32 instead of np.int64, - # so we check dtype.kind instead of just dtype - kinds = result.dtypes.apply(lambda x: x.kind) - assert (kinds == "i").all() - - result = 1 * df - kinds = result.dtypes.apply(lambda x: x.kind) - assert (kinds == "i").all() - - def test_arith_mixed(self): - left = DataFrame({"A": ["a", "b", "c"], "B": [1, 2, 3]}) - - result = left + left - expected = DataFrame({"A": ["aa", "bb", "cc"], "B": [2, 4, 6]}) - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize("col", ["A", "B"]) - def test_arith_getitem_commute(self, all_arithmetic_functions, col): - df = DataFrame({"A": [1.1, 3.3], "B": [2.5, -3.9]}) - result = all_arithmetic_functions(df, 1)[col] - expected = all_arithmetic_functions(df[col], 1) - tm.assert_series_equal(result, expected) - - @pytest.mark.parametrize( - "values", [[1, 2], (1, 2), np.array([1, 2]), range(1, 3), deque([1, 2])] - ) - def test_arith_alignment_non_pandas_object(self, values): - # GH#17901 - df = DataFrame({"A": [1, 1], "B": [1, 1]}) - expected = DataFrame({"A": [2, 2], "B": [3, 3]}) - result = df + values - tm.assert_frame_equal(result, expected) - - def test_arith_non_pandas_object(self): - df = DataFrame( - 
np.arange(1, 10, dtype="f8").reshape(3, 3), - columns=["one", "two", "three"], - index=["a", "b", "c"], - ) - - val1 = df.xs("a").values - added = DataFrame(df.values + val1, index=df.index, columns=df.columns) - tm.assert_frame_equal(df + val1, added) - - added = DataFrame((df.values.T + val1).T, index=df.index, columns=df.columns) - tm.assert_frame_equal(df.add(val1, axis=0), added) - - val2 = list(df["two"]) - - added = DataFrame(df.values + val2, index=df.index, columns=df.columns) - tm.assert_frame_equal(df + val2, added) - - added = DataFrame((df.values.T + val2).T, index=df.index, columns=df.columns) - tm.assert_frame_equal(df.add(val2, axis="index"), added) - - val3 = np.random.default_rng(2).random(df.shape) - added = DataFrame(df.values + val3, index=df.index, columns=df.columns) - tm.assert_frame_equal(df.add(val3), added) - - def test_operations_with_interval_categories_index(self, all_arithmetic_operators): - # GH#27415 - op = all_arithmetic_operators - ind = pd.CategoricalIndex(pd.interval_range(start=0.0, end=2.0)) - data = [1, 2] - df = DataFrame([data], columns=ind) - num = 10 - result = getattr(df, op)(num) - expected = DataFrame([[getattr(n, op)(num) for n in data]], columns=ind) - tm.assert_frame_equal(result, expected) - - def test_frame_with_frame_reindex(self): - # GH#31623 - df = DataFrame( - { - "foo": [pd.Timestamp("2019"), pd.Timestamp("2020")], - "bar": [pd.Timestamp("2018"), pd.Timestamp("2021")], - }, - columns=["foo", "bar"], - ) - df2 = df[["foo"]] - - result = df - df2 - - expected = DataFrame( - {"foo": [pd.Timedelta(0), pd.Timedelta(0)], "bar": [np.nan, np.nan]}, - columns=["bar", "foo"], - ) - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize( - "value, dtype", - [ - (1, "i8"), - (1.0, "f8"), - (2**63, "f8"), - (1j, "complex128"), - (2**63, "complex128"), - (True, "bool"), - (np.timedelta64(20, "ns"), " b - tm.assert_frame_equal(result, expected) - - result = df.values > b - tm.assert_numpy_array_equal(result, 
expected.values) - - msg1d = "Unable to coerce to Series, length must be 2: given 3" - msg2d = "Unable to coerce to DataFrame, shape must be" - msg2db = "operands could not be broadcast together with shapes" - with pytest.raises(ValueError, match=msg1d): - # wrong shape - df > lst - - with pytest.raises(ValueError, match=msg1d): - # wrong shape - df > tup - - # broadcasts like ndarray (GH#23000) - result = df > b_r - tm.assert_frame_equal(result, expected) - - result = df.values > b_r - tm.assert_numpy_array_equal(result, expected.values) - - with pytest.raises(ValueError, match=msg2d): - df > b_c - - with pytest.raises(ValueError, match=msg2db): - df.values > b_c - - # == - expected = DataFrame([[False, False], [True, False], [False, False]]) - result = df == b - tm.assert_frame_equal(result, expected) - - with pytest.raises(ValueError, match=msg1d): - df == lst - - with pytest.raises(ValueError, match=msg1d): - df == tup - - # broadcasts like ndarray (GH#23000) - result = df == b_r - tm.assert_frame_equal(result, expected) - - result = df.values == b_r - tm.assert_numpy_array_equal(result, expected.values) - - with pytest.raises(ValueError, match=msg2d): - df == b_c - - assert df.values.shape != b_c.shape - - # with alignment - df = DataFrame( - np.arange(6).reshape((3, 2)), columns=list("AB"), index=list("abc") - ) - expected.index = df.index - expected.columns = df.columns - - with pytest.raises(ValueError, match=msg1d): - df == lst - - with pytest.raises(ValueError, match=msg1d): - df == tup - - def test_inplace_ops_alignment(self): - # inplace ops / ops alignment - # GH 8511 - - columns = list("abcdefg") - X_orig = DataFrame( - np.arange(10 * len(columns)).reshape(-1, len(columns)), - columns=columns, - index=range(10), - ) - Z = 100 * X_orig.iloc[:, 1:-1].copy() - block1 = list("bedcf") - subs = list("bcdef") - - # add - X = X_orig.copy() - result1 = (X[block1] + Z).reindex(columns=subs) - - X[block1] += Z - result2 = X.reindex(columns=subs) - - X = 
X_orig.copy() - result3 = (X[block1] + Z[block1]).reindex(columns=subs) - - X[block1] += Z[block1] - result4 = X.reindex(columns=subs) - - tm.assert_frame_equal(result1, result2) - tm.assert_frame_equal(result1, result3) - tm.assert_frame_equal(result1, result4) - - # sub - X = X_orig.copy() - result1 = (X[block1] - Z).reindex(columns=subs) - - X[block1] -= Z - result2 = X.reindex(columns=subs) - - X = X_orig.copy() - result3 = (X[block1] - Z[block1]).reindex(columns=subs) - - X[block1] -= Z[block1] - result4 = X.reindex(columns=subs) - - tm.assert_frame_equal(result1, result2) - tm.assert_frame_equal(result1, result3) - tm.assert_frame_equal(result1, result4) - - def test_inplace_ops_identity(self): - # GH 5104 - # make sure that we are actually changing the object - s_orig = Series([1, 2, 3]) - df_orig = DataFrame( - np.random.default_rng(2).integers(0, 5, size=10).reshape(-1, 5) - ) - - # no dtype change - s = s_orig.copy() - s2 = s - s += 1 - tm.assert_series_equal(s, s2) - tm.assert_series_equal(s_orig + 1, s) - assert s is s2 - assert s._mgr is s2._mgr - - df = df_orig.copy() - df2 = df - df += 1 - tm.assert_frame_equal(df, df2) - tm.assert_frame_equal(df_orig + 1, df) - assert df is df2 - assert df._mgr is df2._mgr - - # dtype change - s = s_orig.copy() - s2 = s - s += 1.5 - tm.assert_series_equal(s, s2) - tm.assert_series_equal(s_orig + 1.5, s) - - df = df_orig.copy() - df2 = df - df += 1.5 - tm.assert_frame_equal(df, df2) - tm.assert_frame_equal(df_orig + 1.5, df) - assert df is df2 - assert df._mgr is df2._mgr - - # mixed dtype - arr = np.random.default_rng(2).integers(0, 10, size=5) - df_orig = DataFrame({"A": arr.copy(), "B": "foo"}) - df = df_orig.copy() - df2 = df - df["A"] += 1 - expected = DataFrame({"A": arr.copy() + 1, "B": "foo"}) - tm.assert_frame_equal(df, expected) - tm.assert_frame_equal(df2, expected) - assert df._mgr is df2._mgr - - df = df_orig.copy() - df2 = df - df["A"] += 1.5 - expected = DataFrame({"A": arr.copy() + 1.5, "B": "foo"}) - 
tm.assert_frame_equal(df, expected) - tm.assert_frame_equal(df2, expected) - assert df._mgr is df2._mgr - - @pytest.mark.parametrize( - "op", - [ - "add", - "and", - pytest.param( - "div", - marks=pytest.mark.xfail( - raises=AttributeError, reason="__idiv__ not implemented" - ), - ), - "floordiv", - "mod", - "mul", - "or", - "pow", - "sub", - "truediv", - "xor", - ], - ) - def test_inplace_ops_identity2(self, op): - df = DataFrame({"a": [1.0, 2.0, 3.0], "b": [1, 2, 3]}) - - operand = 2 - if op in ("and", "or", "xor"): - # cannot use floats for boolean ops - df["a"] = [True, False, True] - - df_copy = df.copy() - iop = f"__i{op}__" - op = f"__{op}__" - - # no id change and value is correct - getattr(df, iop)(operand) - expected = getattr(df_copy, op)(operand) - tm.assert_frame_equal(df, expected) - expected = id(df) - assert id(df) == expected - - @pytest.mark.parametrize( - "val", - [ - [1, 2, 3], - (1, 2, 3), - np.array([1, 2, 3], dtype=np.int64), - range(1, 4), - ], - ) - def test_alignment_non_pandas(self, val): - index = ["A", "B", "C"] - columns = ["X", "Y", "Z"] - df = DataFrame( - np.random.default_rng(2).standard_normal((3, 3)), - index=index, - columns=columns, - ) - - align = DataFrame._align_for_op - - expected = DataFrame({"X": val, "Y": val, "Z": val}, index=df.index) - tm.assert_frame_equal(align(df, val, axis=0)[1], expected) - - expected = DataFrame( - {"X": [1, 1, 1], "Y": [2, 2, 2], "Z": [3, 3, 3]}, index=df.index - ) - tm.assert_frame_equal(align(df, val, axis=1)[1], expected) - - @pytest.mark.parametrize("val", [[1, 2], (1, 2), np.array([1, 2]), range(1, 3)]) - def test_alignment_non_pandas_length_mismatch(self, val): - index = ["A", "B", "C"] - columns = ["X", "Y", "Z"] - df = DataFrame( - np.random.default_rng(2).standard_normal((3, 3)), - index=index, - columns=columns, - ) - - align = DataFrame._align_for_op - # length mismatch - msg = "Unable to coerce to Series, length must be 3: given 2" - with pytest.raises(ValueError, match=msg): - 
align(df, val, axis=0) - - with pytest.raises(ValueError, match=msg): - align(df, val, axis=1) - - def test_alignment_non_pandas_index_columns(self): - index = ["A", "B", "C"] - columns = ["X", "Y", "Z"] - df = DataFrame( - np.random.default_rng(2).standard_normal((3, 3)), - index=index, - columns=columns, - ) - - align = DataFrame._align_for_op - val = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) - tm.assert_frame_equal( - align(df, val, axis=0)[1], - DataFrame(val, index=df.index, columns=df.columns), - ) - tm.assert_frame_equal( - align(df, val, axis=1)[1], - DataFrame(val, index=df.index, columns=df.columns), - ) - - # shape mismatch - msg = "Unable to coerce to DataFrame, shape must be" - val = np.array([[1, 2, 3], [4, 5, 6]]) - with pytest.raises(ValueError, match=msg): - align(df, val, axis=0) - - with pytest.raises(ValueError, match=msg): - align(df, val, axis=1) - - val = np.zeros((3, 3, 3)) - msg = re.escape( - "Unable to coerce to Series/DataFrame, dimension must be <= 2: (3, 3, 3)" - ) - with pytest.raises(ValueError, match=msg): - align(df, val, axis=0) - with pytest.raises(ValueError, match=msg): - align(df, val, axis=1) - - def test_no_warning(self, all_arithmetic_operators): - df = DataFrame({"A": [0.0, 0.0], "B": [0.0, None]}) - b = df["B"] - with tm.assert_produces_warning(None): - getattr(df, all_arithmetic_operators)(b) - - def test_dunder_methods_binary(self, all_arithmetic_operators): - # GH#??? 
frame.__foo__ should only accept one argument - df = DataFrame({"A": [0.0, 0.0], "B": [0.0, None]}) - b = df["B"] - with pytest.raises(TypeError, match="takes 2 positional arguments"): - getattr(df, all_arithmetic_operators)(b, 0) - - def test_align_int_fill_bug(self): - # GH#910 - X = np.arange(10 * 10, dtype="float64").reshape(10, 10) - Y = np.ones((10, 1), dtype=int) - - df1 = DataFrame(X) - df1["0.X"] = Y.squeeze() - - df2 = df1.astype(float) - - result = df1 - df1.mean() - expected = df2 - df2.mean() - tm.assert_frame_equal(result, expected) - - -def test_pow_with_realignment(): - # GH#32685 pow has special semantics for operating with null values - left = DataFrame({"A": [0, 1, 2]}) - right = DataFrame(index=[0, 1, 2]) - - result = left**right - expected = DataFrame({"A": [np.nan, 1.0, np.nan]}) - tm.assert_frame_equal(result, expected) - - -# TODO: move to tests.arithmetic and parametrize -def test_pow_nan_with_zero(): - left = DataFrame({"A": [np.nan, np.nan, np.nan]}) - right = DataFrame({"A": [0, 0, 0]}) - - expected = DataFrame({"A": [1.0, 1.0, 1.0]}) - - result = left**right - tm.assert_frame_equal(result, expected) - - result = left["A"] ** right["A"] - tm.assert_series_equal(result, expected["A"]) - - -def test_dataframe_series_extension_dtypes(): - # https://github.com/pandas-dev/pandas/issues/34311 - df = DataFrame( - np.random.default_rng(2).integers(0, 100, (10, 3)), columns=["a", "b", "c"] - ) - ser = Series([1, 2, 3], index=["a", "b", "c"]) - - expected = df.to_numpy("int64") + ser.to_numpy("int64").reshape(-1, 3) - expected = DataFrame(expected, columns=df.columns, dtype="Int64") - - df_ea = df.astype("Int64") - result = df_ea + ser - tm.assert_frame_equal(result, expected) - result = df_ea + ser.astype("Int64") - tm.assert_frame_equal(result, expected) - - -def test_dataframe_blockwise_slicelike(): - # GH#34367 - arr = np.random.default_rng(2).integers(0, 1000, (100, 10)) - df1 = DataFrame(arr) - # Explicit cast to float to avoid implicit cast 
when setting nan - df2 = df1.copy().astype({1: "float", 3: "float", 7: "float"}) - df2.iloc[0, [1, 3, 7]] = np.nan - - # Explicit cast to float to avoid implicit cast when setting nan - df3 = df1.copy().astype({5: "float"}) - df3.iloc[0, [5]] = np.nan - - # Explicit cast to float to avoid implicit cast when setting nan - df4 = df1.copy().astype({2: "float", 3: "float", 4: "float"}) - df4.iloc[0, np.arange(2, 5)] = np.nan - # Explicit cast to float to avoid implicit cast when setting nan - df5 = df1.copy().astype({4: "float", 5: "float", 6: "float"}) - df5.iloc[0, np.arange(4, 7)] = np.nan - - for left, right in [(df1, df2), (df2, df3), (df4, df5)]: - res = left + right - - expected = DataFrame({i: left[i] + right[i] for i in left.columns}) - tm.assert_frame_equal(res, expected) - - -@pytest.mark.parametrize( - "df, col_dtype", - [ - (DataFrame([[1.0, 2.0], [4.0, 5.0]], columns=list("ab")), "float64"), - (DataFrame([[1.0, "b"], [4.0, "b"]], columns=list("ab")), "object"), - ], -) -def test_dataframe_operation_with_non_numeric_types(df, col_dtype): - # GH #22663 - expected = DataFrame([[0.0, np.nan], [3.0, np.nan]], columns=list("ab")) - expected = expected.astype({"b": col_dtype}) - result = df + Series([-1.0], index=list("a")) - tm.assert_frame_equal(result, expected) - - -def test_arith_reindex_with_duplicates(): - # https://github.com/pandas-dev/pandas/issues/35194 - df1 = DataFrame(data=[[0]], columns=["second"]) - df2 = DataFrame(data=[[0, 0, 0]], columns=["first", "second", "second"]) - result = df1 + df2 - expected = DataFrame([[np.nan, 0, 0]], columns=["first", "second", "second"]) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize("to_add", [[Series([1, 1])], [Series([1, 1]), Series([1, 1])]]) -def test_arith_list_of_arraylike_raise(to_add): - # GH 36702. 
Raise when trying to add list of array-like to DataFrame - df = DataFrame({"x": [1, 2], "y": [1, 2]}) - - msg = f"Unable to coerce list of {type(to_add[0])} to Series/DataFrame" - with pytest.raises(ValueError, match=msg): - df + to_add - with pytest.raises(ValueError, match=msg): - to_add + df - - -def test_inplace_arithmetic_series_update(using_copy_on_write): - # https://github.com/pandas-dev/pandas/issues/36373 - df = DataFrame({"A": [1, 2, 3]}) - df_orig = df.copy() - series = df["A"] - vals = series._values - - series += 1 - if using_copy_on_write: - assert series._values is not vals - tm.assert_frame_equal(df, df_orig) - else: - assert series._values is vals - - expected = DataFrame({"A": [2, 3, 4]}) - tm.assert_frame_equal(df, expected) - - -def test_arithmetic_multiindex_align(): - """ - Regression test for: https://github.com/pandas-dev/pandas/issues/33765 - """ - df1 = DataFrame( - [[1]], - index=["a"], - columns=MultiIndex.from_product([[0], [1]], names=["a", "b"]), - ) - df2 = DataFrame([[1]], index=["a"], columns=Index([0], name="a")) - expected = DataFrame( - [[0]], - index=["a"], - columns=MultiIndex.from_product([[0], [1]], names=["a", "b"]), - ) - result = df1 - df2 - tm.assert_frame_equal(result, expected) - - -def test_bool_frame_mult_float(): - # GH 18549 - df = DataFrame(True, list("ab"), list("cd")) - result = df * 1.0 - expected = DataFrame(np.ones((2, 2)), list("ab"), list("cd")) - tm.assert_frame_equal(result, expected) - - -def test_frame_sub_nullable_int(any_int_ea_dtype): - # GH 32822 - series1 = Series([1, 2, None], dtype=any_int_ea_dtype) - series2 = Series([1, 2, 3], dtype=any_int_ea_dtype) - expected = DataFrame([0, 0, None], dtype=any_int_ea_dtype) - result = series1.to_frame() - series2.to_frame() - tm.assert_frame_equal(result, expected) - - -@pytest.mark.filterwarnings( - "ignore:Passing a BlockManager|Passing a SingleBlockManager:DeprecationWarning" -) -def test_frame_op_subclass_nonclass_constructor(): - # GH#43201 
subclass._constructor is a function, not the subclass itself - - class SubclassedSeries(Series): - @property - def _constructor(self): - return SubclassedSeries - - @property - def _constructor_expanddim(self): - return SubclassedDataFrame - - class SubclassedDataFrame(DataFrame): - _metadata = ["my_extra_data"] - - def __init__(self, my_extra_data, *args, **kwargs) -> None: - self.my_extra_data = my_extra_data - super().__init__(*args, **kwargs) - - @property - def _constructor(self): - return functools.partial(type(self), self.my_extra_data) - - @property - def _constructor_sliced(self): - return SubclassedSeries - - sdf = SubclassedDataFrame("some_data", {"A": [1, 2, 3], "B": [4, 5, 6]}) - result = sdf * 2 - expected = SubclassedDataFrame("some_data", {"A": [2, 4, 6], "B": [8, 10, 12]}) - tm.assert_frame_equal(result, expected) - - result = sdf + sdf - tm.assert_frame_equal(result, expected) - - -def test_enum_column_equality(): - Cols = Enum("Cols", "col1 col2") - - q1 = DataFrame({Cols.col1: [1, 2, 3]}) - q2 = DataFrame({Cols.col1: [1, 2, 3]}) - - result = q1[Cols.col1] == q2[Cols.col1] - expected = Series([True, True, True], name=Cols.col1) - - tm.assert_series_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_to_numpy.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_to_numpy.py deleted file mode 100644 index 5fe3e19b0a20bbfcf7caafa811af133c5b07fed5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_to_numpy.py +++ /dev/null @@ -1,25 +0,0 @@ -import numpy as np -import pytest - -from pandas import ( - NA, - Series, -) -import pandas._testing as tm - - -@pytest.mark.parametrize("dtype", ["int64", "float64"]) -def test_to_numpy_na_value(dtype): - # GH#48951 - ser = Series([1, 2, NA, 4]) - result = ser.to_numpy(dtype=dtype, na_value=0) - 
expected = np.array([1, 2, 0, 4], dtype=dtype) - tm.assert_numpy_array_equal(result, expected) - - -def test_to_numpy_cast_before_setting_na(): - # GH#50600 - ser = Series([1]) - result = ser.to_numpy(dtype=np.float64, na_value=np.nan) - expected = np.array([1.0]) - tm.assert_numpy_array_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/requests/help.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/requests/help.py deleted file mode 100644 index 745f0d7b346a1aac57197ab99d3d7d8b207b890a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/requests/help.py +++ /dev/null @@ -1,132 +0,0 @@ -"""Module containing bug report helper(s).""" -from __future__ import print_function - -import json -import platform -import sys -import ssl - -from pip._vendor import idna -from pip._vendor import urllib3 - -from . import __version__ as requests_version - -charset_normalizer = None - -try: - from pip._vendor import chardet -except ImportError: - chardet = None - -try: - from pip._vendor.urllib3.contrib import pyopenssl -except ImportError: - pyopenssl = None - OpenSSL = None - cryptography = None -else: - import OpenSSL - import cryptography - - -def _implementation(): - """Return a dict with the Python implementation and version. - - Provide both the name and the version of the Python implementation - currently running. For example, on CPython 2.7.5 it will return - {'name': 'CPython', 'version': '2.7.5'}. - - This function works best on CPython and PyPy: in particular, it probably - doesn't work for Jython or IronPython. Future investigation should be done - to work out the correct shape of the code for those platforms. 
- """ - implementation = platform.python_implementation() - - if implementation == 'CPython': - implementation_version = platform.python_version() - elif implementation == 'PyPy': - implementation_version = '%s.%s.%s' % (sys.pypy_version_info.major, - sys.pypy_version_info.minor, - sys.pypy_version_info.micro) - if sys.pypy_version_info.releaselevel != 'final': - implementation_version = ''.join([ - implementation_version, sys.pypy_version_info.releaselevel - ]) - elif implementation == 'Jython': - implementation_version = platform.python_version() # Complete Guess - elif implementation == 'IronPython': - implementation_version = platform.python_version() # Complete Guess - else: - implementation_version = 'Unknown' - - return {'name': implementation, 'version': implementation_version} - - -def info(): - """Generate information for a bug report.""" - try: - platform_info = { - 'system': platform.system(), - 'release': platform.release(), - } - except IOError: - platform_info = { - 'system': 'Unknown', - 'release': 'Unknown', - } - - implementation_info = _implementation() - urllib3_info = {'version': urllib3.__version__} - charset_normalizer_info = {'version': None} - chardet_info = {'version': None} - if charset_normalizer: - charset_normalizer_info = {'version': charset_normalizer.__version__} - if chardet: - chardet_info = {'version': chardet.__version__} - - pyopenssl_info = { - 'version': None, - 'openssl_version': '', - } - if OpenSSL: - pyopenssl_info = { - 'version': OpenSSL.__version__, - 'openssl_version': '%x' % OpenSSL.SSL.OPENSSL_VERSION_NUMBER, - } - cryptography_info = { - 'version': getattr(cryptography, '__version__', ''), - } - idna_info = { - 'version': getattr(idna, '__version__', ''), - } - - system_ssl = ssl.OPENSSL_VERSION_NUMBER - system_ssl_info = { - 'version': '%x' % system_ssl if system_ssl is not None else '' - } - - return { - 'platform': platform_info, - 'implementation': implementation_info, - 'system_ssl': system_ssl_info, - 
'using_pyopenssl': pyopenssl is not None, - 'using_charset_normalizer': chardet is None, - 'pyOpenSSL': pyopenssl_info, - 'urllib3': urllib3_info, - 'chardet': chardet_info, - 'charset_normalizer': charset_normalizer_info, - 'cryptography': cryptography_info, - 'idna': idna_info, - 'requests': { - 'version': requests_version, - }, - } - - -def main(): - """Pretty-print the bug information as JSON.""" - print(json.dumps(info(), sort_keys=True, indent=2)) - - -if __name__ == '__main__': - main() diff --git a/spaces/pyodide-demo/self-hosted/sharedlib-test-py.js b/spaces/pyodide-demo/self-hosted/sharedlib-test-py.js deleted file mode 100644 index 47a63946e062f3ddb40e52b43d5bbbdfa7c253ab..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/sharedlib-test-py.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="sharedlib-test-py.data";var REMOTE_PACKAGE_BASE="sharedlib-test-py.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var 
PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... ("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new 
Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","sharedlib_test_py-1.0-py3.9.egg-info",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:870,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0],sizes:[870],successes:[1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 ?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_sharedlib-test-py.data")}Module["addRunDependency"]("datafile_sharedlib-test-py.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/sharedlib_test.so",start:0,end:711,audio:0},{filename:"/lib/python3.9/site-packages/sharedlib_test_py-1.0-py3.9.egg-info/PKG-INFO",start:711,end:934,audio:0},{filename:"/lib/python3.9/site-packages/sharedlib_test_py-1.0-py3.9.egg-info/dependency_links.txt",start:934,end:935,audio:0},{filename:"/lib/python3.9/site-packages/sharedlib_test_py-1.0-py3.9.egg-info/top_level.txt",start:935,end:950,audio:0},{filename:"/lib/python3.9/site-packages/sharedlib_test_py-1.0-py3.9.egg-info/SOURCES.txt",start:950,end:1139,audio:0}],remote_package_size:4966,package_uuid:"c6b10bc7-8f48-436d-94f3-b2b4afcb6a78"})})(); \ No newline at end of file diff --git 
a/spaces/qdd319/ChuanhuChatGPT/chatgpt - macOS.command b/spaces/qdd319/ChuanhuChatGPT/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/qdd319/ChuanhuChatGPT/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... -cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. \ No newline at end of file diff --git "a/spaces/qingxu98/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" "b/spaces/qingxu98/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" deleted file mode 100644 index 213d8bb26446683fdb87e06e20004dc3861eb087..0000000000000000000000000000000000000000 --- "a/spaces/qingxu98/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" +++ /dev/null @@ -1,162 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -from toolbox import write_history_to_file, promote_file_to_downloadzone - -fast_debug = False - -def readPdf(pdfPath): - """ - 读取pdf文件,返回文本内容 - """ - import pdfminer - from pdfminer.pdfparser import PDFParser - from pdfminer.pdfdocument import PDFDocument - from pdfminer.pdfpage import PDFPage, PDFTextExtractionNotAllowed - from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter - from pdfminer.pdfdevice import PDFDevice - from pdfminer.layout import LAParams - from pdfminer.converter import PDFPageAggregator - - fp = open(pdfPath, 'rb') - - # Create a PDF parser object associated with the file 
object - parser = PDFParser(fp) - - # Create a PDF document object that stores the document structure. - # Password for initialization as 2nd parameter - document = PDFDocument(parser) - # Check if the document allows text extraction. If not, abort. - if not document.is_extractable: - raise PDFTextExtractionNotAllowed - - # Create a PDF resource manager object that stores shared resources. - rsrcmgr = PDFResourceManager() - - # Create a PDF device object. - # device = PDFDevice(rsrcmgr) - - # BEGIN LAYOUT ANALYSIS. - # Set parameters for analysis. - laparams = LAParams( - char_margin=10.0, - line_margin=0.2, - boxes_flow=0.2, - all_texts=False, - ) - # Create a PDF page aggregator object. - device = PDFPageAggregator(rsrcmgr, laparams=laparams) - # Create a PDF interpreter object. - interpreter = PDFPageInterpreter(rsrcmgr, device) - - # loop over all pages in the document - outTextList = [] - for page in PDFPage.create_pages(document): - # read the page into a layout object - interpreter.process_page(page) - layout = device.get_result() - for obj in layout._objs: - if isinstance(obj, pdfminer.layout.LTTextBoxHorizontal): - # print(obj.get_text()) - outTextList.append(obj.get_text()) - - return outTextList - - -def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, glob, os - from bs4 import BeautifulSoup - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - if ".tex" in fp: - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - if ".pdf" in fp.lower(): - file_content = readPdf(fp) - file_content = BeautifulSoup(''.join(file_content), features="lxml").body.text.encode('gbk', 'ignore').decode('gbk') - - prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else "" - i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 
请对下面的文章片段做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - sys_prompt="总结文章。" - ) # 带超时倒计时 - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=history, - sys_prompt="总结文章。" - ) # 带超时倒计时 - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - res = write_history_to_file(history) - promote_file_to_downloadzone(res, chatbot=chatbot) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - - -@CatchException -def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结PDF文档,此版本使用pdfminer插件,带token约简功能。函数插件贡献者: Euclid-Jie。"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import pdfminer, bs4 - 
except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或pdf文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Autodesk Revit 2019.2 With Add-ons ((TOP)).md b/spaces/quidiaMuxgu/Expedit-SAM/Autodesk Revit 2019.2 With Add-ons ((TOP)).md deleted file mode 100644 index 097d8fb5596e7b7853afb642d8703068e633f2cc..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Autodesk Revit 2019.2 With Add-ons ((TOP)).md +++ /dev/null @@ -1,6 +0,0 @@ -

    Autodesk Revit 2019.2 with Add-ons





    -
    -
    -

    diff --git a/spaces/r3gm/vscode/start_server.sh b/spaces/r3gm/vscode/start_server.sh deleted file mode 100644 index 5257809d2ea2bcb6ccb3b55473da34eb13982a36..0000000000000000000000000000000000000000 --- a/spaces/r3gm/vscode/start_server.sh +++ /dev/null @@ -1,5 +0,0 @@ -#!/bin/bash - -echo "Starting VSCode Server..." - -exec /app/openvscode-server/bin/openvscode-server --host 0.0.0.0 --port 7860 --without-connection-token \"${@}\" -- diff --git a/spaces/radames/edit-video-by-editing-text/README.md b/spaces/radames/edit-video-by-editing-text/README.md deleted file mode 100644 index dd3af7f307d43e544f464f465137b772bf0e1e6f..0000000000000000000000000000000000000000 --- a/spaces/radames/edit-video-by-editing-text/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Edit Video By Editing Text -emoji: ✍️🎥📄 -colorFrom: red -colorTo: gray -sdk: gradio -app_file: app.py -sdk_version: 3.35.2 -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/raedeXanto/academic-chatgpt-beta/38 Dictionnaires et Recueils de Correspondance - Clubic[1].md b/spaces/raedeXanto/academic-chatgpt-beta/38 Dictionnaires et Recueils de Correspondance - Clubic[1].md deleted file mode 100644 index f9aaebb97a3ae017a3ceb9dc36f161aefcdbb223..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/38 Dictionnaires et Recueils de Correspondance - Clubic[1].md +++ /dev/null @@ -1,200 +0,0 @@ - -

    What is 38 dictionnaires et recueils de correspondance?

    -

    If you are looking for a comprehensive and easy-to-use software that can help you with writing letters, improving spelling, finding words, solving crossword puzzles, or playing word games, then you might want to check out 38 dictionnaires et recueils de correspondance.

    -




    -

    This software is a collection of linguistic tools that can assist you with various tasks related to language and communication. It includes 13 dictionaries and more than 7,000 templates of letters and emails that cover different situations and purposes. You can also access various resources and features that can help you learn new words, improve your vocabulary, correct your spelling errors, find synonyms, paronyms, anagrams, rhymes, etc.

    -

Whether you are a student, a professional, a writer, or simply a lover of words, you will find this software useful and enjoyable, for work and leisure alike.

    -

    To learn more about this software, you can visit its official website at https://www.microapp.com/logiciel_38_dictionnaires_et_recueils_de_correspondance_1080.html. You can also download a free trial version or buy the full version from there.

    -

    Why use 38 dictionnaires et recueils de correspondance?

    -

    There are many reasons why you might want to use this software. Here are some of them:

    -

    -
      -
    • To find definitions, synonyms, paronyms, anagrams and rhymes of more than 500,000 words and expressions. You can use this feature to expand your vocabulary, enrich your writing style, avoid repetitions, or simply satisfy your curiosity.
    • -
    • To access more than 7,000 templates of letters and emails for various situations. You can use this feature to save time, avoid mistakes, or get inspiration when writing a letter or an email. You can choose from different categories such as personal letters, professional letters, administrative letters, etc.
    • -
    • To solve crossword puzzles and play word games. You can use this feature to have fun, challenge yourself, or improve your mental skills. You can choose from different types of games such as anagrams, hangman, word search, etc.
    • -
    • To learn new words and improve your vocabulary. You can use this feature to discover new words every day, learn their meanings, examples, and pronunciations, or test your knowledge with quizzes.
    • -
    -

    These are just some of the benefits that you can get from using this software. There are many more that you can explore by yourself once you download or buy it.

    -

    How to install 38 dictionnaires et recueils de correspondance?

    -

    Installing this software is easy and fast. You just need to follow these steps:

    -
      -
    1. Download the setup file from https://www.microapp.com/logiciel_38_dictionnaires_et_recueils_de_correspondance_1080.html. You can choose between the free trial version or the full version.
    2. -
    3. Run the setup file and follow the instructions on the screen. You will need to accept the license agreement, choose the installation folder, and create a desktop shortcut.
    4. -
    5. Launch the software by double-clicking on the desktop shortcut or by selecting it from the start menu. You will see the main interface of the software with different tabs and buttons.
    6. -
    -

    The system requirements for this software are:

    -
      -
    • Windows XP/Vista/7/8/10
    • -
    • Pentium III processor or higher
    • -
    • 256 MB RAM or higher
    • -
    • 800 MB free disk space or higher
    • -
    • CD-ROM drive
    • -
    • Internet connection (optional)
    • -
    -

    If you encounter any issues during or after the installation process, you can refer to https://www.microapp.com/support.html for help.

    -

    What is a crack and why avoid it?

    -

A crack is a program that modifies or bypasses the security features of a software product, such as serial numbers, activation codes, or copy-protection mechanisms. Some people use cracks to access paid software for free, or to extend trial periods indefinitely.

    -

However, using a crack is not only illegal, but also risky and disadvantageous. Here are some of the reasons why you should avoid using a crack:

    -
      -
    • It can harm your computer. A crack may contain viruses, malware, spyware, or other malicious programs that can infect your system, damage your files, steal your data, or compromise your security.
    • -
    • It can harm your business. A crack may cause errors, bugs, crashes, or compatibility issues that can affect the performance and functionality of the software. This can result in lost productivity, wasted time, frustrated customers, or missed opportunities.
    • -
    • It can harm your reputation. A crack may expose you to legal risks, such as lawsuits, fines, or penalties for violating the intellectual property rights of the software developer. This can damage your credibility, trustworthiness, and image in the market.
    • -
    -

    On the other hand, using a legitimate version of 38 dictionnaires et recueils de correspondance has many advantages, such as:

    -
      -
    • It ensures quality and reliability. You can enjoy the full features and benefits of the software without any glitches, errors, or interruptions. You can also get regular updates, improvements, and bug fixes from the software developer.
    • -
    • It provides support and assistance. You can access customer service, technical support, tutorials, guides, and FAQs from the software developer. You can also get feedback, suggestions, and tips from other users and experts.
    • -
    • It respects and supports the software developer. You can show your appreciation and recognition for the hard work and creativity of the software developer. You can also contribute to their innovation and development by providing feedback and suggestions.
    • -
    -

    Therefore, it is highly recommended to use a legitimate version of 38 dictionnaires et recueils de correspondance rather than a crack. You can download or buy it from https://www.microapp.com/logiciel_38_dictionnaires_et_recueils_de_correspondance_1080.html.

    -

    How to use 38 dictionnaires et recueils de correspondance?

    -

    Using this software is simple and intuitive. You just need to follow these steps:

    -
      -
    1. Launch the software by double-clicking on the desktop shortcut or by selecting it from the start menu. You will see the main interface of the software with different tabs and buttons.
    2. -
    3. Select the tab that corresponds to the function or tool that you want to use. For example, if you want to write a letter or an email, select the "Correspondances" tab. If you want to find a word or an expression, select the "Dictionnaires" tab.
    4. -
    5. Enter your query or input in the appropriate field or box. For example, if you want to find a definition of a word, enter it in the "Rechercher" field. If you want to choose a template for a letter or an email, select it from the list.
    6. -
    7. Click on the button that corresponds to the action that you want to perform. For example, if you want to see the definition of a word, click on "Définition". If you want to customize a template for a letter or an email, click on "Personnaliser".
    8. -
    9. View the results or output in the appropriate area or window. For example, if you want to see the definition of a word, you will see it in the "Définition" area. If you want to customize a template for a letter or an email, you will see it in a new window.
    10. -
11. Edit, save, print, or send your results as needed. For example, use the toolbar to change the font, color, or alignment of a template; the "Enregistrer" button to save it on your computer; the "Imprimer" button to print it on paper; or the "Envoyer" button to send it by email.
    12. -
    -

    You can also use other features and options available in the software, such as:

    -
      -
    • The "Aide" button to access help and support resources.
    • -
    • The "Options" button to change settings and preferences.
    • -
    • The "Quitter" button to exit and close the software.
    • -
    -

    How to write a letter or an email with 38 dictionnaires et recueils de correspondance?

    Writing a letter or an email with 38 dictionnaires et recueils de correspondance is easy and convenient. You can follow these steps:

    -
      -
    1. Select the "Correspondances" tab in the main interface of the software. You will see a list of categories and themes of templates for letters and emails, such as personal, professional, administrative, etc.
    2. -
    3. Choose the category and theme that best suits your purpose and situation. For example, if you want to write a letter of complaint, choose the category "Lettres personnelles" and the theme "Réclamations". You will see a list of templates for different types of complaints.
    4. -
    5. Pick the template that matches your specific case and need. For example, if you want to complain about a defective product that you bought online, pick the template "Réclamation pour un produit défectueux acheté sur internet". You will see the template in a new window.
    6. -
    7. Customize the template according to your personal information and details. You can use the toolbar to edit the font, color, alignment, etc. You can also use the fields and boxes to fill in your name, address, date, subject, salutation, body, closing, signature, etc.
    8. -
    9. Check your letter or email for spelling, grammar, and style errors. You can use the "Orthographe" button to access the spelling checker tool in the software. You can also use the "Synonymes" button to find synonyms for words that you want to replace or avoid repeating.
    10. -
11. Save, print, or send your letter or email as needed. Use the "Enregistrer" button to save it on your computer, the "Imprimer" button to print it on paper, or the "Envoyer" button to send it by email.
    12. -
    -

    You can also use other features and options available in the software, such as:

    -
      -
    • The "Aide" button to access help and support resources.
    • -
    • The "Options" button to change settings and preferences.
    • -
    • The "Quitter" button to exit and close the software.
    • -
    -

    How to check and improve spelling with 38 dictionnaires et recueils de correspondance?

    Checking and improving spelling with 38 dictionnaires et recueils de correspondance is easy and convenient. You can follow these steps:

    -
      -
    1. Select the text that you want to check or improve. You can select a word, a sentence, a paragraph, or the entire document.
    2. -
    3. Click on the "Orthographe" button in the toolbar. You will see a window with the spelling checker tool in the software.
    4. -
    5. View the results and suggestions in the window. The software will underline any spelling errors in red and offer corrections in a list. You can also see synonyms, paronyms, anagrams, and rhymes for any word by clicking on the corresponding buttons.
    6. -
    7. Choose the correction or suggestion that you want to apply. You can click on it to replace the original word, or you can type it manually.
    8. -
    9. Repeat the process until you have checked and improved all your spelling errors. You can use the "Suivant" and "Précédent" buttons to navigate through the errors. You can also use the "Ignorer" and "Ignorer tout" buttons to skip any errors that you don't want to correct.
    10. -
    11. Close the window when you are done. You can use the "Terminer" button to exit and close the spelling checker tool.
    12. -
    -
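The kind of lookup a spelling checker performs — comparing the typed word against dictionary entries and proposing the closest matches — can be sketched with Python's standard library. This is a simplified illustration, not the software's actual implementation; the three-word sample dictionary and the similarity cutoff are assumptions:

```python
import difflib

# Hypothetical sample entries; the real software ships full French dictionaries.
DICTIONARY = ["correspondance", "dictionnaire", "orthographe"]

def suggest(word, n=3):
    """Return up to n dictionary words closest to a (possibly misspelled) word."""
    return difflib.get_close_matches(word, DICTIONARY, n=n, cutoff=0.6)

print(suggest("ortografe"))  # -> ['orthographe']
```

Real checkers refine this with language-specific rules (accents, conjugation), but ranked similarity against a word list is the core idea.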

    You can also use other features and options available in the software, such as:

    -
      -
    • The "Aide" button to access help and support resources.
    • -
    • The "Options" button to change settings and preferences.
    • -
    • The "Quitter" button to exit and close the software.
    • -
    -

    How to find a word or an expression with 38 dictionnaires et recueils de correspondance?

    Finding a word or an expression with 38 dictionnaires et recueils de correspondance is easy and convenient. You can follow these steps:

    -
      -
    1. Select the "Dictionnaires" tab in the main interface of the software. You will see a list of types and sources of dictionaries and resources, such as definitions, synonyms, paronyms, anagrams, rhymes, etc.
    2. -
    3. Choose the type and source that best suits your purpose and need. For example, if you want to find a definition of a word, choose the type "Définitions" and the source "Le Petit Robert". You will see a field for entering your query and a list of results.
    4. -
5. Enter your query in the field. For example, since the software covers French only, you might look up the word "correspondance": enter it in the field and you will see the results in the list below.
    6. -
    7. View the results and information in the list. The software will show you the word, its pronunciation, its part of speech, its definition, its origin, its usage examples, etc. You can also see synonyms, paronyms, anagrams, and rhymes for any word by clicking on the corresponding buttons.
    8. -
    9. Repeat the process until you have found all the words or expressions that you need. You can use the "Suivant" and "Précédent" buttons to navigate through the results. You can also use the "Effacer" button to clear your query and start a new search.
    10. -
    11. Close the window when you are done. You can use the "Terminer" button to exit and close the dictionary tool.
    12. -
    -
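Rhyme lookup of the simplest kind can be approximated by comparing word endings. The sketch below is deliberately naive — the sample list and the fixed four-letter suffix are assumptions, and a real rhyme dictionary compares phonetic endings rather than spelling:

```python
# Hypothetical sample list; the real software draws on its own dictionaries.
WORDS = ["correspondance", "abondance", "élégance", "dictionnaire"]

def rhymes(word, suffix_len=4):
    """Return other words sharing the same final letters."""
    ending = word[-suffix_len:]
    return [w for w in WORDS if w != word and w.endswith(ending)]

print(rhymes("correspondance"))  # -> ['abondance', 'élégance']
```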

    You can also use other features and options available in the software, such as:

    -
      -
    • The "Aide" button to access help and support resources.
    • -
    • The "Options" button to change settings and preferences.
    • -
    • The "Quitter" button to exit and close the software.
    • -
    -

    How to solve a crossword puzzle with 38 dictionnaires et recueils de correspondance?

    Solving a crossword puzzle with 38 dictionnaires et recueils de correspondance is easy and convenient. You can follow these steps:

    -
      -
    1. Select the "Mots croisés" tab in the main interface of the software. You will see a window with the crossword solver tool in the software.
    2. -
    3. Enter your clue and grid size in the appropriate fields. For example, if you want to solve a clue that says "A large herbivorous mammal with a trunk (8)", enter it in the "Indice" field. If you know that the answer has 8 letters, enter "8" in the "Taille" field.
    4. -
    5. Click on the "Rechercher" button to start the search. The software will show you a list of possible solutions that match your clue and grid size.
    6. -
    7. View the solutions and information in the list. The software will show you the word, its definition, its origin, its usage examples, etc. You can also see synonyms, paronyms, anagrams, and rhymes for any word by clicking on the corresponding buttons.
    8. -
    9. Choose the solution that fits your crossword puzzle. You can click on it to copy it to your clipboard, or you can type it manually.
    10. -
    11. Repeat the process until you have solved all your crossword clues. You can use the "Effacer" button to clear your clue and grid size and start a new search.
    12. -
    -
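Under the hood, a crossword helper is a pattern-matching problem: known letters are fixed and unknown squares are wildcards. A minimal sketch, assuming a `?` wildcard syntax and a tiny sample word list (the real software searches its full dictionaries):

```python
import re

# Hypothetical sample list of 8-letter words.
WORDS = ["elephant", "elegance", "elastics", "emigrant"]

def solve(pattern):
    """Return words matching a crossword pattern such as 'e?ephant',
    where '?' stands for an unknown letter."""
    regex = re.compile(pattern.replace("?", "."))
    return [w for w in WORDS if regex.fullmatch(w)]

print(solve("e?e?????"))  # -> ['elephant', 'elegance']
```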

    You can also use other features and options available in the software, such as:

    -
      -
    • The "Aide" button to access help and support resources.
    • -
    • The "Options" button to change settings and preferences.
    • -
    • The "Quitter" button to exit and close the software.
    • -
    -

    How to play word games with 38 dictionnaires et recueils de correspondance?

    Playing word games with 38 dictionnaires et recueils de correspondance is easy and convenient. You can follow these steps:

    -
      -
    1. Select the "Jeux de lettres" tab in the main interface of the software. You will see a list of types and levels of word games, such as anagrams, hangman, word search, etc.
    2. -
    3. Choose the type and level that best suits your preference and skill. For example, if you want to play anagrams, choose the type "Anagrammes" and the level "Facile", "Moyen", or "Difficile". You will see a window with the word game tool in the software.
    4. -
    5. Enter your answer or input in the appropriate field or box. For example, if you want to play anagrams, enter the letters that you want to rearrange in the "Lettres" field. If you want to play hangman, enter a letter that you want to guess in the "Lettre" field.
    6. -
    7. Click on the button that corresponds to the action that you want to perform. For example, if you want to play anagrams, click on "Rechercher" to start the search. If you want to play hangman, click on "Valider" to check your guess.
    8. -
    9. View the results or feedback in the appropriate area or window. For example, if you want to play anagrams, you will see a list of possible words that you can make with the letters. If you want to play hangman, you will see a picture of a hanging man that changes depending on your guesses.
    10. -
    11. Repeat the process until you have completed or quit the word game. You can use the "Nouveau" button to start a new game. You can also use the "Quitter" button to exit and close the word game tool.
    12. -
    -
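Anagram tools rely on a classic trick: two words are anagrams exactly when their letters, once sorted, are identical, so the whole word list can be indexed by sorted letters. A minimal sketch (the five-word sample list is an assumption; the real software uses its full dictionaries):

```python
from collections import defaultdict

# Hypothetical sample list.
WORDS = ["chien", "niche", "chine", "carte", "trace"]

# Index words by their sorted letter sequence.
index = defaultdict(list)
for w in WORDS:
    index["".join(sorted(w))].append(w)

def anagrams(letters):
    """Return every word formed from exactly these letters."""
    return index.get("".join(sorted(letters)), [])

print(anagrams("niche"))  # -> ['chien', 'niche', 'chine']
```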

    You can also use other features and options available in the software, such as:

    -
      -
    • The "Aide" button to access help and support resources.
    • -
    • The "Options" button to change settings and preferences.
    • -
    • The "Quitter" button to exit and close the software.
    • -
    -

    Conclusion

    -

In conclusion, 38 dictionnaires et recueils de correspondance is a versatile piece of software that helps you write letters, improve your spelling, find words, solve crossword puzzles, and play word games. It bundles 13 dictionaries and more than 7,000 letter and email templates covering a wide range of situations and purposes, along with tools for learning new words, enriching your vocabulary, correcting spelling errors, and finding synonyms, paronyms, anagrams, and rhymes.

    -

    If you are interested in this software, you can download or buy it from https://www.microapp.com/logiciel_38_dictionnaires_et_recueils_de_correspondance_1080.html. You can also visit this website to learn more about this software, its features, its benefits, its system requirements, its installation process, its support service, etc.

    -

    Thank you for reading this article. We hope that you have found it informative and helpful. If you have any questions or feedback, please feel free to contact us or leave a comment below. We would love to hear from you.

    -

    FAQs

    -

    Here are some frequently asked questions about 38 dictionnaires et recueils de correspondance with brief answers:

    -
      -
    1. What languages does this software support?
This software supports French only; it does not cover other languages such as English, Spanish, or German.
    2. -
    3. How much does this software cost?
This software costs 29.95 euros for the full version. A free 30-day trial version is also available.
    4. -
    5. How can I update this software?
      You can update this software by visiting https://www.microapp.com/support.html. You can also check for updates from within the software by clicking on the "A propos" button and then on the "Mise à jour" button.
    6. -
    7. How can I contact the software developer?
      You can contact the software developer by visiting https://www.microapp.com/contact.html. You can also contact them by phone at +33 (0)1 53 34 20 20 or by email at contact@microapp.com.
    8. -
    9. How can I get help and support for this software?
      You can get help and support for this software by visiting https://www.microapp.com/support.html. You can also access help and support resources from within the software by clicking on the "Aide" button.
    10. -
    -

    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Crack Eltima Virtual Serial Port Driver Keygen Generator A Step-by-Step Tutorial.md b/spaces/raedeXanto/academic-chatgpt-beta/Crack Eltima Virtual Serial Port Driver Keygen Generator A Step-by-Step Tutorial.md deleted file mode 100644 index eec0b8e78c1e9a10a3fc964f3993f5466a73f5f0..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Crack Eltima Virtual Serial Port Driver Keygen Generator A Step-by-Step Tutorial.md +++ /dev/null @@ -1,126 +0,0 @@ - -

    Crack Eltima Virtual Serial Port Driver Keygen Generator

    -

    If you are looking for a way to crack Eltima Virtual Serial Port Driver with a keygen generator, you might want to think twice before doing so. In this article, we will explain what Eltima Virtual Serial Port Driver is, why you need it, how to crack it with a keygen generator, and what are the risks and drawbacks of using a cracked version. We will also show you how to get a licensed version of Eltima Virtual Serial Port Driver at a discounted price.

    -

    Crack Eltima Virtual Serial Port Driver Keygen Generator


    DOWNLOAD » https://tinourl.com/2uL2hC



    -

    What is Eltima Virtual Serial Port Driver and why you need it

    -

    Eltima Virtual Serial Port Driver is a software tool that allows you to create and manage virtual COM ports on your computer. It emulates the functionality of real serial ports and enables you to communicate with serial devices via virtual ports. With Eltima Virtual Serial Port Driver, you can:

    -
      -
    • Create an unlimited number of virtual COM port pairs that are connected via a virtual null-modem cable
    • -
    • Configure the parameters and settings of each virtual COM port
    • -
    • Monitor the data transmission between virtual COM ports
    • -
    • Use loopback mode to test your serial applications without hardware
    • -
    • Access remote serial devices over TCP/IP network
    • -
    • And much more!
    • -
    -

    Eltima Virtual Serial Port Driver can be useful for various purposes, such as:

    -
      -
    • Developing, debugging, and testing serial applications and devices
    • -
    • Sharing serial devices over network or Internet
    • -
    • Creating complex serial port scenarios for simulation or emulation
    • -
    • Connecting legacy serial devices to modern systems
    • -
    • And much more!
    • -
    -
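
    To make the virtual null-modem idea concrete: a pair of virtual COM ports behaves like two cross-connected buffers, where bytes written to one end become readable at the other. Below is a minimal, self-contained Python sketch of that behavior. This is only a conceptual model for illustration; it does not use Eltima's actual driver API (which is a Windows kernel driver configured through a GUI), and the class and method names are invented for this example:

    ```python
    import queue

    class _PortEnd:
        """One end of a simulated null-modem cable."""
        def __init__(self, tx: queue.Queue, rx: queue.Queue):
            self._tx = tx  # bytes we write go here (peer's inbox)
            self._rx = rx  # bytes the peer wrote arrive here

        def write(self, data: bytes) -> int:
            self._tx.put(data)
            return len(data)

        def read(self) -> bytes:
            # Raises queue.Empty if the peer has written nothing yet
            return self._rx.get_nowait()

    class VirtualNullModemPair:
        """Toy model of two COM ports joined by a virtual null-modem cable."""
        def __init__(self):
            a_to_b = queue.Queue()
            b_to_a = queue.Queue()
            self.port_a = _PortEnd(a_to_b, b_to_a)
            self.port_b = _PortEnd(b_to_a, a_to_b)

    pair = VirtualNullModemPair()
    pair.port_a.write(b"AT\r\n")   # a "serial application" talks on one end
    print(pair.port_b.read())      # a device emulator on the other end sees b'AT\r\n'
    ```

    This is the same pattern you would use with a real pair (e.g. an application opening COM1 while your test harness opens COM2), except that the actual driver handles baud rate, flow control, and the Windows serial API for you.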

    How to crack Eltima Virtual Serial Port Driver with a keygen generator

    -

    Before we show you how to crack Eltima Virtual Serial Port Driver with a keygen generator, we want to warn you that cracking software is illegal and unethical. By cracking software, you are violating the intellectual property rights of the software developers and distributors. You are also exposing yourself to various risks and drawbacks that we will discuss later in this article.

    -

    If you still want to proceed with cracking Eltima Virtual Serial Port Driver with a keygen generator, here are the steps you need to follow:

    -

    -
      -
    1. Go to a website that offers keygen generators for various software products. For example, you can try https://keygens.pro/ or https://www.keygenninja.com/ (Note: We do not endorse or recommend these websites. Use them at your own risk.)
    2. -
    3. Search for "Eltima Virtual Serial Port Driver" in the search box.
    4. -
    5. Select the version that matches your system requirements and download the keygen generator file.
    6. -
    7. Run the keygen generator file on your computer. You may need to disable your antivirus or firewall software temporarily, as they may detect the keygen generator as malware.
    8. -
    9. Select "Eltima Software" as the product name and click "Generate". The keygen generator will generate a serial number for Eltima Virtual Serial Port Driver.
    10. -
    11. Copy the serial number and paste it into the activation window of Eltima Virtual Serial Port Driver. Click "Activate" and wait for the confirmation message.
    12. -
    13. Congratulations! You have successfully cracked Eltima Virtual Serial Port Driver with a keygen generator.
    14. -
    -

    Risks and drawbacks of using a cracked version of Eltima Virtual Serial Port Driver

    -

    While cracking Eltima Virtual Serial Port Driver with a keygen generator may seem like an easy and cheap way to get access to its features, it comes with many risks and drawbacks that can outweigh its benefits. Here are some of the potential problems and issues that can arise from using a cracked version of Eltima Virtual Serial Port Driver:

    -
      -
    • Malware infection: The keygen generator file that you download from an untrusted source may contain viruses, trojans, worms, spyware, ransomware, or other malicious code that can infect your computer and compromise your data and privacy. The malware may also damage your system files, corrupt your registry, slow down your performance, or display unwanted ads or pop-ups.
    • -
    • System instability: The cracked version of Eltima Virtual Serial Port Driver may not work properly on your computer due to compatibility issues or bugs. It may cause errors, crashes, freezes, or blue screens that can affect your productivity and user experience. It may also interfere with other software or hardware components on your system.
    • -
    • Data loss: The cracked version of Eltima Virtual Serial Port Driver may not be able to handle your data transmission securely and reliably. It may cause data corruption, loss, or leakage during your serial communication. It may also fail to save your settings or preferences for your virtual COM ports.
    • -
    • Legal consequences: As we mentioned earlier, cracking software is illegal and unethical. By cracking Eltima Virtual Serial Port Driver with a keygen generator, you are violating the End User License Agreement (EULA) that you agreed to when installing the software. You are also infringing on the intellectual property rights of Electronic Team Inc., the developer and distributor of Eltima Virtual Serial Port Driver. You may face legal actions or penalties from Electronic Team Inc. or other authorities if they discover your illegal activity.
    • -
    • No technical support or updates: The cracked version of Eltima Virtual Serial Port Driver is not eligible for any technical support or updates from Electronic Team Inc. You will not be able to contact their customer service team for any assistance or troubleshooting. You will also not be able to receive any bug fixes, security patches, feature enhancements, or compatibility improvements that Electronic Team Inc. may release for their software.
    • -
    -

    Benefits and advantages of using a licensed version of Eltima Virtual Serial Port Driver

    -

    If you want to avoid all these risks and drawbacks of using a cracked version of Eltima Virtual Serial Port Driver, you should consider purchasing a licensed version of Eltima Virtual Serial Port Driver from the official website of Electronic Team Inc. A licensed version of Eltima Virtual Serial Port Driver will give you many benefits and advantages that a cracked version cannot offer. Here are some of them:

    -
      -
    • Security and reliability: A licensed version of Eltima Virtual Serial Port Driver is free from any malware or viruses that can harm your computer or data. It is also tested and verified by Electronic Team Inc. to ensure its quality and stability. You can trust that a licensed version of Eltima Virtual Serial Port Driver will work as intended and deliver the best results.
    • -
    • Compatibility and functionality: A licensed version of Eltima Virtual Serial Port Driver is compatible with all Windows versions from XP to 10, both 32-bit and 64-bit. It also supports all serial devices and applications that use standard Windows API for serial communication. You can enjoy the full functionality and features of Eltima Virtual Serial Port Driver without any limitations or restrictions.
    • -
    • Customer service and satisfaction: A licensed version of Eltima Virtual Serial Port Driver comes with a 14-day money-back guarantee, so you can try it risk-free and see if it meets your needs. It also comes with a lifetime upgrades guarantee, so you can always get the latest version of Eltima Virtual Serial Port Driver for free. Moreover, you can access the professional and friendly customer support team of Electronic Team Inc. via email, phone, or live chat for any assistance or inquiries.
    • -
    -

    How to get a licensed version of Eltima Virtual Serial Port Driver at a discounted price

    -

    If you are convinced that a licensed version of Eltima Virtual Serial Port Driver is worth investing in, we have good news for you. As a reader of this article, you can get a special offer and discount code for purchasing a licensed version of Eltima Virtual Serial Port Driver from the official website of Electronic Team Inc.

    -

    Here are the steps you need to follow:

    -
      -
    1. Go to https://www.eltima.com/purchase/vspdxp/ and choose the license option that suits your needs. You can choose between Virtual Serial Port Driver ($139.99) and Virtual Serial Port Driver Pro ($199.99).
    2. -
    3. Click on the "Buy Now" button and proceed to the checkout page.
    4. -
    5. Enter the discount code "VSPD15OFF" in the coupon box and click on the "Apply" button. You will see a 15% discount applied to your order total.
    6. -
    7. Fill in your billing information and payment method and complete your order.
    8. -
    9. You will receive an email confirmation with your license key and download link for Eltima Virtual Serial Port Driver.
    10. -
    11. Download and install Eltima Virtual Serial Port Driver on your computer and activate it with your license key.
    12. -
    13. Congratulations! You have successfully purchased a licensed version of Eltima Virtual Serial Port Driver at a discounted price.
    14. -
    -

    Don't miss this opportunity to get a high-quality software tool that can help you create and manage virtual COM ports with ease and efficiency. Order your licensed version of Eltima Virtual Serial Port Driver today and enjoy its benefits and advantages.

    -

    Conclusion

    -

    In this article, we have discussed what Eltima Virtual Serial Port Driver is, why you need it, how to crack it with a keygen generator, and what are the risks and drawbacks of using a cracked version. We have also shown you how to get a licensed version of Eltima Virtual Serial Port Driver at a discounted price.

    -

    We hope that this article has helped you understand the difference between a cracked version and a licensed version of Eltima Virtual Serial Port Driver, and why you should choose the latter. A licensed version of Eltima Virtual Serial Port Driver will provide you with security, reliability, compatibility, functionality, customer service, satisfaction, and lifetime upgrades guarantee. A cracked version of Eltima Virtual Serial Port Driver will expose you to malware infection, system instability, data loss, legal consequences, and no technical support or updates.

    -

    The choice is clear: if you want to create and manage virtual COM ports on your computer without any hassle or risk, you should purchase a licensed version of Eltima Virtual Serial Port Driver from the official website of Electronic Team Inc. And don't forget to use the discount code "VSPD15OFF" to get 15% off your order.

    -

    Frequently Asked Questions

    -
      -
    1. What is a keygen generator?
      A keygen generator is a software tool that generates serial numbers or activation codes for various software products. It is often used by hackers or pirates to crack software and bypass its license verification process.
    2. -
    3. Is cracking software illegal?
      Yes, cracking software is illegal and unethical. It violates the intellectual property rights of the software developers and distributors. It also exposes the user to various risks and drawbacks that can harm their computer or data.
    4. -
    5. What is Eltima Virtual Serial Port Driver?
      Eltima Virtual Serial Port Driver is a software tool that allows you to create and manage virtual COM ports on your computer. It emulates the functionality of real serial ports and enables you to communicate with serial devices via virtual ports.
    6. -
    7. What are the benefits and advantages of using a licensed version of Eltima Virtual Serial Port Driver?
      A licensed version of Eltima Virtual Serial Port Driver provides you with security, reliability, compatibility, functionality, customer service, satisfaction, and lifetime upgrades guarantee. It also comes with a 14-day money-back guarantee and a special offer for the readers of this article.
    8. -
    9. How to get a licensed version of Eltima Virtual Serial Port Driver at a discounted price?
      You can get a licensed version of Eltima Virtual Serial Port Driver at a discounted price by ordering it from the official website of Electronic Team Inc. and using the discount code "VSPD15OFF" to get 15% off your order.
    10. -
    -

    -
    -
    \ No newline at end of file diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/dns.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/dns.d.ts deleted file mode 100644 index 305367b81d17a30d1a914cda62fdaf25acf3567e..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/dns.d.ts +++ /dev/null @@ -1,659 +0,0 @@ -/** - * The `dns` module enables name resolution. For example, use it to look up IP - * addresses of host names. - * - * Although named for the [Domain Name System (DNS)](https://en.wikipedia.org/wiki/Domain_Name_System), it does not always use the - * DNS protocol for lookups. {@link lookup} uses the operating system - * facilities to perform name resolution. It may not need to perform any network - * communication. To perform name resolution the way other applications on the same - * system do, use {@link lookup}. - * - * ```js - * const dns = require('dns'); - * - * dns.lookup('example.org', (err, address, family) => { - * console.log('address: %j family: IPv%s', address, family); - * }); - * // address: "93.184.216.34" family: IPv4 - * ``` - * - * All other functions in the `dns` module connect to an actual DNS server to - * perform name resolution. They will always use the network to perform DNS - * queries. These functions do not use the same set of configuration files used by {@link lookup} (e.g. `/etc/hosts`). Use these functions to always perform - * DNS queries, bypassing other name-resolution facilities. 
- * - * ```js - * const dns = require('dns'); - * - * dns.resolve4('archive.org', (err, addresses) => { - * if (err) throw err; - * - * console.log(`addresses: ${JSON.stringify(addresses)}`); - * - * addresses.forEach((a) => { - * dns.reverse(a, (err, hostnames) => { - * if (err) { - * throw err; - * } - * console.log(`reverse for ${a}: ${JSON.stringify(hostnames)}`); - * }); - * }); - * }); - * ``` - * - * See the `Implementation considerations section` for more information. - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/dns.js) - */ -declare module 'dns' { - import * as dnsPromises from 'node:dns/promises'; - // Supported getaddrinfo flags. - export const ADDRCONFIG: number; - export const V4MAPPED: number; - /** - * If `dns.V4MAPPED` is specified, return resolved IPv6 addresses as - * well as IPv4 mapped IPv6 addresses. - */ - export const ALL: number; - export interface LookupOptions { - family?: number | undefined; - hints?: number | undefined; - all?: boolean | undefined; - /** - * @default true - */ - verbatim?: boolean | undefined; - } - export interface LookupOneOptions extends LookupOptions { - all?: false | undefined; - } - export interface LookupAllOptions extends LookupOptions { - all: true; - } - export interface LookupAddress { - address: string; - family: number; - } - /** - * Resolves a host name (e.g. `'nodejs.org'`) into the first found A (IPv4) or - * AAAA (IPv6) record. All `option` properties are optional. If `options` is an - * integer, then it must be `4` or `6` – if `options` is not provided, then IPv4 - * and IPv6 addresses are both returned if found. - * - * With the `all` option set to `true`, the arguments for `callback` change to`(err, addresses)`, with `addresses` being an array of objects with the - * properties `address` and `family`. - * - * On error, `err` is an `Error` object, where `err.code` is the error code. 
- * Keep in mind that `err.code` will be set to `'ENOTFOUND'` not only when - * the host name does not exist but also when the lookup fails in other ways - * such as no available file descriptors. - * - * `dns.lookup()` does not necessarily have anything to do with the DNS protocol. - * The implementation uses an operating system facility that can associate names - * with addresses, and vice versa. This implementation can have subtle but - * important consequences on the behavior of any Node.js program. Please take some - * time to consult the `Implementation considerations section` before using`dns.lookup()`. - * - * Example usage: - * - * ```js - * const dns = require('dns'); - * const options = { - * family: 6, - * hints: dns.ADDRCONFIG | dns.V4MAPPED, - * }; - * dns.lookup('example.com', options, (err, address, family) => - * console.log('address: %j family: IPv%s', address, family)); - * // address: "2606:2800:220:1:248:1893:25c8:1946" family: IPv6 - * - * // When options.all is true, the result will be an Array. - * options.all = true; - * dns.lookup('example.com', options, (err, addresses) => - * console.log('addresses: %j', addresses)); - * // addresses: [{"address":"2606:2800:220:1:248:1893:25c8:1946","family":6}] - * ``` - * - * If this method is invoked as its `util.promisify()` ed version, and `all`is not set to `true`, it returns a `Promise` for an `Object` with `address` and`family` properties. 
- * @since v0.1.90 - */ - export function lookup(hostname: string, family: number, callback: (err: NodeJS.ErrnoException | null, address: string, family: number) => void): void; - export function lookup(hostname: string, options: LookupOneOptions, callback: (err: NodeJS.ErrnoException | null, address: string, family: number) => void): void; - export function lookup(hostname: string, options: LookupAllOptions, callback: (err: NodeJS.ErrnoException | null, addresses: LookupAddress[]) => void): void; - export function lookup(hostname: string, options: LookupOptions, callback: (err: NodeJS.ErrnoException | null, address: string | LookupAddress[], family: number) => void): void; - export function lookup(hostname: string, callback: (err: NodeJS.ErrnoException | null, address: string, family: number) => void): void; - export namespace lookup { - function __promisify__(hostname: string, options: LookupAllOptions): Promise; - function __promisify__(hostname: string, options?: LookupOneOptions | number): Promise; - function __promisify__(hostname: string, options: LookupOptions): Promise; - } - /** - * Resolves the given `address` and `port` into a host name and service using - * the operating system's underlying `getnameinfo` implementation. - * - * If `address` is not a valid IP address, a `TypeError` will be thrown. - * The `port` will be coerced to a number. If it is not a legal port, a `TypeError`will be thrown. - * - * On an error, `err` is an `Error` object, where `err.code` is the error code. - * - * ```js - * const dns = require('dns'); - * dns.lookupService('127.0.0.1', 22, (err, hostname, service) => { - * console.log(hostname, service); - * // Prints: localhost ssh - * }); - * ``` - * - * If this method is invoked as its `util.promisify()` ed version, it returns a`Promise` for an `Object` with `hostname` and `service` properties. 
- * @since v0.11.14 - */ - export function lookupService(address: string, port: number, callback: (err: NodeJS.ErrnoException | null, hostname: string, service: string) => void): void; - export namespace lookupService { - function __promisify__( - address: string, - port: number - ): Promise<{ - hostname: string; - service: string; - }>; - } - export interface ResolveOptions { - ttl: boolean; - } - export interface ResolveWithTtlOptions extends ResolveOptions { - ttl: true; - } - export interface RecordWithTtl { - address: string; - ttl: number; - } - /** @deprecated Use `AnyARecord` or `AnyAaaaRecord` instead. */ - export type AnyRecordWithTtl = AnyARecord | AnyAaaaRecord; - export interface AnyARecord extends RecordWithTtl { - type: 'A'; - } - export interface AnyAaaaRecord extends RecordWithTtl { - type: 'AAAA'; - } - export interface CaaRecord { - critial: number; - issue?: string | undefined; - issuewild?: string | undefined; - iodef?: string | undefined; - contactemail?: string | undefined; - contactphone?: string | undefined; - } - export interface MxRecord { - priority: number; - exchange: string; - } - export interface AnyMxRecord extends MxRecord { - type: 'MX'; - } - export interface NaptrRecord { - flags: string; - service: string; - regexp: string; - replacement: string; - order: number; - preference: number; - } - export interface AnyNaptrRecord extends NaptrRecord { - type: 'NAPTR'; - } - export interface SoaRecord { - nsname: string; - hostmaster: string; - serial: number; - refresh: number; - retry: number; - expire: number; - minttl: number; - } - export interface AnySoaRecord extends SoaRecord { - type: 'SOA'; - } - export interface SrvRecord { - priority: number; - weight: number; - port: number; - name: string; - } - export interface AnySrvRecord extends SrvRecord { - type: 'SRV'; - } - export interface AnyTxtRecord { - type: 'TXT'; - entries: string[]; - } - export interface AnyNsRecord { - type: 'NS'; - value: string; - } - export interface 
AnyPtrRecord { - type: 'PTR'; - value: string; - } - export interface AnyCnameRecord { - type: 'CNAME'; - value: string; - } - export type AnyRecord = AnyARecord | AnyAaaaRecord | AnyCnameRecord | AnyMxRecord | AnyNaptrRecord | AnyNsRecord | AnyPtrRecord | AnySoaRecord | AnySrvRecord | AnyTxtRecord; - /** - * Uses the DNS protocol to resolve a host name (e.g. `'nodejs.org'`) into an array - * of the resource records. The `callback` function has arguments`(err, records)`. When successful, `records` will be an array of resource - * records. The type and structure of individual results varies based on `rrtype`: - * - * - * - * On error, `err` is an `Error` object, where `err.code` is one of the `DNS error codes`. - * @since v0.1.27 - * @param hostname Host name to resolve. - * @param [rrtype='A'] Resource record type. - */ - export function resolve(hostname: string, callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void; - export function resolve(hostname: string, rrtype: 'A', callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void; - export function resolve(hostname: string, rrtype: 'AAAA', callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void; - export function resolve(hostname: string, rrtype: 'ANY', callback: (err: NodeJS.ErrnoException | null, addresses: AnyRecord[]) => void): void; - export function resolve(hostname: string, rrtype: 'CNAME', callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void; - export function resolve(hostname: string, rrtype: 'MX', callback: (err: NodeJS.ErrnoException | null, addresses: MxRecord[]) => void): void; - export function resolve(hostname: string, rrtype: 'NAPTR', callback: (err: NodeJS.ErrnoException | null, addresses: NaptrRecord[]) => void): void; - export function resolve(hostname: string, rrtype: 'NS', callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void; - export function resolve(hostname: 
string, rrtype: 'PTR', callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void; - export function resolve(hostname: string, rrtype: 'SOA', callback: (err: NodeJS.ErrnoException | null, addresses: SoaRecord) => void): void; - export function resolve(hostname: string, rrtype: 'SRV', callback: (err: NodeJS.ErrnoException | null, addresses: SrvRecord[]) => void): void; - export function resolve(hostname: string, rrtype: 'TXT', callback: (err: NodeJS.ErrnoException | null, addresses: string[][]) => void): void; - export function resolve( - hostname: string, - rrtype: string, - callback: (err: NodeJS.ErrnoException | null, addresses: string[] | MxRecord[] | NaptrRecord[] | SoaRecord | SrvRecord[] | string[][] | AnyRecord[]) => void - ): void; - export namespace resolve { - function __promisify__(hostname: string, rrtype?: 'A' | 'AAAA' | 'CNAME' | 'NS' | 'PTR'): Promise; - function __promisify__(hostname: string, rrtype: 'ANY'): Promise; - function __promisify__(hostname: string, rrtype: 'MX'): Promise; - function __promisify__(hostname: string, rrtype: 'NAPTR'): Promise; - function __promisify__(hostname: string, rrtype: 'SOA'): Promise; - function __promisify__(hostname: string, rrtype: 'SRV'): Promise; - function __promisify__(hostname: string, rrtype: 'TXT'): Promise; - function __promisify__(hostname: string, rrtype: string): Promise; - } - /** - * Uses the DNS protocol to resolve a IPv4 addresses (`A` records) for the`hostname`. The `addresses` argument passed to the `callback` function - * will contain an array of IPv4 addresses (e.g.`['74.125.79.104', '74.125.79.105', '74.125.79.106']`). - * @since v0.1.16 - * @param hostname Host name to resolve. 
- */ - export function resolve4(hostname: string, callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void; - export function resolve4(hostname: string, options: ResolveWithTtlOptions, callback: (err: NodeJS.ErrnoException | null, addresses: RecordWithTtl[]) => void): void; - export function resolve4(hostname: string, options: ResolveOptions, callback: (err: NodeJS.ErrnoException | null, addresses: string[] | RecordWithTtl[]) => void): void; - export namespace resolve4 { - function __promisify__(hostname: string): Promise; - function __promisify__(hostname: string, options: ResolveWithTtlOptions): Promise; - function __promisify__(hostname: string, options?: ResolveOptions): Promise; - } - /** - * Uses the DNS protocol to resolve a IPv6 addresses (`AAAA` records) for the`hostname`. The `addresses` argument passed to the `callback` function - * will contain an array of IPv6 addresses. - * @since v0.1.16 - * @param hostname Host name to resolve. - */ - export function resolve6(hostname: string, callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void; - export function resolve6(hostname: string, options: ResolveWithTtlOptions, callback: (err: NodeJS.ErrnoException | null, addresses: RecordWithTtl[]) => void): void; - export function resolve6(hostname: string, options: ResolveOptions, callback: (err: NodeJS.ErrnoException | null, addresses: string[] | RecordWithTtl[]) => void): void; - export namespace resolve6 { - function __promisify__(hostname: string): Promise; - function __promisify__(hostname: string, options: ResolveWithTtlOptions): Promise; - function __promisify__(hostname: string, options?: ResolveOptions): Promise; - } - /** - * Uses the DNS protocol to resolve `CNAME` records for the `hostname`. The`addresses` argument passed to the `callback` function - * will contain an array of canonical name records available for the `hostname`(e.g. `['bar.example.com']`). 
- * @since v0.3.2 - */ - export function resolveCname(hostname: string, callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void; - export namespace resolveCname { - function __promisify__(hostname: string): Promise<string[]>; - } - /** - * Uses the DNS protocol to resolve `CAA` records for the `hostname`. The `addresses` argument passed to the `callback` function - * will contain an array of certification authority authorization records - * available for the `hostname` (e.g. `[{critical: 0, iodef: 'mailto:pki@example.com'}, {critical: 128, issue: 'pki.example.com'}]`). - * @since v15.0.0, v14.17.0 - */ - export function resolveCaa(hostname: string, callback: (err: NodeJS.ErrnoException | null, records: CaaRecord[]) => void): void; - export namespace resolveCaa { - function __promisify__(hostname: string): Promise<CaaRecord[]>; - } - /** - * Uses the DNS protocol to resolve mail exchange records (`MX` records) for the `hostname`. The `addresses` argument passed to the `callback` function will - * contain an array of objects containing both a `priority` and `exchange` property (e.g. `[{priority: 10, exchange: 'mx.example.com'}, ...]`). - * @since v0.1.27 - */ - export function resolveMx(hostname: string, callback: (err: NodeJS.ErrnoException | null, addresses: MxRecord[]) => void): void; - export namespace resolveMx { - function __promisify__(hostname: string): Promise<MxRecord[]>; - } - /** - * Uses the DNS protocol to resolve regular expression based records (`NAPTR` records) for the `hostname`. 
The `addresses` argument passed to the `callback` function will contain an array of - * objects with the following properties: - * - * * `flags` - * * `service` - * * `regexp` - * * `replacement` - * * `order` - * * `preference` - * - * ```js - * { - * flags: 's', - * service: 'SIP+D2U', - * regexp: '', - * replacement: '_sip._udp.example.com', - * order: 30, - * preference: 100 - * } - * ``` - * @since v0.9.12 - */ - export function resolveNaptr(hostname: string, callback: (err: NodeJS.ErrnoException | null, addresses: NaptrRecord[]) => void): void; - export namespace resolveNaptr { - function __promisify__(hostname: string): Promise<NaptrRecord[]>; - } - /** - * Uses the DNS protocol to resolve name server records (`NS` records) for the `hostname`. The `addresses` argument passed to the `callback` function will - * contain an array of name server records available for `hostname` (e.g. `['ns1.example.com', 'ns2.example.com']`). - * @since v0.1.90 - */ - export function resolveNs(hostname: string, callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void; - export namespace resolveNs { - function __promisify__(hostname: string): Promise<string[]>; - } - /** - * Uses the DNS protocol to resolve pointer records (`PTR` records) for the `hostname`. The `addresses` argument passed to the `callback` function will - * be an array of strings containing the reply records. - * @since v6.0.0 - */ - export function resolvePtr(hostname: string, callback: (err: NodeJS.ErrnoException | null, addresses: string[]) => void): void; - export namespace resolvePtr { - function __promisify__(hostname: string): Promise<string[]>; - } - /** - * Uses the DNS protocol to resolve a start of authority record (`SOA` record) for - * the `hostname`. 
The `address` argument passed to the `callback` function will - * be an object with the following properties: - * - * * `nsname` - * * `hostmaster` - * * `serial` - * * `refresh` - * * `retry` - * * `expire` - * * `minttl` - * - * ```js - * { - * nsname: 'ns.example.com', - * hostmaster: 'root.example.com', - * serial: 2013101809, - * refresh: 10000, - * retry: 2400, - * expire: 604800, - * minttl: 3600 - * } - * ``` - * @since v0.11.10 - */ - export function resolveSoa(hostname: string, callback: (err: NodeJS.ErrnoException | null, address: SoaRecord) => void): void; - export namespace resolveSoa { - function __promisify__(hostname: string): Promise<SoaRecord>; - } - /** - * Uses the DNS protocol to resolve service records (`SRV` records) for the `hostname`. The `addresses` argument passed to the `callback` function will - * be an array of objects with the following properties: - * - * * `priority` - * * `weight` - * * `port` - * * `name` - * - * ```js - * { - * priority: 10, - * weight: 5, - * port: 21223, - * name: 'service.example.com' - * } - * ``` - * @since v0.1.27 - */ - export function resolveSrv(hostname: string, callback: (err: NodeJS.ErrnoException | null, addresses: SrvRecord[]) => void): void; - export namespace resolveSrv { - function __promisify__(hostname: string): Promise<SrvRecord[]>; - } - /** - * Uses the DNS protocol to resolve text queries (`TXT` records) for the `hostname`. The `records` argument passed to the `callback` function is a - * two-dimensional array of the text records available for `hostname` (e.g. `[ ['v=spf1 ip4:0.0.0.0 ', '~all' ] ]`). Each sub-array contains TXT chunks of - * one record. Depending on the use case, these could be either joined together or - * treated separately. 
- * @since v0.1.27 - */ - export function resolveTxt(hostname: string, callback: (err: NodeJS.ErrnoException | null, addresses: string[][]) => void): void; - export namespace resolveTxt { - function __promisify__(hostname: string): Promise<string[][]>; - } - /** - * Uses the DNS protocol to resolve all records (also known as `ANY` or `*` query). - * The `ret` argument passed to the `callback` function will be an array containing - * various types of records. Each object has a property `type` that indicates the - * type of the current record. And depending on the `type`, additional properties - * will be present on the object: - * - * - * - * Here is an example of the `ret` object passed to the callback: - * - * ```js - * [ { type: 'A', address: '127.0.0.1', ttl: 299 }, - * { type: 'CNAME', value: 'example.com' }, - * { type: 'MX', exchange: 'alt4.aspmx.l.example.com', priority: 50 }, - * { type: 'NS', value: 'ns1.example.com' }, - * { type: 'TXT', entries: [ 'v=spf1 include:_spf.example.com ~all' ] }, - * { type: 'SOA', - * nsname: 'ns1.example.com', - * hostmaster: 'admin.example.com', - * serial: 156696742, - * refresh: 900, - * retry: 900, - * expire: 1800, - * minttl: 60 } ] - * ``` - * - * DNS server operators may choose not to respond to `ANY` queries. It may be better to call individual methods like {@link resolve4}, {@link resolveMx}, and so on. For more details, see [RFC - * 8482](https://tools.ietf.org/html/rfc8482). - */ - export function resolveAny(hostname: string, callback: (err: NodeJS.ErrnoException | null, addresses: AnyRecord[]) => void): void; - export namespace resolveAny { - function __promisify__(hostname: string): Promise<AnyRecord[]>; - } - /** - * Performs a reverse DNS query that resolves an IPv4 or IPv6 address to an - * array of host names. - * - * On error, `err` is an `Error` object, where `err.code` is - * one of the `DNS error codes`. 
- * @since v0.1.16 - */ - export function reverse(ip: string, callback: (err: NodeJS.ErrnoException | null, hostnames: string[]) => void): void; - /** - * Sets the IP address and port of servers to be used when performing DNS - * resolution. The `servers` argument is an array of [RFC 5952](https://tools.ietf.org/html/rfc5952#section-6) formatted - * addresses. If the port is the IANA default DNS port (53) it can be omitted. - * - * ```js - * dns.setServers([ - * '4.4.4.4', - * '[2001:4860:4860::8888]', - * '4.4.4.4:1053', - * '[2001:4860:4860::8888]:1053', - * ]); - * ``` - * - * An error will be thrown if an invalid address is provided. - * - * The `dns.setServers()` method must not be called while a DNS query is in - * progress. - * - * The {@link setServers} method affects only {@link resolve}, `dns.resolve*()` and {@link reverse} (and specifically _not_ {@link lookup}). - * - * This method works much like [resolv.conf](https://man7.org/linux/man-pages/man5/resolv.conf.5.html). - * That is, if attempting to resolve with the first server provided results in a `NOTFOUND` error, the `resolve()` method will _not_ attempt to resolve with - * subsequent servers provided. Fallback DNS servers will only be used if the - * earlier ones time out or result in some other error. - * @since v0.11.3 - * @param servers array of `RFC 5952` formatted addresses - */ - export function setServers(servers: ReadonlyArray<string>): void; - /** - * Returns an array of IP address strings, formatted according to [RFC 5952](https://tools.ietf.org/html/rfc5952#section-6), - * that are currently configured for DNS resolution. A string will include a port - * section if a custom port is used. - * - * ```js - * [ - * '4.4.4.4', - * '2001:4860:4860::8888', - * '4.4.4.4:1053', - * '[2001:4860:4860::8888]:1053', - * ] - * ``` - * @since v0.11.3 - */ - export function getServers(): string[]; - /** - * Set the default value of `verbatim` in {@link lookup} and `dnsPromises.lookup()`. 
The value could be: - * - * * `ipv4first`: sets default `verbatim` `false`. - * * `verbatim`: sets default `verbatim` `true`. - * - * The default is `ipv4first` and {@link setDefaultResultOrder} have higher - * priority than `--dns-result-order`. When using `worker threads`,{@link setDefaultResultOrder} from the main thread won't affect the default - * dns orders in workers. - * @since v16.4.0, v14.18.0 - * @param order must be `'ipv4first'` or `'verbatim'`. - */ - export function setDefaultResultOrder(order: 'ipv4first' | 'verbatim'): void; - // Error codes - export const NODATA: string; - export const FORMERR: string; - export const SERVFAIL: string; - export const NOTFOUND: string; - export const NOTIMP: string; - export const REFUSED: string; - export const BADQUERY: string; - export const BADNAME: string; - export const BADFAMILY: string; - export const BADRESP: string; - export const CONNREFUSED: string; - export const TIMEOUT: string; - export const EOF: string; - export const FILE: string; - export const NOMEM: string; - export const DESTRUCTION: string; - export const BADSTR: string; - export const BADFLAGS: string; - export const NONAME: string; - export const BADHINTS: string; - export const NOTINITIALIZED: string; - export const LOADIPHLPAPI: string; - export const ADDRGETNETWORKPARAMS: string; - export const CANCELLED: string; - export interface ResolverOptions { - timeout?: number | undefined; - /** - * @default 4 - */ - tries?: number; - } - /** - * An independent resolver for DNS requests. - * - * Creating a new resolver uses the default server settings. Setting - * the servers used for a resolver using `resolver.setServers()` does not affect - * other resolvers: - * - * ```js - * const { Resolver } = require('dns'); - * const resolver = new Resolver(); - * resolver.setServers(['4.4.4.4']); - * - * // This request will use the server at 4.4.4.4, independent of global settings. - * resolver.resolve4('example.org', (err, addresses) => { - * // ... 
- * }); - * ``` - * - * The following methods from the `dns` module are available: - * - * * `resolver.getServers()` - * * `resolver.resolve()` - * * `resolver.resolve4()` - * * `resolver.resolve6()` - * * `resolver.resolveAny()` - * * `resolver.resolveCaa()` - * * `resolver.resolveCname()` - * * `resolver.resolveMx()` - * * `resolver.resolveNaptr()` - * * `resolver.resolveNs()` - * * `resolver.resolvePtr()` - * * `resolver.resolveSoa()` - * * `resolver.resolveSrv()` - * * `resolver.resolveTxt()` - * * `resolver.reverse()` - * * `resolver.setServers()` - * @since v8.3.0 - */ - export class Resolver { - constructor(options?: ResolverOptions); - /** - * Cancel all outstanding DNS queries made by this resolver. The corresponding - * callbacks will be called with an error with code `ECANCELLED`. - * @since v8.3.0 - */ - cancel(): void; - getServers: typeof getServers; - resolve: typeof resolve; - resolve4: typeof resolve4; - resolve6: typeof resolve6; - resolveAny: typeof resolveAny; - resolveCname: typeof resolveCname; - resolveMx: typeof resolveMx; - resolveNaptr: typeof resolveNaptr; - resolveNs: typeof resolveNs; - resolvePtr: typeof resolvePtr; - resolveSoa: typeof resolveSoa; - resolveSrv: typeof resolveSrv; - resolveTxt: typeof resolveTxt; - reverse: typeof reverse; - /** - * The resolver instance will send its requests from the specified IP address. - * This allows programs to specify outbound interfaces when used on multi-homed - * systems. - * - * If a v4 or v6 address is not specified, it is set to the default, and the - * operating system will choose a local address automatically. - * - * The resolver will use the v4 local address when making requests to IPv4 DNS - * servers, and the v6 local address when making requests to IPv6 DNS servers. - * The `rrtype` of resolution requests has no impact on the local address used. - * @since v15.1.0, v14.17.0 - * @param [ipv4='0.0.0.0'] A string representation of an IPv4 address. 
- * @param [ipv6='::0'] A string representation of an IPv6 address. - */ - setLocalAddress(ipv4?: string, ipv6?: string): void; - setServers: typeof setServers; - } - export { dnsPromises as promises }; -} -declare module 'node:dns' { - export * from 'dns'; -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Keygen [Extra Quality] Xforce For AutoCAD Architecture 2007 Keygen.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Keygen [Extra Quality] Xforce For AutoCAD Architecture 2007 Keygen.md deleted file mode 100644 index a2dc03990e3b0a89d5dbd49aee6847079c046188..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Keygen [Extra Quality] Xforce For AutoCAD Architecture 2007 Keygen.md +++ /dev/null @@ -1,6 +0,0 @@ -

    download keygen xforce for AutoCAD Architecture 2007 keygen


    Download File - https://urlgoal.com/2uCJVt



    -
    -Millions of users download 3D and 2D CAD files everyday. 26 Sep 2014 Xforce Keygen 64bits Version For Autocad 2013 64 Bit Free Download ... 0, released at 09/01/2007, it can, open mpeg2 vob files on any unencrypted DVD. ... (CAD), it is used by engineering services, as the engineering industry, and architecture. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fotos De Danielle Colby Desnuda.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fotos De Danielle Colby Desnuda.md deleted file mode 100644 index 30e1b44047e076bef96c9c012816741befed0ba6..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fotos De Danielle Colby Desnuda.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Fotos De Danielle Colby Desnuda


    Download Zip ::: https://urlgoal.com/2uCJls



    -
    -Danielle colby fotos desnuda putada prostitutas mostomes prostitutas navia prostitutas en andujar prostitutas paraguayas . real sexo consolador cerca de ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/riccorl/relik-entity-linking/relik/inference/data/tokenizers/regex_tokenizer.py b/spaces/riccorl/relik-entity-linking/relik/inference/data/tokenizers/regex_tokenizer.py deleted file mode 100644 index ebe8656afb891a8318a7030375427e190d1dc383..0000000000000000000000000000000000000000 --- a/spaces/riccorl/relik-entity-linking/relik/inference/data/tokenizers/regex_tokenizer.py +++ /dev/null @@ -1,73 +0,0 @@ -import re -from typing import List, Union - -from overrides import overrides - -from relik.inference.data.objects import Word -from relik.inference.data.tokenizers.base_tokenizer import BaseTokenizer - - -class RegexTokenizer(BaseTokenizer): - """ - A :obj:`Tokenizer` that splits the text based on a simple regex. - """ - - def __init__(self): - super(RegexTokenizer, self).__init__() - # regex for splitting on spaces and punctuation and new lines - # self._regex = re.compile(r"\S+|[\[\](),.!?;:\"]|\\n") - self._regex = re.compile( - r"\w+|\$[\d\.]+|\S+", re.UNICODE | re.MULTILINE | re.DOTALL - ) - - def __call__( - self, - texts: Union[str, List[str], List[List[str]]], - is_split_into_words: bool = False, - **kwargs, - ) -> List[List[Word]]: - """ - Tokenize the input into single words by splitting using a simple regex. - - Args: - texts (:obj:`str`, :obj:`List[str]`, :obj:`List[List[str]]`): - Text to tag. It can be a single string, a batch of string and pre-tokenized strings. - is_split_into_words (:obj:`bool`, optional, defaults to :obj:`False`): - If :obj:`True` and the input is a string, the input is split on spaces. - - Returns: - :obj:`List[List[Word]]`: The input text tokenized in single words. 
- - Example:: - - >>> from relik.retriever.serve.tokenizers.regex_tokenizer import RegexTokenizer - - >>> regex_tokenizer = RegexTokenizer() - >>> regex_tokenizer("Mary sold the car to John.") - - """ - # check if input is batched or a single sample - is_batched = self.check_is_batched(texts, is_split_into_words) - - if is_batched: - tokenized = self.tokenize_batch(texts) - else: - tokenized = self.tokenize(texts) - - return tokenized - - @overrides - def tokenize(self, text: Union[str, List[str]]) -> List[Word]: - if not isinstance(text, (str, list)): - raise ValueError( - f"text must be either `str` or `list`, found: `{type(text)}`" - ) - - if isinstance(text, list): - text = " ".join(text) - return [ - Word(t[0], i, start_char=t[1], end_char=t[2]) - for i, t in enumerate( - (m.group(0), m.start(), m.end()) for m in self._regex.finditer(text) - ) - ] diff --git a/spaces/ronvolutional/http-server/inference.py b/spaces/ronvolutional/http-server/inference.py deleted file mode 100644 index fbf5cce09c4dd0844bb300e7afb161a15f7b0149..0000000000000000000000000000000000000000 --- a/spaces/ronvolutional/http-server/inference.py +++ /dev/null @@ -1,11 +0,0 @@ -from transformers import T5Tokenizer, T5ForConditionalGeneration - -tokenizer = T5Tokenizer.from_pretrained("t5-small") -model = T5ForConditionalGeneration.from_pretrained("t5-small") - - -def infer_t5(input): - input_ids = tokenizer(input, return_tensors="pt").input_ids - outputs = model.generate(input_ids) - - return tokenizer.decode(outputs[0], skip_special_tokens=True) diff --git a/spaces/ronvolutional/iframe-test/start.py b/spaces/ronvolutional/iframe-test/start.py deleted file mode 100644 index f5b9217e0ef966bb596f57a8ec32839f7cd3eafe..0000000000000000000000000000000000000000 --- a/spaces/ronvolutional/iframe-test/start.py +++ /dev/null @@ -1,3 +0,0 @@ -import subprocess - -subprocess.run("uvicorn modules.app:app --host 0.0.0.0 --port 7860", shell=True) diff --git 
a/spaces/rorallitri/biomedical-language-models/logs/Allah Facebook Cover Page Showcase Your Love and Devotion.md b/spaces/rorallitri/biomedical-language-models/logs/Allah Facebook Cover Page Showcase Your Love and Devotion.md deleted file mode 100644 index 294e12c7830b3f3d605dfde6683702eb5b35e692..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Allah Facebook Cover Page Showcase Your Love and Devotion.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

    How do I create a story highlight on my facebook business page? Been trying for ages and cant find how to do it. I have it set up on my personal facebook page and both instagram accounts but am at a loss on facebook business

    -

    allah facebook cover page


    Download File ✺✺✺ https://tinurll.com/2uzotg



    -

    Are you looking for allah hashtags to boost likes and followers on your Instagram post? If Yes, then you have reached at right place because this page has a collection of latest allah hashtags for Instagram, Twitter, Facebook, Tumblr, Youtube, TikTok which are updated in 2023. These allah hashtags peoples are widely using with their instagram post to get more likes and followers, Following are the list of allah hashtags which you can copy and paste with your instagram, facebook, twitter post or even your youtube videos to get more views, likes and followers.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Crayon Physics Deluxe - Release 55 [ADHDerby] Torrent !FULL!.md b/spaces/rorallitri/biomedical-language-models/logs/Crayon Physics Deluxe - Release 55 [ADHDerby] Torrent !FULL!.md deleted file mode 100644 index 1f8b9f16135b4eddf17a8491f1f7c969e6dbab9b..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Crayon Physics Deluxe - Release 55 [ADHDerby] Torrent !FULL!.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Crayon Physics Deluxe - Release 55 [ADHDerby] Torrent


    Download - https://tinurll.com/2uzmCd



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/rueckstiess/english-to-mql/README.md deleted file mode 100644 index 39e3b194d742fd89a67b9f45c984cf9bf2281d21..0000000000000000000000000000000000000000 --- a/spaces/rueckstiess/english-to-mql/README.md +++ /dev/null @@ -1,71 +0,0 @@ ---- -title: English To MQL -emoji: 🌱 -colorFrom: green -colorTo: yellow -sdk: streamlit -sdk_version: 1.20.0 -app_file: app.py -pinned: false -license: mit -python_version: 3.10.10 ---- - -# English to MQL Translation with GPT-3.5 - -This is a basic demo app to show translation from English to MQL with language models like GPT-3.5 (model `text-davinci-003`). - -**Disclaimer**: This app is experimental and for demo purposes only. We expect the results to be frequently incorrect at this early stage. Do not enter sensitive data; the inputs will be -sent to OpenAI's APIs. - -![](./app-screenshot.png) - -## Requirements - -You need Python version 3.7+ installed on your system and an [OpenAI](https://openai.com) API key. - -## Installation - -Clone this repo and change into the sub-directory `english-to-mql/`. - -(Optional) We recommend installing the dependencies in a local/virtual Python environment. You can create one with: - -``` -python -m venv .venv -source .venv/bin/activate -``` - -Now install the `openai` and `streamlit` packages and their dependencies. - -``` -pip install -r requirements.txt -``` - -Additionally, you need to create a Streamlit secrets file and add your OpenAI API key. - -``` -mkdir .streamlit -echo 'OPENAI_API_KEY = "REPLACE_WITH_KEY"' > .streamlit/secrets.toml -``` - -## Running the app - -To start the app, run - -``` -streamlit run app.py -``` - -This will start the Streamlit server and open a tab in your browser showing the application. - -## Usage - -Enter an example document from your collection, which you can get from the `mongosh` shell with `db.collection.findOne()`. 
- -Then enter a question about the dataset in English and press 'Translate'. - -The MQL aggregation pipeline should appear on the right-hand side. - -## Questions - -For questions reach out in our `#labs` Slack channel. diff --git a/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/models/GroundingDINO/fuse_modules.py b/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/models/GroundingDINO/fuse_modules.py deleted file mode 100644 index 2753b3ddee43c7a9fe28d1824db5d786e7e1ad59..0000000000000000000000000000000000000000 --- a/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/models/GroundingDINO/fuse_modules.py +++ /dev/null @@ -1,297 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ - -import torch -import torch.nn as nn -import torch.nn.functional as F -from timm.models.layers import DropPath - - -class FeatureResizer(nn.Module): - """ - This class takes as input a set of embeddings of dimension C1 and outputs a set of - embedding of dimension C2, after a linear transformation, dropout and normalization (LN). 
- """ - - def __init__(self, input_feat_size, output_feat_size, dropout, do_ln=True): - super().__init__() - self.do_ln = do_ln - # Object feature encoding - self.fc = nn.Linear(input_feat_size, output_feat_size, bias=True) - self.layer_norm = nn.LayerNorm(output_feat_size, eps=1e-12) - self.dropout = nn.Dropout(dropout) - - def forward(self, encoder_features): - x = self.fc(encoder_features) - if self.do_ln: - x = self.layer_norm(x) - output = self.dropout(x) - return output - - -def l1norm(X, dim, eps=1e-8): - """L1-normalize columns of X""" - norm = torch.abs(X).sum(dim=dim, keepdim=True) + eps - X = torch.div(X, norm) - return X - - -def l2norm(X, dim, eps=1e-8): - """L2-normalize columns of X""" - norm = torch.pow(X, 2).sum(dim=dim, keepdim=True).sqrt() + eps - X = torch.div(X, norm) - return X - - -def func_attention(query, context, smooth=1, raw_feature_norm="softmax", eps=1e-8): - """ - query: (n_context, queryL, d) - context: (n_context, sourceL, d) - """ - batch_size_q, queryL = query.size(0), query.size(1) - batch_size, sourceL = context.size(0), context.size(1) - - # Get attention - # --> (batch, d, queryL) - queryT = torch.transpose(query, 1, 2) - - # (batch, sourceL, d)(batch, d, queryL) - # --> (batch, sourceL, queryL) - attn = torch.bmm(context, queryT) - if raw_feature_norm == "softmax": - # --> (batch*sourceL, queryL) - attn = attn.view(batch_size * sourceL, queryL) - attn = nn.Softmax()(attn) - # --> (batch, sourceL, queryL) - attn = attn.view(batch_size, sourceL, queryL) - elif raw_feature_norm == "l2norm": - attn = l2norm(attn, 2) - elif raw_feature_norm == "clipped_l2norm": - attn = nn.LeakyReLU(0.1)(attn) - attn = l2norm(attn, 2) - else: - raise ValueError("unknown first norm type:", raw_feature_norm) - # --> (batch, queryL, sourceL) - attn = torch.transpose(attn, 1, 2).contiguous() - # --> (batch*queryL, sourceL) - attn = attn.view(batch_size * queryL, sourceL) - attn = nn.Softmax()(attn * smooth) - # --> (batch, queryL, sourceL) - attn = 
attn.view(batch_size, queryL, sourceL) - # --> (batch, sourceL, queryL) - attnT = torch.transpose(attn, 1, 2).contiguous() - - # --> (batch, d, sourceL) - contextT = torch.transpose(context, 1, 2) - # (batch x d x sourceL)(batch x sourceL x queryL) - # --> (batch, d, queryL) - weightedContext = torch.bmm(contextT, attnT) - # --> (batch, queryL, d) - weightedContext = torch.transpose(weightedContext, 1, 2) - - return weightedContext, attnT - - -class BiMultiHeadAttention(nn.Module): - def __init__(self, v_dim, l_dim, embed_dim, num_heads, dropout=0.1, cfg=None): - super(BiMultiHeadAttention, self).__init__() - - self.embed_dim = embed_dim - self.num_heads = num_heads - self.head_dim = embed_dim // num_heads - self.v_dim = v_dim - self.l_dim = l_dim - - assert ( - self.head_dim * self.num_heads == self.embed_dim - ), f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`: {self.num_heads})." - self.scale = self.head_dim ** (-0.5) - self.dropout = dropout - - self.v_proj = nn.Linear(self.v_dim, self.embed_dim) - self.l_proj = nn.Linear(self.l_dim, self.embed_dim) - self.values_v_proj = nn.Linear(self.v_dim, self.embed_dim) - self.values_l_proj = nn.Linear(self.l_dim, self.embed_dim) - - self.out_v_proj = nn.Linear(self.embed_dim, self.v_dim) - self.out_l_proj = nn.Linear(self.embed_dim, self.l_dim) - - self.stable_softmax_2d = True - self.clamp_min_for_underflow = True - self.clamp_max_for_overflow = True - - self._reset_parameters() - - def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): - return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() - - def _reset_parameters(self): - nn.init.xavier_uniform_(self.v_proj.weight) - self.v_proj.bias.data.fill_(0) - nn.init.xavier_uniform_(self.l_proj.weight) - self.l_proj.bias.data.fill_(0) - nn.init.xavier_uniform_(self.values_v_proj.weight) - self.values_v_proj.bias.data.fill_(0) - nn.init.xavier_uniform_(self.values_l_proj.weight) - 
self.values_l_proj.bias.data.fill_(0) - nn.init.xavier_uniform_(self.out_v_proj.weight) - self.out_v_proj.bias.data.fill_(0) - nn.init.xavier_uniform_(self.out_l_proj.weight) - self.out_l_proj.bias.data.fill_(0) - - def forward(self, v, l, attention_mask_v=None, attention_mask_l=None): - """_summary_ - - Args: - v (_type_): bs, n_img, dim - l (_type_): bs, n_text, dim - attention_mask_v (_type_, optional): _description_. bs, n_img - attention_mask_l (_type_, optional): _description_. bs, n_text - - Returns: - _type_: _description_ - """ - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - bsz, tgt_len, _ = v.size() - - query_states = self.v_proj(v) * self.scale - key_states = self._shape(self.l_proj(l), -1, bsz) - value_v_states = self._shape(self.values_v_proj(v), -1, bsz) - value_l_states = self._shape(self.values_l_proj(l), -1, bsz) - - proj_shape = (bsz * self.num_heads, -1, self.head_dim) - query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape) - key_states = key_states.view(*proj_shape) - value_v_states = value_v_states.view(*proj_shape) - value_l_states = value_l_states.view(*proj_shape) - - src_len = key_states.size(1) - attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) # bs*nhead, nimg, ntxt - - if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): - raise ValueError( - f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is {attn_weights.size()}" - ) - - if self.stable_softmax_2d: - attn_weights = attn_weights - attn_weights.max() - - if self.clamp_min_for_underflow: - attn_weights = torch.clamp( - attn_weights, min=-50000 - ) # Do not increase -50000, data type half has quite limited range - if self.clamp_max_for_overflow: - attn_weights = torch.clamp( - attn_weights, max=50000 - ) # Do not increase 50000, data type half has quite limited range - - attn_weights_T = attn_weights.transpose(1, 2) - attn_weights_l = attn_weights_T - 
torch.max(attn_weights_T, dim=-1, keepdim=True)[0] - if self.clamp_min_for_underflow: - attn_weights_l = torch.clamp( - attn_weights_l, min=-50000 - ) # Do not increase -50000, data type half has quite limited range - if self.clamp_max_for_overflow: - attn_weights_l = torch.clamp( - attn_weights_l, max=50000 - ) # Do not increase 50000, data type half has quite limited range - - # mask vison for language - if attention_mask_v is not None: - attention_mask_v = ( - attention_mask_v[:, None, None, :].repeat(1, self.num_heads, 1, 1).flatten(0, 1) - ) - attn_weights_l.masked_fill_(attention_mask_v, float("-inf")) - - attn_weights_l = attn_weights_l.softmax(dim=-1) - - # mask language for vision - if attention_mask_l is not None: - attention_mask_l = ( - attention_mask_l[:, None, None, :].repeat(1, self.num_heads, 1, 1).flatten(0, 1) - ) - attn_weights.masked_fill_(attention_mask_l, float("-inf")) - attn_weights_v = attn_weights.softmax(dim=-1) - - attn_probs_v = F.dropout(attn_weights_v, p=self.dropout, training=self.training) - attn_probs_l = F.dropout(attn_weights_l, p=self.dropout, training=self.training) - - attn_output_v = torch.bmm(attn_probs_v, value_l_states) - attn_output_l = torch.bmm(attn_probs_l, value_v_states) - - if attn_output_v.size() != (bsz * self.num_heads, tgt_len, self.head_dim): - raise ValueError( - f"`attn_output_v` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is {attn_output_v.size()}" - ) - - if attn_output_l.size() != (bsz * self.num_heads, src_len, self.head_dim): - raise ValueError( - f"`attn_output_l` should be of size {(bsz, self.num_heads, src_len, self.head_dim)}, but is {attn_output_l.size()}" - ) - - attn_output_v = attn_output_v.view(bsz, self.num_heads, tgt_len, self.head_dim) - attn_output_v = attn_output_v.transpose(1, 2) - attn_output_v = attn_output_v.reshape(bsz, tgt_len, self.embed_dim) - - attn_output_l = attn_output_l.view(bsz, self.num_heads, src_len, self.head_dim) - attn_output_l = 
attn_output_l.transpose(1, 2) - attn_output_l = attn_output_l.reshape(bsz, src_len, self.embed_dim) - - attn_output_v = self.out_v_proj(attn_output_v) - attn_output_l = self.out_l_proj(attn_output_l) - - return attn_output_v, attn_output_l - - -# Bi-Direction MHA (text->image, image->text) -class BiAttentionBlock(nn.Module): - def __init__( - self, - v_dim, - l_dim, - embed_dim, - num_heads, - dropout=0.1, - drop_path=0.0, - init_values=1e-4, - cfg=None, - ): - """ - Inputs: - embed_dim - Dimensionality of input and attention feature vectors - hidden_dim - Dimensionality of hidden layer in feed-forward network - (usually 2-4x larger than embed_dim) - num_heads - Number of heads to use in the Multi-Head Attention block - dropout - Amount of dropout to apply in the feed-forward network - """ - super(BiAttentionBlock, self).__init__() - - # pre layer norm - self.layer_norm_v = nn.LayerNorm(v_dim) - self.layer_norm_l = nn.LayerNorm(l_dim) - self.attn = BiMultiHeadAttention( - v_dim=v_dim, l_dim=l_dim, embed_dim=embed_dim, num_heads=num_heads, dropout=dropout - ) - - # add layer scale for training stability - self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity() - self.gamma_v = nn.Parameter(init_values * torch.ones((v_dim)), requires_grad=True) - self.gamma_l = nn.Parameter(init_values * torch.ones((l_dim)), requires_grad=True) - - def forward(self, v, l, attention_mask_v=None, attention_mask_l=None): - v = self.layer_norm_v(v) - l = self.layer_norm_l(l) - delta_v, delta_l = self.attn( - v, l, attention_mask_v=attention_mask_v, attention_mask_l=attention_mask_l - ) - # v, l = v + delta_v, l + delta_l - v = v + self.drop_path(self.gamma_v * delta_v) - l = l + self.drop_path(self.gamma_l * delta_l) - return v, l - - # def forward(self, v:List[torch.Tensor], l, attention_mask_v=None, attention_mask_l=None) diff --git a/spaces/samcaicn/bingai/next.config.js b/spaces/samcaicn/bingai/next.config.js deleted file mode 100644 index 
d5cfc81958a76bb32daad41c8c8808c1f8b34884..0000000000000000000000000000000000000000 --- a/spaces/samcaicn/bingai/next.config.js +++ /dev/null @@ -1,36 +0,0 @@ -/** @type {import('next').NextConfig} */ -const nextConfig = { - // output: 'export', - // assetPrefix: '.', - webpack: (config, { isServer }) => { - if (!isServer) { - config.resolve = { - ...config.resolve, - fallback: { - http: false, - https: false, - stream: false, - // fixes proxy-agent dependencies - net: false, - dns: false, - tls: false, - assert: false, - // fixes next-i18next dependencies - path: false, - fs: false, - // fixes mapbox dependencies - events: false, - // fixes sentry dependencies - process: false - } - }; - } - config.module.exprContextCritical = false; - - return config; - }, -} - -module.exports = (...args) => { - return nextConfig -} diff --git a/spaces/sayakpaul/raindrop-deraining-maxim/maxim/blocks/block_gating.py b/spaces/sayakpaul/raindrop-deraining-maxim/maxim/blocks/block_gating.py deleted file mode 100644 index 0d06af50448f7a15a39c84100be1a99710b24c32..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/raindrop-deraining-maxim/maxim/blocks/block_gating.py +++ /dev/null @@ -1,67 +0,0 @@ -import tensorflow as tf -from tensorflow.keras import backend as K -from tensorflow.keras import layers - -from ..layers import BlockImages, SwapAxes, UnblockImages - - -def BlockGatingUnit(use_bias: bool = True, name: str = "block_gating_unit"): - """A SpatialGatingUnit as defined in the gMLP paper. - - The 'spatial' dim is defined as the **second last**. - If applied on other dims, you should swapaxes first. 
- """ - - def apply(x): - u, v = tf.split(x, 2, axis=-1) - v = layers.LayerNormalization( - epsilon=1e-06, name=f"{name}_intermediate_layernorm" - )(v) - n = K.int_shape(x)[-2] # get spatial dim - v = SwapAxes()(v, -1, -2) - v = layers.Dense(n, use_bias=use_bias, name=f"{name}_Dense_0")(v) - v = SwapAxes()(v, -1, -2) - return u * (v + 1.0) - - return apply - - -def BlockGmlpLayer( - block_size, - use_bias: bool = True, - factor: int = 2, - dropout_rate: float = 0.0, - name: str = "block_gmlp", -): - """Block gMLP layer that performs local mixing of tokens.""" - - def apply(x): - n, h, w, num_channels = ( - K.int_shape(x)[0], - K.int_shape(x)[1], - K.int_shape(x)[2], - K.int_shape(x)[3], - ) - fh, fw = block_size - gh, gw = h // fh, w // fw - x = BlockImages()(x, patch_size=(fh, fw)) - # MLP2: Local (block) mixing part, provides within-block communication. - y = layers.LayerNormalization(epsilon=1e-06, name=f"{name}_LayerNorm")(x) - y = layers.Dense( - num_channels * factor, - use_bias=use_bias, - name=f"{name}_in_project", - )(y) - y = tf.nn.gelu(y, approximate=True) - y = BlockGatingUnit(use_bias=use_bias, name=f"{name}_BlockGatingUnit")(y) - y = layers.Dense( - num_channels, - use_bias=use_bias, - name=f"{name}_out_project", - )(y) - y = layers.Dropout(dropout_rate)(y) - x = x + y - x = UnblockImages()(x, grid_size=(gh, gw), patch_size=(fh, fw)) - return x - - return apply diff --git a/spaces/sciling/Face_and_Plate_License_Blur/utils/activations.py b/spaces/sciling/Face_and_Plate_License_Blur/utils/activations.py deleted file mode 100644 index aa3ddf071d28daa3061b6d796cb60cd7a88f557c..0000000000000000000000000000000000000000 --- a/spaces/sciling/Face_and_Plate_License_Blur/utils/activations.py +++ /dev/null @@ -1,72 +0,0 @@ -# Activation functions - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -# SiLU https://arxiv.org/pdf/1606.08415.pdf ---------------------------------------------------------------------------- -class 
SiLU(nn.Module): # export-friendly version of nn.SiLU() - @staticmethod - def forward(x): - return x * torch.sigmoid(x) - - -class Hardswish(nn.Module): # export-friendly version of nn.Hardswish() - @staticmethod - def forward(x): - # return x * F.hardsigmoid(x) # for torchscript and CoreML - return x * F.hardtanh(x + 3, 0., 6.) / 6. # for torchscript, CoreML and ONNX - - -class MemoryEfficientSwish(nn.Module): - class F(torch.autograd.Function): - @staticmethod - def forward(ctx, x): - ctx.save_for_backward(x) - return x * torch.sigmoid(x) - - @staticmethod - def backward(ctx, grad_output): - x = ctx.saved_tensors[0] - sx = torch.sigmoid(x) - return grad_output * (sx * (1 + x * (1 - sx))) - - def forward(self, x): - return self.F.apply(x) - - -# Mish https://github.com/digantamisra98/Mish -------------------------------------------------------------------------- -class Mish(nn.Module): - @staticmethod - def forward(x): - return x * F.softplus(x).tanh() - - -class MemoryEfficientMish(nn.Module): - class F(torch.autograd.Function): - @staticmethod - def forward(ctx, x): - ctx.save_for_backward(x) - return x.mul(torch.tanh(F.softplus(x))) # x * tanh(ln(1 + exp(x))) - - @staticmethod - def backward(ctx, grad_output): - x = ctx.saved_tensors[0] - sx = torch.sigmoid(x) - fx = F.softplus(x).tanh() - return grad_output * (fx + x * sx * (1 - fx * fx)) - - def forward(self, x): - return self.F.apply(x) - - -# FReLU https://arxiv.org/abs/2007.11824 ------------------------------------------------------------------------------- -class FReLU(nn.Module): - def __init__(self, c1, k=3): # ch_in, kernel - super().__init__() - self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1, bias=False) - self.bn = nn.BatchNorm2d(c1) - - def forward(self, x): - return torch.max(x, self.bn(self.conv(x))) diff --git a/spaces/sdhsdhk/bingo111/src/lib/bots/bing/index.ts b/spaces/sdhsdhk/bingo111/src/lib/bots/bing/index.ts deleted file mode 100644 index 
2c4afae01a345b8415935228566cb30d695e768d..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingo111/src/lib/bots/bing/index.ts +++ /dev/null @@ -1,421 +0,0 @@ -import { fetch, WebSocket, debug } from '@/lib/isomorphic' -import WebSocketAsPromised from 'websocket-as-promised' -import { - SendMessageParams, - BingConversationStyle, - ConversationResponse, - ChatResponseMessage, - ConversationInfo, - InvocationEventType, - ChatError, - ErrorCode, - ChatUpdateCompleteResponse, - ImageInfo, - KBlobResponse -} from './types' - -import { convertMessageToMarkdown, websocketUtils, streamAsyncIterable } from './utils' -import { WatchDog, createChunkDecoder } from '@/lib/utils' - -type Params = SendMessageParams<{ bingConversationStyle: BingConversationStyle }> - -const OPTIONS_SETS = [ - 'nlu_direct_response_filter', - 'deepleo', - 'disable_emoji_spoken_text', - 'responsible_ai_policy_235', - 'enablemm', - 'iycapbing', - 'iyxapbing', - 'objopinion', - 'rweasgv2', - 'dagslnv1', - 'dv3sugg', - 'autosave', - 'iyoloxap', - 'iyoloneutral', - 'clgalileo', - 'gencontentv3', -] - -export class BingWebBot { - protected conversationContext?: ConversationInfo - protected cookie: string - protected ua: string - protected endpoint = '' - private lastText = '' - private asyncTasks: Array> = [] - - constructor(opts: { - cookie: string - ua: string - bingConversationStyle?: BingConversationStyle - conversationContext?: ConversationInfo - }) { - const { cookie, ua, conversationContext } = opts - this.cookie = cookie?.includes(';') ? 
cookie : `_EDGE_V=1; _U=${cookie}` - this.ua = ua - this.conversationContext = conversationContext - } - - static buildChatRequest(conversation: ConversationInfo) { - const optionsSets = OPTIONS_SETS - if (conversation.conversationStyle === BingConversationStyle.Precise) { - optionsSets.push('h3precise') - } else if (conversation.conversationStyle === BingConversationStyle.Creative) { - optionsSets.push('h3imaginative') - } - return { - arguments: [ - { - source: 'cib', - optionsSets, - allowedMessageTypes: [ - 'Chat', - 'InternalSearchQuery', - 'Disengaged', - 'InternalLoaderMessage', - 'SemanticSerp', - 'GenerateContentQuery', - 'SearchQuery', - ], - sliceIds: [ - 'winmuid1tf', - 'anssupfor_c', - 'imgchatgptv2', - 'tts2cf', - 'contansperf', - 'mlchatpc8500w', - 'mlchatpc2', - 'ctrlworkpay', - 'winshortmsgtf', - 'cibctrl', - 'sydtransctrl', - 'sydconfigoptc', - '0705trt4', - '517opinion', - '628ajcopus0', - '330uaugs0', - '529rwea', - '0626snptrcs0', - '424dagslnv1', - ], - isStartOfSession: conversation.invocationId === 0, - message: { - author: 'user', - inputMethod: 'Keyboard', - text: conversation.prompt, - imageUrl: conversation.imageUrl, - messageType: 'Chat', - }, - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - participant: { id: conversation.clientId }, - }, - ], - invocationId: conversation.invocationId.toString(), - target: 'chat', - type: InvocationEventType.StreamInvocation, - } - } - - async createConversation(): Promise { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - - let resp: ConversationResponse | undefined - try { - const response = await fetch(this.endpoint + '/api/create', { method: 'POST', headers, redirect: 'error', mode: 'cors', credentials: 'include' }) - if (response.status === 404) { - throw new 
ChatError('Not Found', ErrorCode.NOTFOUND_ERROR) - } - resp = await response.json() as ConversationResponse - } catch (err) { - console.error('create conversation error', err) - } - - if (!resp?.result) { - throw new ChatError('Invalid response', ErrorCode.UNKOWN_ERROR) - } - - const { value, message } = resp.result || {} - if (value !== 'Success') { - const errorMsg = `${value}: ${message}` - if (value === 'UnauthorizedRequest') { - throw new ChatError(errorMsg, ErrorCode.BING_UNAUTHORIZED) - } - if (value === 'Forbidden') { - throw new ChatError(errorMsg, ErrorCode.BING_FORBIDDEN) - } - throw new ChatError(errorMsg, ErrorCode.UNKOWN_ERROR) - } - return resp - } - - private async createContext(conversationStyle: BingConversationStyle) { - if (!this.conversationContext) { - const conversation = await this.createConversation() - this.conversationContext = { - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - clientId: conversation.clientId, - invocationId: 0, - conversationStyle, - prompt: '', - } - } - return this.conversationContext - } - - async sendMessage(params: Params) { - try { - await this.createContext(params.options.bingConversationStyle) - Object.assign(this.conversationContext!, { prompt: params.prompt, imageUrl: params.imageUrl }) - return this.sydneyProxy(params) - } catch (error) { - params.onEvent({ - type: 'ERROR', - error: error instanceof ChatError ? error : new ChatError('Catch Error', ErrorCode.UNKOWN_ERROR), - }) - } - } - - private async sydneyProxy(params: Params) { - const abortController = new AbortController() - const response = await fetch(this.endpoint + '/api/sydney', { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - }, - signal: abortController.signal, - body: JSON.stringify(this.conversationContext!) 
- }) - if (response.status !== 200) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Unknown error', - ErrorCode.UNKOWN_ERROR, - ), - }) - } - params.signal?.addEventListener('abort', () => { - abortController.abort() - }) - - const textDecoder = createChunkDecoder() - for await (const chunk of streamAsyncIterable(response.body!)) { - this.parseEvents(params, websocketUtils.unpackMessage(textDecoder(chunk))) - } - } - - async sendWs() { - const wsConfig: ConstructorParameters[1] = { - packMessage: websocketUtils.packMessage, - unpackMessage: websocketUtils.unpackMessage, - createWebSocket: (url) => new WebSocket(url, { - headers: { - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'User-Agent': this.ua, - pragma: 'no-cache', - cookie: this.cookie, - } - }) - } - const wsp = new WebSocketAsPromised('wss://sydney.bing.com/sydney/ChatHub', wsConfig) - - wsp.open().then(() => { - wsp.sendPacked({ protocol: 'json', version: 1 }) - wsp.sendPacked({ type: 6 }) - wsp.sendPacked(BingWebBot.buildChatRequest(this.conversationContext!)) - }) - - return wsp - } - - private async useWs(params: Params) { - const wsp = await this.sendWs() - const watchDog = new WatchDog() - wsp.onUnpackedMessage.addListener((events) => { - watchDog.watch(() => { - wsp.sendPacked({ type: 6 }) - }) - this.parseEvents(params, events) - }) - - wsp.onClose.addListener(() => { - watchDog.reset() - params.onEvent({ type: 'DONE' }) - wsp.removeAllListeners() - }) - - params.signal?.addEventListener('abort', () => { - wsp.removeAllListeners() - wsp.close() - }) - } - - private async createImage(prompt: string, id: string) { - try { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - const query = new URLSearchParams({ - prompt, - id - }) - const response = await fetch(this.endpoint + '/api/image?' 
+ query.toString(), - { - method: 'POST', - headers, - mode: 'cors', - credentials: 'include' - }) - .then(res => res.text()) - if (response) { - this.lastText += '\n' + response - } - } catch (err) { - console.error('Create Image Error', err) - } - } - - private buildKnowledgeApiPayload(imageUrl: string, conversationStyle: BingConversationStyle) { - const imageInfo: ImageInfo = {} - let imageBase64: string | undefined = undefined - const knowledgeRequest = { - imageInfo, - knowledgeRequest: { - invokedSkills: [ - 'ImageById' - ], - subscriptionId: 'Bing.Chat.Multimodal', - invokedSkillsRequestData: { - enableFaceBlur: true - }, - convoData: { - convoid: this.conversationContext?.conversationId, - convotone: conversationStyle, - } - }, - } - - if (imageUrl.startsWith('data:image/')) { - imageBase64 = imageUrl.replace('data:image/', ''); - const partIndex = imageBase64.indexOf(',') - if (partIndex) { - imageBase64 = imageBase64.substring(partIndex + 1) - } - } else { - imageInfo.url = imageUrl - } - return { knowledgeRequest, imageBase64 } - } - - async uploadImage(imageUrl: string, conversationStyle: BingConversationStyle = BingConversationStyle.Creative): Promise { - if (!imageUrl) { - return - } - await this.createContext(conversationStyle) - const payload = this.buildKnowledgeApiPayload(imageUrl, conversationStyle) - - const response = await fetch(this.endpoint + '/api/kblob', - { - headers: { - 'Content-Type': 'application/json', - }, - method: 'POST', - mode: 'cors', - credentials: 'include', - body: JSON.stringify(payload), - }) - .then(res => res.json()) - .catch(e => { - console.log('Error', e) - }) - return response - } - - private async generateContent(message: ChatResponseMessage) { - if (message.contentType === 'IMAGE') { - this.asyncTasks.push(this.createImage(message.text, message.messageId)) - } - } - - private async parseEvents(params: Params, events: any) { - const conversation = this.conversationContext! 
- - events?.forEach(async (event: ChatUpdateCompleteResponse) => { - debug('bing event', event) - if (event.type === 3) { - await Promise.all(this.asyncTasks) - this.asyncTasks = [] - params.onEvent({ type: 'UPDATE_ANSWER', data: { text: this.lastText } }) - params.onEvent({ type: 'DONE' }) - conversation.invocationId = parseInt(event.invocationId, 10) + 1 - } else if (event.type === 1) { - const messages = event.arguments[0].messages - if (messages) { - const text = convertMessageToMarkdown(messages[0]) - this.lastText = text - params.onEvent({ type: 'UPDATE_ANSWER', data: { text, spokenText: messages[0].text, throttling: event.arguments[0].throttling } }) - } - } else if (event.type === 2) { - const messages = event.item.messages as ChatResponseMessage[] | undefined - if (!messages) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - event.item.result.error || 'Unknown error', - event.item.result.value === 'Throttled' ? ErrorCode.THROTTLE_LIMIT - : event.item.result.value === 'CaptchaChallenge' ? (this.conversationContext?.conversationId?.includes('BingProdUnAuthenticatedUsers') ? 
ErrorCode.BING_UNAUTHORIZED : ErrorCode.BING_CAPTCHA) - : ErrorCode.UNKOWN_ERROR - ), - }) - return - } - const limited = messages.some((message) => - message.contentOrigin === 'TurnLimiter' - || message.messageType === 'Disengaged' - ) - if (limited) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Sorry, you have reached chat limit in this conversation.', - ErrorCode.CONVERSATION_LIMIT, - ), - }) - return - } - - const lastMessage = event.item.messages.at(-1) as ChatResponseMessage - const specialMessage = event.item.messages.find(message => message.author === 'bot' && message.contentType === 'IMAGE') - if (specialMessage) { - this.generateContent(specialMessage) - } - - if (lastMessage) { - const text = convertMessageToMarkdown(lastMessage) - this.lastText = text - params.onEvent({ - type: 'UPDATE_ANSWER', - data: { text, throttling: event.item.throttling, suggestedResponses: lastMessage.suggestedResponses, sourceAttributions: lastMessage.sourceAttributions }, - }) - } - } - }) - } - - resetConversation() { - this.conversationContext = undefined - } -} diff --git a/spaces/serdaryildiz/TRCaptionNet/app.py b/spaces/serdaryildiz/TRCaptionNet/app.py deleted file mode 100644 index 81382b88470804908b6830593023c9b25c990f0c..0000000000000000000000000000000000000000 --- a/spaces/serdaryildiz/TRCaptionNet/app.py +++ /dev/null @@ -1,63 +0,0 @@ -import os.path - -import gdown -import gradio as gr -import torch - -from Model import TRCaptionNet, clip_transform - -model_ckpt = "./checkpoints/TRCaptionNet_L14_berturk.pth" -if not os.path.exists(model_ckpt): - os.makedirs("./checkpoints/", exist_ok=True) - url = 'https://drive.google.com/u/0/uc?id=14Ll1PIQhsMSypHT34Rt9voz_zaAf4Xh9&export=download&confirm=t&uuid=9b4bf589-d438-4b4f-a37c-fc34b0a63a5d&at=AB6BwCAY8xK0EZiPGv2YT7isL8pG:1697575816291' - gdown.download(url, model_ckpt, quiet=False) - -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -# device = "cpu" - -preprocess = 
clip_transform(224) -model = TRCaptionNet({ - "max_length": 35, - "clip": "ViT-L/14", - "bert": "dbmdz/bert-base-turkish-cased", - "proj": True, - "proj_num_head": 16 -}) -model.load_state_dict(torch.load(model_ckpt, map_location=device)["model"], strict=True) -model = model.to(device) -model.eval() - - -def inference(raw_image, min_length, repetition_penalty): - batch = preprocess(raw_image).unsqueeze(0).to(device) - caption = model.generate(batch, min_length=min_length, repetition_penalty=repetition_penalty)[0] - return caption - - -inputs = [gr.Image(type='pil', interactive=True,), - gr.Slider(minimum=6, maximum=22, value=11, label="MINIMUM CAPTION LENGTH", step=1), - gr.Slider(minimum=1, maximum=2, value=1.6, label="REPETITION PENALTY")] -outputs = gr.components.Textbox(label="Caption") -title = "TRCaptionNet" -paper_link = "" -github_link = "https://github.com/serdaryildiz/TRCaptionNet" -description = f"

    TRCaptionNet : A novel and accurate deep Turkish image captioning model with vision transformer based image encoders and deep linguistic text decoders" -examples = [ - ["images/test1.jpg"], - ["images/test2.jpg"], - ["images/test3.jpg"], - ["images/test4.jpg"] -] -article = f"

    Paper | Github Repo

    " -css = ".output-image, .input-image, .image-preview {height: 600px !important}" - -iface = gr.Interface(fn=inference, - inputs=inputs, - outputs=outputs, - title=title, - description=description, - examples=examples, - article=article, - css=css, - enable_queue=True) -iface.launch() diff --git a/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/mapping.py b/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/mapping.py deleted file mode 100644 index 6041f7c769d4e32f34af2e3528bfba9128cc521f..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/mapping.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Specialized mapping functions.""" - -import functools - -from typing import Any, Callable, Optional, Sequence, Union - -import haiku as hk -import jax -import jax.numpy as jnp - - -PYTREE = Any -PYTREE_JAX_ARRAY = Any - -partial = functools.partial -PROXY = object() - - -def _maybe_slice(array, i, slice_size, axis): - if axis is PROXY: - return array - else: - return jax.lax.dynamic_slice_in_dim( - array, i, slice_size=slice_size, axis=axis) - - -def _maybe_get_size(array, axis): - if axis == PROXY: - return -1 - else: - return array.shape[axis] - - -def _expand_axes(axes, values, name='sharded_apply'): - values_tree_def = jax.tree_flatten(values)[1] - flat_axes = jax.api_util.flatten_axes(name, values_tree_def, axes) - # Replace None's with PROXY - flat_axes = [PROXY if x is None else x for x in flat_axes] - return jax.tree_unflatten(values_tree_def, flat_axes) - - -def sharded_map( - fun: Callable[..., PYTREE_JAX_ARRAY], - shard_size: Union[int, None] = 1, - in_axes: Union[int, PYTREE] = 0, - out_axes: Union[int, PYTREE] = 0) -> Callable[..., PYTREE_JAX_ARRAY]: - """Sharded vmap. - - Maps `fun` over axes, in a way similar to vmap, but does so in shards of - `shard_size`. This allows a smooth trade-off between memory usage - (as in a plain map) vs higher throughput (as in a vmap). - - Args: - fun: Function to apply smap transform to. - shard_size: Integer denoting shard size. - in_axes: Either integer or pytree describing which axis to map over for each - input to `fun`, None denotes broadcasting. - out_axes: integer or pytree denoting to what axis in the output the mapped - over axis maps. - - Returns: - function with smap applied. 
- """ - vmapped_fun = hk.vmap(fun, in_axes, out_axes) - return sharded_apply(vmapped_fun, shard_size, in_axes, out_axes) - - -def sharded_apply( - fun: Callable[..., PYTREE_JAX_ARRAY], # pylint: disable=g-bare-generic - shard_size: Union[int, None] = 1, - in_axes: Union[int, PYTREE] = 0, - out_axes: Union[int, PYTREE] = 0, - new_out_axes: bool = False) -> Callable[..., PYTREE_JAX_ARRAY]: - """Sharded apply. - - Applies `fun` over shards to axes, in a way similar to vmap, - but does so in shards of `shard_size`. Shards are stacked after. - This allows a smooth trade-off between - memory usage (as in a plain map) vs higher throughput (as in a vmap). - - Args: - fun: Function to apply smap transform to. - shard_size: Integer denoting shard size. - in_axes: Either integer or pytree describing which axis to map over for each - input to `fun`, None denotes broadcasting. - out_axes: integer or pytree denoting to what axis in the output the mapped - over axis maps. - new_out_axes: whether to stack outputs on new axes. This assumes that the - output sizes for each shard (including the possible remainder shard) are - the same. - - Returns: - function with smap applied. - """ - docstr = ('Mapped version of {fun}. 
Takes similar arguments to {fun} ' - 'but with additional array axes over which {fun} is mapped.') - if new_out_axes: - raise NotImplementedError('New output axes not yet implemented.') - - # shard size None denotes no sharding - if shard_size is None: - return fun - - @jax.util.wraps(fun, docstr=docstr) - def mapped_fn(*args): - # Expand in axes and Determine Loop range - in_axes_ = _expand_axes(in_axes, args) - - in_sizes = jax.tree_util.tree_map(_maybe_get_size, args, in_axes_) - flat_sizes = jax.tree_flatten(in_sizes)[0] - in_size = max(flat_sizes) - assert all(i in {in_size, -1} for i in flat_sizes) - - num_extra_shards = (in_size - 1) // shard_size - - # Fix Up if necessary - last_shard_size = in_size % shard_size - last_shard_size = shard_size if last_shard_size == 0 else last_shard_size - - def apply_fun_to_slice(slice_start, slice_size): - input_slice = jax.tree_util.tree_map( - lambda array, axis: _maybe_slice(array, slice_start, slice_size, axis - ), args, in_axes_) - return fun(*input_slice) - - remainder_shape_dtype = hk.eval_shape( - partial(apply_fun_to_slice, 0, last_shard_size)) - out_dtypes = jax.tree_map(lambda x: x.dtype, remainder_shape_dtype) - out_shapes = jax.tree_map(lambda x: x.shape, remainder_shape_dtype) - out_axes_ = _expand_axes(out_axes, remainder_shape_dtype) - - if num_extra_shards > 0: - regular_shard_shape_dtype = hk.eval_shape( - partial(apply_fun_to_slice, 0, shard_size)) - shard_shapes = jax.tree_map(lambda x: x.shape, regular_shard_shape_dtype) - - def make_output_shape(axis, shard_shape, remainder_shape): - return shard_shape[:axis] + ( - shard_shape[axis] * num_extra_shards + - remainder_shape[axis],) + shard_shape[axis + 1:] - - out_shapes = jax.tree_util.tree_map(make_output_shape, out_axes_, shard_shapes, - out_shapes) - - # Calls dynamic Update slice with different argument order - # This is here since tree_multimap only works with positional arguments - def dynamic_update_slice_in_dim(full_array, update, axis, i): - 
return jax.lax.dynamic_update_slice_in_dim(full_array, update, i, axis) - - def compute_shard(outputs, slice_start, slice_size): - slice_out = apply_fun_to_slice(slice_start, slice_size) - update_slice = partial( - dynamic_update_slice_in_dim, i=slice_start) - return jax.tree_util.tree_map(update_slice, outputs, slice_out, out_axes_) - - def scan_iteration(outputs, i): - new_outputs = compute_shard(outputs, i, shard_size) - return new_outputs, () - - slice_starts = jnp.arange(0, in_size - shard_size + 1, shard_size) - - def allocate_buffer(dtype, shape): - return jnp.zeros(shape, dtype=dtype) - - outputs = jax.tree_util.tree_map(allocate_buffer, out_dtypes, out_shapes) - - if slice_starts.shape[0] > 0: - outputs, _ = hk.scan(scan_iteration, outputs, slice_starts) - - if last_shard_size != shard_size: - remainder_start = in_size - last_shard_size - outputs = compute_shard(outputs, remainder_start, last_shard_size) - - return outputs - - return mapped_fn - - -def inference_subbatch( - module: Callable[..., PYTREE_JAX_ARRAY], - subbatch_size: int, - batched_args: Sequence[PYTREE_JAX_ARRAY], - nonbatched_args: Sequence[PYTREE_JAX_ARRAY], - low_memory: bool = True, - input_subbatch_dim: int = 0, - output_subbatch_dim: Optional[int] = None) -> PYTREE_JAX_ARRAY: - """Run through subbatches (like batch apply but with split and concat).""" - assert len(batched_args) > 0 # pylint: disable=g-explicit-length-test - - if not low_memory: - args = list(batched_args) + list(nonbatched_args) - return module(*args) - - if output_subbatch_dim is None: - output_subbatch_dim = input_subbatch_dim - - def run_module(*batched_args): - args = list(batched_args) + list(nonbatched_args) - return module(*args) - sharded_module = sharded_apply(run_module, - shard_size=subbatch_size, - in_axes=input_subbatch_dim, - out_axes=output_subbatch_dim) - return sharded_module(*batched_args) diff --git a/spaces/simonduerr/ProteinMPNNESM/app.py b/spaces/simonduerr/ProteinMPNNESM/app.py deleted file mode 
100644 index f8ecd57b89b4b6d659f18f7501cfce1c34bf0da3..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/ProteinMPNNESM/app.py +++ /dev/null @@ -1,1333 +0,0 @@ -import copy -import re - -import os.path - -import torch -import sys - -import gradio as gr -import pandas as pd -import numpy as np -import plotly.express as px -import matplotlib.pyplot as plt -import plotly.graph_objects as go - -import tempfile -import requests -from moleculekit.molecule import Molecule - -sys.path.append("/home/user/app/ProteinMPNN/vanilla_proteinmpnn") -# this is for local -sys.path.append(os.path.join(os.getcwd(), "ProteinMPNN/vanilla_proteinmpnn")) - - -def make_tied_positions_for_homomers(pdb_dict_list): - my_dict = {} - for result in pdb_dict_list: - all_chain_list = sorted( - [item[-1:] for item in list(result) if item[:9] == "seq_chain"] - ) # A, B, C, ... - tied_positions_list = [] - chain_length = len(result[f"seq_chain_{all_chain_list[0]}"]) - for i in range(1, chain_length + 1): - temp_dict = {} - for j, chain in enumerate(all_chain_list): - temp_dict[chain] = [i] # needs to be a list - tied_positions_list.append(temp_dict) - my_dict[result["name"]] = tied_positions_list - return my_dict - - -def align_structures(pdb1, pdb2, index): - """Take two structure and superimpose pdb1 on pdb2""" - import Bio.PDB - import subprocess - - pdb_parser = Bio.PDB.PDBParser(QUIET=True) - # Get the structures - ref_structure = pdb_parser.get_structure("ref", pdb1) - sample_structure = pdb_parser.get_structure("sample", pdb2) - - sample_structure_ca = [ - atom for atom in sample_structure.get_atoms() if atom.name == "CA" - ] - plddts = [atom.get_bfactor() for atom in sample_structure_ca] - - aligner = Bio.PDB.CEAligner() - aligner.set_reference(ref_structure) - aligner.align(sample_structure) - - io = Bio.PDB.PDBIO() - io.set_structure(ref_structure) - hash = os.path.splitext(os.path.basename(pdb2))[0] - io.save(f"outputs/{hash}_ref_{index}.pdb") - 
io.set_structure(sample_structure) - io.save(f"outputs/{hash}_align_{index}.pdb") - # Doing this to get around biopython CEALIGN bug - # subprocess.call("pymol -c -Q -r cealign.pml", shell=True) - - return ( - aligner.rms, - f"outputs/{hash}_ref_{index}.pdb", - f"outputs/{hash}_align_{index}.pdb", - plddts, - ) - - -if not os.path.exists("/home/user/app/ProteinMPNN/"): - path_to_model_weights = os.path.join( - os.getcwd(), "ProteinMPNN/vanilla_proteinmpnn/vanilla_model_weights" - ) - is_local = True -else: - path_to_model_weights = ( - "/home/user/app/ProteinMPNN/vanilla_proteinmpnn/vanilla_model_weights" - ) - is_local = False - - -if is_local: - print("Running locally") - from transformers import AutoTokenizer, EsmForProteinFolding - - -def setup_proteinmpnn(model_name="v_48_020", backbone_noise=0.00): - from protein_mpnn_utils import ( - loss_nll, - loss_smoothed, - gather_edges, - gather_nodes, - gather_nodes_t, - cat_neighbors_nodes, - _scores, - _S_to_seq, - tied_featurize, - parse_PDB, - ) - from protein_mpnn_utils import StructureDataset, StructureDatasetPDB, ProteinMPNN - - device = torch.device( - "cpu" - ) # torch.device("cuda:0" if (torch.cuda.is_available()) else "cpu") #fix for memory issues - # ProteinMPNN model name: v_48_002, v_48_010, v_48_020, v_48_030, v_32_002, v_32_010; v_32_020, v_32_030; v_48_010=version with 48 edges 0.10A noise - # Standard deviation of Gaussian noise to add to backbone atoms - hidden_dim = 128 - num_layers = 3 - model_folder_path = path_to_model_weights - if model_folder_path[-1] != "/": - model_folder_path = model_folder_path + "/" - checkpoint_path = model_folder_path + f"{model_name}.pt" - - checkpoint = torch.load(checkpoint_path, map_location=device) - - noise_level_print = checkpoint["noise_level"] - - model = ProteinMPNN( - num_letters=21, - node_features=hidden_dim, - edge_features=hidden_dim, - hidden_dim=hidden_dim, - num_encoder_layers=num_layers, - num_decoder_layers=num_layers, - augment_eps=backbone_noise, - 
k_neighbors=checkpoint["num_edges"], - ) - model.to(device) - model.load_state_dict(checkpoint["model_state_dict"]) - model.eval() - return model, device - - -def get_pdb(pdb_code="", filepath=""): - if pdb_code is None or pdb_code == "": - try: - return filepath.name - except AttributeError as e: - return None - else: - os.system(f"wget -qnc https://files.rcsb.org/view/{pdb_code}.pdb") - return f"{pdb_code}.pdb" - - -def preprocess_mol(pdb_code="", filepath=""): - if pdb_code is None or pdb_code == "": - try: - mol = Molecule(filepath.name) - except AttributeError as e: - return None - else: - mol = Molecule(pdb_code) - mol.write("original.pdb") - # clean messy files and only include protein itself - mol.filter("protein") - # renumber using moleculekit 0...len(protein) - df = mol.renumberResidues(returnMapping=True) - # add proteinMPNN index col which used 1..len(chain), 1...len(chain) - indexes = [] - for chain, g in df.groupby("chain"): - j = 1 - for i, row in g.iterrows(): - indexes.append(j) - j += 1 - df["proteinMPNN_index"] = indexes - mol.write("cleaned.pdb") - return "cleaned.pdb", df - - -def assign_sasa(mol): - from moleculekit.projections.metricsasa import MetricSasa - - metr = MetricSasa(mode="residue", filtersel="protein") - sasaR = metr.project(mol)[0] - is_prot = mol.atomselect("protein") - resids = pd.DataFrame.from_dict({"resid": mol.resid, "is_prot": is_prot}) - new_masses = [] - i_without_non_prot = 0 - for i, g in resids.groupby((resids["resid"].shift() != resids["resid"]).cumsum()): - if g["is_prot"].unique()[0] == True: - g["sasa"] = sasaR[i_without_non_prot] - i_without_non_prot += 1 - else: - g["sasa"] = 0 - new_masses.extend(list(g.sasa)) - return np.array(new_masses) - - -def process_atomsel(atomsel): - """everything lowercase and replace some keywords not relevant for protein design""" - atomsel = re.sub("sasa", "mass", atomsel, flags=re.I) - atomsel = re.sub("plddt", "beta", atomsel, flags=re.I) - return atomsel - - -def 
make_fixed_positions_dict(atomsel, residue_index_df): - # we use the uploaded file for the selection - mol = Molecule("original.pdb") - # use index for selection as resids will change - - # set sasa to 0 for all non protein atoms (all non protein atoms are deleted later) - mol.masses = assign_sasa(mol) - print(mol.masses.shape) - print(assign_sasa(mol).shape) - atomsel = process_atomsel(atomsel) - selected_residues = mol.get("index", atomsel) - - # clean up - mol.filter("protein") - mol.renumberResidues() - # based on selected index now get resids - selected_residues = [str(i) for i in selected_residues] - if len(selected_residues) == 0: - return None, [] - selected_residues_str = " ".join(selected_residues) - selected_residues = set(mol.get("resid", sel=f"index {selected_residues_str}")) - - # use the proteinMPNN index nomenclature to assemble fixed_positions_dict - fixed_positions_df = residue_index_df[ - residue_index_df["new_resid"].isin(selected_residues) - ] - - chains = set(mol.get("chain", sel="all")) - fixed_position_dict = {"cleaned": {}} - # store the selected residues in a list for the visualization later with cleaned.pdb - selected_residues = list(fixed_positions_df["new_resid"]) - - for c in chains: - fixed_position_dict["cleaned"][c] = [] - - for i, row in fixed_positions_df.iterrows(): - fixed_position_dict["cleaned"][row["chain"]].append(row["proteinMPNN_index"]) - return fixed_position_dict, selected_residues - - -def update( - inp, - file, - designed_chain, - fixed_chain, - homomer, - num_seqs, - sampling_temp, - model_name, - backbone_noise, - atomsel, -): - from protein_mpnn_utils import ( - loss_nll, - loss_smoothed, - gather_edges, - gather_nodes, - gather_nodes_t, - cat_neighbors_nodes, - _scores, - _S_to_seq, - tied_featurize, - parse_PDB, - ) - from protein_mpnn_utils import StructureDataset, StructureDatasetPDB, ProteinMPNN - - # pdb_path = get_pdb(pdb_code=inp, filepath=file) - - pdb_path, mol_index = preprocess_mol(pdb_code=inp, 
filepath=file) - - if pdb_path == None: - return "Error processing PDB" - - model, device = setup_proteinmpnn( - model_name=model_name, backbone_noise=backbone_noise - ) - - if designed_chain == "": - designed_chain_list = [] - else: - designed_chain_list = re.sub("[^A-Za-z]+", ",", designed_chain).split(",") - - if fixed_chain == "": - fixed_chain_list = [] - else: - fixed_chain_list = re.sub("[^A-Za-z]+", ",", fixed_chain).split(",") - - chain_list = list(set(designed_chain_list + fixed_chain_list)) - num_seq_per_target = num_seqs - save_score = 0 # 0 for False, 1 for True; save score=-log_prob to npy files - save_probs = ( - 0 # 0 for False, 1 for True; save MPNN predicted probabilites per position - ) - score_only = 0 # 0 for False, 1 for True; score input backbone-sequence pairs - conditional_probs_only = 0 # 0 for False, 1 for True; output conditional probabilities p(s_i given the rest of the sequence and backbone) - conditional_probs_only_backbone = 0 # 0 for False, 1 for True; if true output conditional probabilities p(s_i given backbone) - - batch_size = 1 # Batch size; can set higher for titan, quadro GPUs, reduce this if running out of GPU memory - max_length = 20000 # Max sequence length - - out_folder = "." # Path to a folder to output sequences, e.g. /home/out/ - jsonl_path = "" # Path to a folder with parsed pdb into jsonl - omit_AAs = "X" # Specify which amino acids should be omitted in the generated sequence, e.g. 'AC' would omit alanine and cystine. 
- - pssm_multi = 0.0 # A value between [0.0, 1.0], 0.0 means do not use pssm, 1.0 ignore MPNN predictions - pssm_threshold = 0.0 # A value between -inf + inf to restric per position AAs - pssm_log_odds_flag = 0 # 0 for False, 1 for True - pssm_bias_flag = 0 # 0 for False, 1 for True - - folder_for_outputs = out_folder - - NUM_BATCHES = num_seq_per_target // batch_size - BATCH_COPIES = batch_size - temperatures = [sampling_temp] - omit_AAs_list = omit_AAs - alphabet = "ACDEFGHIKLMNPQRSTVWYX" - - omit_AAs_np = np.array([AA in omit_AAs_list for AA in alphabet]).astype(np.float32) - - chain_id_dict = None - if atomsel == "": - fixed_positions_dict, selected_residues = None, [] - else: - fixed_positions_dict, selected_residues = make_fixed_positions_dict( - atomsel, mol_index - ) - - pssm_dict = None - omit_AA_dict = None - bias_AA_dict = None - - bias_by_res_dict = None - bias_AAs_np = np.zeros(len(alphabet)) - - ############################################################### - pdb_dict_list = parse_PDB(pdb_path, input_chain_list=chain_list) - dataset_valid = StructureDatasetPDB( - pdb_dict_list, truncate=None, max_length=max_length - ) - if homomer: - tied_positions_dict = make_tied_positions_for_homomers(pdb_dict_list) - else: - tied_positions_dict = None - - chain_id_dict = {} - chain_id_dict[pdb_dict_list[0]["name"]] = (designed_chain_list, fixed_chain_list) - with torch.no_grad(): - for ix, prot in enumerate(dataset_valid): - score_list = [] - all_probs_list = [] - all_log_probs_list = [] - S_sample_list = [] - batch_clones = [copy.deepcopy(prot) for i in range(BATCH_COPIES)] - ( - X, - S, - mask, - lengths, - chain_M, - chain_encoding_all, - chain_list_list, - visible_list_list, - masked_list_list, - masked_chain_length_list_list, - chain_M_pos, - omit_AA_mask, - residue_idx, - dihedral_mask, - tied_pos_list_of_lists_list, - pssm_coef, - pssm_bias, - pssm_log_odds_all, - bias_by_res_all, - tied_beta, - ) = tied_featurize( - batch_clones, - device, - 
chain_id_dict, - fixed_positions_dict, - omit_AA_dict, - tied_positions_dict, - pssm_dict, - bias_by_res_dict, - ) - pssm_log_odds_mask = ( - pssm_log_odds_all > pssm_threshold - ).float() # 1.0 for true, 0.0 for false - name_ = batch_clones[0]["name"] - - randn_1 = torch.randn(chain_M.shape, device=X.device) - log_probs = model( - X, - S, - mask, - chain_M * chain_M_pos, - residue_idx, - chain_encoding_all, - randn_1, - ) - mask_for_loss = mask * chain_M * chain_M_pos - scores = _scores(S, log_probs, mask_for_loss) - native_score = scores.cpu().data.numpy() - message = "" - seq_list = [] - seq_recovery = [] - seq_score = [] - for temp in temperatures: - for j in range(NUM_BATCHES): - randn_2 = torch.randn(chain_M.shape, device=X.device) - if tied_positions_dict == None: - sample_dict = model.sample( - X, - randn_2, - S, - chain_M, - chain_encoding_all, - residue_idx, - mask=mask, - temperature=temp, - omit_AAs_np=omit_AAs_np, - bias_AAs_np=bias_AAs_np, - chain_M_pos=chain_M_pos, - omit_AA_mask=omit_AA_mask, - pssm_coef=pssm_coef, - pssm_bias=pssm_bias, - pssm_multi=pssm_multi, - pssm_log_odds_flag=bool(pssm_log_odds_flag), - pssm_log_odds_mask=pssm_log_odds_mask, - pssm_bias_flag=bool(pssm_bias_flag), - bias_by_res=bias_by_res_all, - ) - S_sample = sample_dict["S"] - else: - sample_dict = model.tied_sample( - X, - randn_2, - S, - chain_M, - chain_encoding_all, - residue_idx, - mask=mask, - temperature=temp, - omit_AAs_np=omit_AAs_np, - bias_AAs_np=bias_AAs_np, - chain_M_pos=chain_M_pos, - omit_AA_mask=omit_AA_mask, - pssm_coef=pssm_coef, - pssm_bias=pssm_bias, - pssm_multi=pssm_multi, - pssm_log_odds_flag=bool(pssm_log_odds_flag), - pssm_log_odds_mask=pssm_log_odds_mask, - pssm_bias_flag=bool(pssm_bias_flag), - tied_pos=tied_pos_list_of_lists_list[0], - tied_beta=tied_beta, - bias_by_res=bias_by_res_all, - ) - # Compute scores - S_sample = sample_dict["S"] - log_probs = model( - X, - S_sample, - mask, - chain_M * chain_M_pos, - residue_idx, - chain_encoding_all, - 
randn_2, - use_input_decoding_order=True, - decoding_order=sample_dict["decoding_order"], - ) - mask_for_loss = mask * chain_M * chain_M_pos - scores = _scores(S_sample, log_probs, mask_for_loss) - scores = scores.cpu().data.numpy() - all_probs_list.append(sample_dict["probs"].cpu().data.numpy()) - all_log_probs_list.append(log_probs.cpu().data.numpy()) - S_sample_list.append(S_sample.cpu().data.numpy()) - for b_ix in range(BATCH_COPIES): - masked_chain_length_list = masked_chain_length_list_list[b_ix] - masked_list = masked_list_list[b_ix] - seq_recovery_rate = torch.sum( - torch.sum( - torch.nn.functional.one_hot(S[b_ix], 21) - * torch.nn.functional.one_hot(S_sample[b_ix], 21), - axis=-1, - ) - * mask_for_loss[b_ix] - ) / torch.sum(mask_for_loss[b_ix]) - seq = _S_to_seq(S_sample[b_ix], chain_M[b_ix]) - score = scores[b_ix] - score_list.append(score) - native_seq = _S_to_seq(S[b_ix], chain_M[b_ix]) - if b_ix == 0 and j == 0 and temp == temperatures[0]: - start = 0 - end = 0 - list_of_AAs = [] - for mask_l in masked_chain_length_list: - end += mask_l - list_of_AAs.append(native_seq[start:end]) - start = end - native_seq = "".join( - list(np.array(list_of_AAs)[np.argsort(masked_list)]) - ) - l0 = 0 - for mc_length in list( - np.array(masked_chain_length_list)[ - np.argsort(masked_list) - ] - )[:-1]: - l0 += mc_length - native_seq = native_seq[:l0] + "/" + native_seq[l0:] - l0 += 1 - sorted_masked_chain_letters = np.argsort( - masked_list_list[0] - ) - print_masked_chains = [ - masked_list_list[0][i] - for i in sorted_masked_chain_letters - ] - sorted_visible_chain_letters = np.argsort( - visible_list_list[0] - ) - print_visible_chains = [ - visible_list_list[0][i] - for i in sorted_visible_chain_letters - ] - native_score_print = np.format_float_positional( - np.float32(native_score.mean()), - unique=False, - precision=4, - ) - line = ">{}, score={}, fixed_chains={}, designed_chains={}, model_name={}\n{}\n".format( - name_, - native_score_print, - 
print_visible_chains, - print_masked_chains, - model_name, - native_seq, - ) - message += f"{line}\n" - start = 0 - end = 0 - list_of_AAs = [] - for mask_l in masked_chain_length_list: - end += mask_l - list_of_AAs.append(seq[start:end]) - start = end - - seq = "".join( - list(np.array(list_of_AAs)[np.argsort(masked_list)]) - ) - # add non designed chains to predicted sequence - l0 = 0 - for mc_length in list( - np.array(masked_chain_length_list)[np.argsort(masked_list)] - )[:-1]: - l0 += mc_length - seq = seq[:l0] + "/" + seq[l0:] - l0 += 1 - score_print = np.format_float_positional( - np.float32(score), unique=False, precision=4 - ) - seq_rec_print = np.format_float_positional( - np.float32(seq_recovery_rate.detach().cpu().numpy()), - unique=False, - precision=4, - ) - chain_s = "" - if len(visible_list_list[0]) > 0: - chain_M_bool = chain_M.bool() - not_designed = _S_to_seq(S[b_ix], ~chain_M_bool[b_ix]) - - labels = ( - chain_encoding_all[b_ix][~chain_M_bool[b_ix]] - .detach() - .cpu() - .numpy() - ) - - for c in set(labels): - chain_s += ":" - nd_mask = labels == c - for i, x in enumerate(not_designed): - if nd_mask[i]: - chain_s += x - seq_recovery.append(seq_rec_print) - seq_score.append(score_print) - line = ( - ">T={}, sample={}, score={}, seq_recovery={}\n{}\n".format( - temp, b_ix, score_print, seq_rec_print, seq - ) - ) - seq_list.append(seq + chain_s) - message += f"{line}\n" - if fixed_positions_dict != None: - message += f"\nfixed positions:* {fixed_positions_dict['cleaned']} \n\n*uses CHAIN:[1..len(chain)] residue numbering" - # somehow sequences still contain X, remove again - for i, x in enumerate(seq_list): - for aa in omit_AAs: - seq_list[i] = x.replace(aa, "") - all_probs_concat = np.concatenate(all_probs_list) - all_log_probs_concat = np.concatenate(all_log_probs_list) - np.savetxt("all_probs_concat.csv", all_probs_concat.mean(0).T, delimiter=",") - np.savetxt( - "all_log_probs_concat.csv", - np.exp(all_log_probs_concat).mean(0).T, - 
delimiter=",", - ) - S_sample_concat = np.concatenate(S_sample_list) - fig = px.imshow( - np.exp(all_log_probs_concat).mean(0).T, - labels=dict(x="positions", y="amino acids", color="probability"), - y=list(alphabet), - template="simple_white", - ) - fig.update_xaxes(side="top") - - fig_tadjusted = px.imshow( - all_probs_concat.mean(0).T, - labels=dict(x="positions", y="amino acids", color="probability"), - y=list(alphabet), - template="simple_white", - ) - - fig_tadjusted.update_xaxes(side="top") - seq_dict = {"seq_list": seq_list, "recovery": seq_recovery, "seq_score": seq_score} - mol = structure_pred(seq_dict, pdb_path, selected_residues) - print(seq_list) - return ( - message, - fig, - fig_tadjusted, - gr.File.update(value="all_log_probs_concat.csv", visible=True), - gr.File.update(value="all_probs_concat.csv", visible=True), - pdb_path, - gr.Dropdown.update(choices=seq_list, value=seq_list[0], interactive=True), - selected_residues, - seq_dict, - mol, - ) - - -def updateseq(seq, seq_dict, pdb_path, selected_residues): - # find index of seq in seq_dict - - seq_list = seq_dict["seq_list"] - seq_index = seq_list.index(seq) - print(seq, seq_index) - mol = structure_pred(seq_dict, pdb_path, selected_residues, index=seq_index) - return mol - - -from transformers.models.esm.openfold_utils.protein import to_pdb, Protein as OFProtein -from transformers.models.esm.openfold_utils.feats import atom14_to_atom37 - - -def convert_outputs_to_pdb(outputs): - final_atom_positions = atom14_to_atom37(outputs["positions"][-1], outputs) - outputs = {k: v.to("cpu").numpy() for k, v in outputs.items()} - final_atom_positions = final_atom_positions.cpu().numpy() - final_atom_mask = outputs["atom37_atom_exists"] - pdbs = [] - for i in range(outputs["aatype"].shape[0]): - aa = outputs["aatype"][i] - pred_pos = final_atom_positions[i] - mask = final_atom_mask[i] - resid = outputs["residue_index"][i] + 1 - pred = OFProtein( - aatype=aa, - atom_positions=pred_pos, - atom_mask=mask, - 
residue_index=resid, - b_factors=outputs["plddt"][i], - chain_index=outputs["chain_index"][i] if "chain_index" in outputs else None, - ) - pdbs.append(to_pdb(pred)) - return pdbs - - -def get_esmfold_local(sequence): - - filename = "outputs/" + hashlib.md5(str.encode(sequence)).hexdigest() + ".pdb" - - if not os.path.exists(filename): - tokenizer = AutoTokenizer.from_pretrained("facebook/esmfold_v1") - model = EsmForProteinFolding.from_pretrained( - "facebook/esmfold_v1", low_cpu_mem_usage=True - ) - - model = model.cuda() - model.esm = model.esm.half() - import torch - - torch.backends.cuda.matmul.allow_tf32 = True - model.trunk.set_chunk_size(64) - - position_id_offsets = [] - linker_mask = [] - for i, s in enumerate(sequence.split("/")): - linker = 25 if i < sequence.count("/") else 0 - offsets = [i * 512] * (len(s) + linker) - linker_mask.extend([1] * len(s) + [0] * linker) - position_id_offsets.extend(offsets) - sequence = sequence.replace("/", "G" * 25) - tokenized = tokenizer([sequence], return_tensors="pt", add_special_tokens=False) - with torch.no_grad(): - position_ids = torch.arange(len(sequence), dtype=torch.long) - position_ids = position_ids + torch.torch.LongTensor(position_id_offsets) - - linker_mask = torch.Tensor(linker_mask).unsqueeze(1) - tokenized["position_ids"] = position_ids.unsqueeze(0) - tokenized = {key: tensor.cuda() for key, tensor in tokenized.items()} - with torch.no_grad(): - output = model(**tokenized) - - output["atom37_atom_exists"] = output["atom37_atom_exists"] * linker_mask.to( - output["atom37_atom_exists"].device - ) - pdb = convert_outputs_to_pdb(output) - - with open(filename, "w+") as f: - f.write("".join(pdb)) - print("local prediction", filename) - else: - print("prediction already on disk") - return filename - - -def structure_pred(seq_dict, pdb, selectedResidues, index=0): - allSeqs = seq_dict["seq_list"] - lenSeqs = len(allSeqs) - if len(allSeqs[index]) > 400: - return """ - - """ - if "/" in allSeqs[index] and not 
is_local: - return """ - - """ - i = 0 - sequences = {} - if is_local: - pdb_file = get_esmfold_local(allSeqs[index]) - else: - pdb_file = get_esmfold(allSeqs[index]) - - rms, input_pdb, aligned_pdb, plddts = align_structures(pdb, pdb_file, index) - sequences[i] = { - "Seq": index, - "RMSD": f"{rms:.2f}", - "Score": seq_dict["seq_score"][i], - "Recovery": seq_dict["recovery"][i], - "Mean pLDDT": f"{np.mean(plddts):.4f}", - } - - num_res = len(allSeqs[index]) - return molecule( - input_pdb, - aligned_pdb, - lenSeqs, - num_res, - selectedResidues, - allSeqs, - sequences, - ) - - -def read_mol(molpath): - with open(molpath, "r") as fp: - lines = fp.readlines() - mol = "" - for l in lines: - mol += l - return mol - - -def molecule( - input_pdb, aligned_pdb, lenSeqs, num_res, selectedResidues, allSeqs, sequences -): - print("mol updated") - print("filenames", input_pdb, aligned_pdb) - mol = read_mol(input_pdb) - options = "" - pred_mol = "[" - seqdata = "{" - selected = "selected" - for i in range(1): # lenSeqs): - seqdata += ( - str(sequences[i]["Seq"]) - + ': { "score": ' - + sequences[i]["Score"] - + ', "rmsd": ' - + sequences[i]["RMSD"] - + ', "recovery": ' - + sequences[i]["Recovery"] - + ', "plddt": ' - + sequences[i]["Mean pLDDT"] - + ', "seq":"' - + allSeqs[i] - + '"}' - ) - - pred_mol += f"`{read_mol(aligned_pdb)}`" - selected = "" - # if i != lenSeqs - 1: - # pred_mol += "," - # seqdata += "," - pred_mol += "]" - seqdata += "}" - - x = ( - """ - - - - - - - - -
    - > seq, score, RMSD, Recovery, pLDDT
    -

    - -

    -
    -
    -
    -
    - -
    -
    - -
    - -
    -
    -
    RMSD ESMFold vs. native: Å computed using CEAlign on the aligned fragment
    -
    -
    -
    - -
    ESMFold model of redesigned sequence
    -
    ESMFold model confidence:
    -
     Very high (pLDDT > 90)
    -
     Confident (90 > pLDDT > 70)
    -
     Low (70 > pLDDT > 50)
    -
     Very low (pLDDT < 50)
    -
    ESMFold produces a per-residue confidence score (pLDDT) between 0 and 100. Some regions below 50 pLDDT may be unstructured in isolation. -
    -
    -
    -
    Input structure
    - -
     Fixed positions
    - -
    -
    - - """ - ) - - return f"""""" - - -def set_examples(example): - ( - label, - inp, - designed_chain, - fixed_chain, - homomer, - num_seqs, - sampling_temp, - atomsel, - ) = example - return [ - label, - inp, - designed_chain, - fixed_chain, - homomer, - gr.Slider.update(value=num_seqs), - gr.Radio.update(value=sampling_temp), - atomsel, - ] - - -import hashlib - - -def get_esmfold(sequence): - headers = { - "Content-Type": "application/x-www-form-urlencoded", - } - sequence = sequence.replace("/", ":") - filename = "outputs/" + hashlib.md5(str.encode(sequence)).hexdigest() + ".pdb" - - if not os.path.exists(filename): - response = requests.post( - "https://api.esmatlas.com/foldSequence/v1/pdb/", - headers=headers, - data=sequence, - ) - - name = sequence[:3] + sequence[-3:] - pdb_string = response.content.decode("utf-8") - - with open(filename, "w+") as f: - f.write(pdb_string) - print("retrieved prediction", filename) - else: - print("prediction already on disk") - return filename - - -proteinMPNN = gr.Blocks() - -with proteinMPNN: - gr.Markdown("# ProteinMPNN + ESMFold") - gr.Markdown( - """This model takes as input a protein structure and based on its backbone predicts new sequences that will fold into that backbone. - It will then run [ESMFold](https://esmatlas.com/about) by MetaAI on the predicted structures and align the predicted structure for the designed sequence with the original backbone. - - **Note, there is a 400 residue limit in this version and multimeric structures can only be predicted locally. 
Follow, [README](https://huggingface.co/spaces/simonduerr/ProteinMPNNESM/blob/main/README.md) for instructions on how to run locally.** - """ - ) - - with gr.Tabs(): - with gr.TabItem("Input"): - inp = gr.Textbox( - placeholder="PDB Code or upload file below", label="Input structure" - ) - file = gr.File(file_count="single") - - with gr.TabItem("Settings"): - with gr.Row(): - designed_chain = gr.Textbox(value="A", label="Designed chain") - fixed_chain = gr.Textbox( - placeholder="Use commas to fix multiple chains", label="Fixed chain" - ) - with gr.Row(): - num_seqs = gr.Slider( - minimum=1, maximum=15, value=1, step=1, label="Number of sequences" - ) - sampling_temp = gr.Radio( - choices=[0.1, 0.15, 0.2, 0.25, 0.3], - value=0.1, - label="Sampling temperature", - - ) - gr.Markdown( - """ Sampling temperature for amino acids, `T=0.0` means taking argmax, `T>>1.0` means sample randomly. Suggested values `0.1, 0.15, 0.2, 0.25, 0.3`. Higher values will lead to more diversity. - """ - ) - with gr.Row(): - model_name = gr.Dropdown( - choices=[ - "v_48_002", - "v_48_010", - "v_48_020", - "v_48_030", - ], - label="Model", - value="v_48_020", - - ) - backbone_noise = gr.Dropdown( - choices=[0, 0.02, 0.10, 0.20, 0.30], label="Backbone noise", value=0, - - ) - with gr.Row(): - homomer = gr.Checkbox(value=False, label="Homomer?") - gr.Markdown( - "for correct symmetric tying lenghts of homomer chains should be the same" - ) - gr.Markdown("## Fixed positions") - gr.Markdown( - """You can fix important positions in the protein. Resid should be specified with the same numbering as in the input pdb file. The fixed residues will be highlighted in the output. - The [VMD selection](http://www.ks.uiuc.edu/Research/vmd/vmd-1.9.2/ug/node89.html) synthax is used. You can also select based on ligands or chains in the input structure to specify interfaces to be fixed. 
- - - within 5 of resid 94 All residues that have >1 atom closer than 5 Å to any atom of residue 94 - - name CA and within 5 of resid 94 All residues that have CA atom closer than 5 Å to any atom of residue 94 - - resid 94 96 119 Residues 94, 94 and 119 - - within 5 of resname ZN All residues with any atom <5 Å of zinc ion - - chain A and within 5 of chain B All residues of chain A that are part of the interface with chain B - - protein and within 5 of nucleic All residues that bind to DNA (if present in structure) - - not (chain A and within 5 of chain B) only modify residues that are in the interface with the fixed chain, not further away - - chain A or (chain B and sasa < 20) Keep chain A and all core residues fixeds - - pLDDT >70 Redesign all residues with low pLDDT - - Note that sasa and pLDDT selectors modify default VMD behavior. SASA is calculated using moleculekit and written to the mass attribute. Selections based on mass do not work. - pLDDT is an alias for beta, it only works correctly with structures that contain the appropriate values in the beta column of the PDB file. 
""" - ) - atomsel = gr.Textbox( - placeholder="Specify atom selection ", label="Fixed positions", - api_name= "fixed_positions" - ) - - btn = gr.Button("Run") - label = gr.Textbox(label="Label", visible=False) - - samples = [["Monomer design", "6MRR", "A", "", False, 2, 0.1, ""]] - - if is_local: - samples.extend( - [ - ["Homomer design", "1O91", "A,B,C", "", True, 2, 0.1, ""], - [ - "Redesign of Homomer to Heteromer", - "3HTN", - "A,B", - "C", - False, - 2, - 0.1, - "", - ], - [ - "Redesign of MID1 scaffold keeping binding site fixed", - "3V1C", - "A,B", - "", - False, - 2, - 0.1, - "within 5 of resname ZN", - ], - [ - "Redesign of DNA binding protein", - "3JRD", - "A,B", - "", - False, - 2, - 0.1, - "within 8 of nucleic", - ], - [ - "Surface Redesign of miniprotein", - "7JZM", - "A,B", - "", - False, - 2, - 0.1, - "chain B or (chain A and sasa < 20)", - ], - ] - ) - examples = gr.Dataset( - components=[ - label, - inp, - designed_chain, - fixed_chain, - homomer, - num_seqs, - sampling_temp, - atomsel, - ], - samples=samples, - ) - - gr.Markdown("# Output") - - with gr.Tabs(): - with gr.TabItem("Designed sequences"): - chosen_seq = gr.Dropdown( - choices=[], - label="Select a sequence for validation", - ) - mol = gr.HTML() - out = gr.Textbox(label="Fasta Output") - - with gr.TabItem("Amino acid probabilities"): - plot = gr.Plot() - all_log_probs = gr.File(visible=False) - with gr.TabItem("T adjusted probabilities"): - gr.Markdown("Sampling temperature adjusted amino acid probabilties") - plot_tadjusted = gr.Plot() - all_probs = gr.File(visible=False) - - tempFile = gr.Variable() - selectedResidues = gr.Variable() - seq_dict = gr.Variable() - btn.click( - fn=update, - inputs=[ - inp, - file, - designed_chain, - fixed_chain, - homomer, - num_seqs, - sampling_temp, - model_name, - backbone_noise, - atomsel, - ], - outputs=[ - out, - plot, - plot_tadjusted, - all_log_probs, - all_probs, - tempFile, - chosen_seq, - selectedResidues, - seq_dict, - mol, - ], - api_name = 
"proteinmpnn" - ) - chosen_seq.change( - updateseq, - inputs=[chosen_seq, seq_dict, tempFile, selectedResidues], - outputs=mol, - ) - examples.click(fn=set_examples, inputs=examples, outputs=examples.components) - gr.Markdown( - """Citation: **Robust deep learning based protein sequence design using ProteinMPNN**
    -Justas Dauparas, Ivan Anishchenko, Nathaniel Bennett, Hua Bai, Robert J. Ragotte, Lukas F. Milles, Basile I. M. Wicky, Alexis Courbet, Robbert J. de Haas, Neville Bethel, Philip J. Y. Leung, Timothy F. Huddy, Sam Pellock, Doug Tischer, Frederick Chan, Brian Koepnick, Hannah Nguyen, Alex Kang, Banumathi Sankaran, Asim Bera, Neil P. King, David Baker
    -Science Vol 378, Issue 6615, pp. 49-56; doi: [10.1126/science.add2187](https://doi.org/10.1126/science.add2187)

    Server built by [@simonduerr](https://twitter.com/simonduerr) and hosted by Huggingface""" - ) - - -proteinMPNN.launch() diff --git a/spaces/simonduerr/diffdock/esm/esm/inverse_folding/gvp_transformer.py b/spaces/simonduerr/diffdock/esm/esm/inverse_folding/gvp_transformer.py deleted file mode 100644 index faf7c1555d2d74b43d71a7f59b508da1533cc52f..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/diffdock/esm/esm/inverse_folding/gvp_transformer.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -from typing import Any, Dict, List, Optional, Tuple, NamedTuple -import torch -from torch import nn -from torch import Tensor -import torch.nn.functional as F -from scipy.spatial import transform - -from esm.data import Alphabet - -from .features import DihedralFeatures -from .gvp_encoder import GVPEncoder -from .gvp_utils import unflatten_graph -from .gvp_transformer_encoder import GVPTransformerEncoder -from .transformer_decoder import TransformerDecoder -from .util import rotate, CoordBatchConverter - - -class GVPTransformerModel(nn.Module): - """ - GVP-Transformer inverse folding model. - - Architecture: Geometric GVP-GNN as initial layers, followed by - sequence-to-sequence Transformer encoder and decoder. 
- """ - - def __init__(self, args, alphabet): - super().__init__() - encoder_embed_tokens = self.build_embedding( - args, alphabet, args.encoder_embed_dim, - ) - decoder_embed_tokens = self.build_embedding( - args, alphabet, args.decoder_embed_dim, - ) - encoder = self.build_encoder(args, alphabet, encoder_embed_tokens) - decoder = self.build_decoder(args, alphabet, decoder_embed_tokens) - self.args = args - self.encoder = encoder - self.decoder = decoder - - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - encoder = GVPTransformerEncoder(args, src_dict, embed_tokens) - return encoder - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - decoder = TransformerDecoder( - args, - tgt_dict, - embed_tokens, - ) - return decoder - - @classmethod - def build_embedding(cls, args, dictionary, embed_dim): - num_embeddings = len(dictionary) - padding_idx = dictionary.padding_idx - emb = nn.Embedding(num_embeddings, embed_dim, padding_idx) - nn.init.normal_(emb.weight, mean=0, std=embed_dim ** -0.5) - nn.init.constant_(emb.weight[padding_idx], 0) - return emb - - def forward( - self, - coords, - padding_mask, - confidence, - prev_output_tokens, - return_all_hiddens: bool = False, - features_only: bool = False, - ): - encoder_out = self.encoder(coords, padding_mask, confidence, - return_all_hiddens=return_all_hiddens) - logits, extra = self.decoder( - prev_output_tokens, - encoder_out=encoder_out, - features_only=features_only, - return_all_hiddens=return_all_hiddens, - ) - return logits, extra - - def sample(self, coords, partial_seq=None, temperature=1.0, confidence=None): - """ - Samples sequences based on multinomial sampling (no beam search). 
- - Args: - coords: L x 3 x 3 list representing one backbone - partial_seq: Optional, partial sequence with mask tokens if part of - the sequence is known - temperature: sampling temperature, use low temperature for higher - sequence recovery and high temperature for higher diversity - confidence: optional length L list of confidence scores for coordinates - """ - L = len(coords) - # Convert to batch format - batch_converter = CoordBatchConverter(self.decoder.dictionary) - batch_coords, confidence, _, _, padding_mask = ( - batch_converter([(coords, confidence, None)]) - ) - - # Start with prepend token - mask_idx = self.decoder.dictionary.get_idx('') - sampled_tokens = torch.full((1, 1+L), mask_idx, dtype=int) - sampled_tokens[0, 0] = self.decoder.dictionary.get_idx('') - if partial_seq is not None: - for i, c in enumerate(partial_seq): - sampled_tokens[0, i+1] = self.decoder.dictionary.get_idx(c) - - # Save incremental states for faster sampling - incremental_state = dict() - - # Run encoder only once - encoder_out = self.encoder(batch_coords, padding_mask, confidence) - - # Decode one token at a time - for i in range(1, L+1): - if sampled_tokens[0, i] != mask_idx: - continue - logits, _ = self.decoder( - sampled_tokens[:, :i], - encoder_out, - incremental_state=incremental_state, - ) - logits = logits[0].transpose(0, 1) - logits /= temperature - probs = F.softmax(logits, dim=-1) - sampled_tokens[:, i] = torch.multinomial(probs, 1).squeeze(-1) - sampled_seq = sampled_tokens[0, 1:] - - # Convert back to string via lookup - return ''.join([self.decoder.dictionary.get_tok(a) for a in sampled_seq]) diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Clash Royale for Mac The Best Way to Enjoy the Clash Universe.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Clash Royale for Mac The Best Way to Enjoy the Clash Universe.md deleted file mode 100644 index 
6df0176395a68b8c5617ecdda8ce7036d535bc20..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Clash Royale for Mac The Best Way to Enjoy the Clash Universe.md +++ /dev/null @@ -1,86 +0,0 @@ -
    -

    How to Download and Play Clash Royale on Mac

    -

    Clash Royale is one of the most popular mobile games in the world, with millions of players competing in real-time card battles. It features your favorite characters from the Clash of Clans universe, as well as new ones like Princes, Knights, Baby Dragons, and more. You can collect and upgrade dozens of cards, build your own deck, join a clan, and fight for glory in the arena.

    -

    But what if you want to play Clash Royale on your Mac device? Unfortunately, there is no official version of Clash Royale for Mac, as it is only available for Android and iOS devices. However, that does not mean you cannot enjoy this amazing game on your Mac. There are ways to run Android apps and games on your Mac using software called emulators.

    -

    download clash royale for mac


    Download Zip: https://ssurll.com/2uO19s



    -

    In this article, we will show you how to download and play Clash Royale on your Mac using different emulators. We will also compare their features, performance, compatibility, and user ratings, so you can choose the best one for your needs. By the end of this article, you will be able to play Clash Royale on your Mac with ease.

    -

    Why Play Clash Royale on Mac?

    -


    Before we get into the details of how to play Clash Royale on Mac, let's first look at some of the benefits of doing so. Here are some reasons why you might want to play Clash Royale on your Mac device:

    -
      -
    • Bigger screen: Playing on your Mac gives you a larger, clearer view of the game. You can see details such as the cards, the arena, and the animations, and the wider view of the battlefield helps you plan your strategy and react faster.
    • Better graphics: You can adjust the resolution and frame rate to your preference, giving you smoother, sharper graphics that make the game more realistic and immersive.
    • Keyboard and mouse controls: A keyboard and mouse give you more control and accuracy. You can customize the key mapping and mouse sensitivity to suit your play style, and use keyboard shortcuts and mouse gestures to perform actions faster.
    • No battery or storage issues: You can play as long as you want without draining your battery or overheating your device, and you can store more data and files without running out of space.
    -

    As you can see, playing Clash Royale on your Mac has many advantages that can make your gaming experience more enjoyable and convenient. But how do you actually play Clash Royale on your Mac? Let's find out in the next section.

    -

    How to Play Clash Royale on Mac with Bluestacks

    -

    One of the easiest and most popular ways to play Clash Royale on your Mac is by using Bluestacks. Bluestacks is an Android emulator that allows you to run Android apps and games on your Mac device. It is free, safe, and easy to use. Here are the steps to download and play Clash Royale on your Mac with Bluestacks:

    -
      -
    1. Download and install Bluestacks on your Mac: To download Bluestacks, go to their official website and click on the "Download Bluestacks" button. This will start downloading the Bluestacks installer file on your Mac. Once the download is complete, open the file and follow the instructions to install Bluestacks on your Mac. This may take a few minutes, depending on your internet speed and system performance.
    2. -
    3. Download and install Clash Royale on Bluestacks: To download Clash Royale, launch Bluestacks on your Mac and sign in with your Google account. This will give you access to the Google Play Store, where you can search for Clash Royale. Alternatively, you can also use this link to download Clash Royale directly from Bluestacks. Once you find Clash Royale, click on the "Install" button to download and install it on Bluestacks.
    4. -
    5. Play Clash Royale on Bluestacks: To play Clash Royale, open Bluestacks and click on the "My Apps" tab. You will see a list of apps that you have installed on Bluestacks, including Clash Royale. Click on the "Clash Royale" icon to launch the game. You can now enjoy playing Clash Royale on your Mac with Bluestacks.
    6. -
    -

    Here are some tips and tricks to play Clash Royale on Bluestacks:

    -
      -
    • Adjust the settings: To optimize your gaming experience, you can adjust the settings of Bluestacks and Clash Royale according to your preferences. For example, you can change the resolution, the frame rate, the sound volume, the language, etc.
    • -
    • Use the keyboard and mouse: To control the game, you can use your keyboard and mouse as input devices. You can customize the key mapping and the mouse sensitivity in the settings of Bluestacks. You can also use keyboard shortcuts and mouse gestures to perform actions faster and easier.
    • -
    • Access the Google Play Store: To access the Google Play Store, you need to sign in with your Google account in Bluestacks. This will allow you to download more apps and games, as well as update them regularly. You can also sync your progress and achievements with your Google account.
    • -
    -

    How to Play Clash Royale on Mac with Other Emulators

    -

    Bluestacks is not the only emulator that can run Clash Royale on Mac. Other Android emulators can run it as well. One popular alternative is LDPlayer, whose features, performance, compatibility, and user ratings are summarized in the table below:

| Emulator | Features | Performance | Compatibility | User ratings |
| --- | --- | --- | --- | --- |
| LDPlayer | Supports high-resolution games; built-in app center; multi-instance mode; game mode; macro recorder; screen recorder; key mapping tool; virtualization tool | Fast and smooth; stable and reliable; low CPU and memory usage; high FPS | Compatible with most Android apps and games, most Mac devices, and most keyboard and mouse models | 4.2 out of 5 stars; over 300,000 reviews; positive feedback from users |

    As you can see, these emulators offer similar features and functions, but they differ in performance, compatibility, and user ratings. Based on this comparison, Bluestacks remains the best choice for playing Clash Royale on Mac, as it leads in performance, compatibility, and user ratings. However, you can also try the alternatives and see which one suits you better.

    -

    -

    Conclusion

    -

    In this article, we have shown you how to download and play Clash Royale on your Mac using different emulators. We have also compared their features, performance, compatibility, and user ratings, so you can choose the best one for your needs. By following the steps and tips in this article, you will be able to enjoy playing Clash Royale on your Mac with ease.

    -

    Playing Clash Royale on your Mac has many benefits, such as a bigger screen, better graphics, keyboard and mouse controls, and no battery or storage issues. You can also access the Google Play Store, sync your progress and achievements with your Google account, and play with friends who use Android or iOS devices.

    -

    So what are you waiting for? Download Clash Royale on your Mac today and join the millions of players who are competing in real-time card battles. You will have a blast collecting and upgrading cards, building your own deck, joining a clan, and fighting for glory in the arena.

    -

    If you have any questions or feedback about playing Clash Royale on your Mac, feel free to leave a comment below. We would love to hear from you.

    -

    FAQs

    -

    Here are some answers to some common questions that readers may have about playing Clash Royale on Mac:

    -
      -
    1. Is Clash Royale free to play?
      Yes, Clash Royale is free to play. You can download and install it on your Mac device without paying anything. However, there are some optional in-app purchases that can enhance your gaming experience, such as gems, gold, chests, cards, etc. You can buy these items with real money or earn them by playing the game.
    2. -
    3. Can I play Clash Royale with my friends who use Android or iOS devices?
      Yes, you can play Clash Royale with your friends who use Android or iOS devices. You can add them as friends in the game and invite them to join your clan or challenge them to friendly battles. You can also chat with them in the game and share tips and strategies.
    4. -
    5. Can I transfer my progress from my mobile device to my Mac?
      Yes, you can transfer your progress from your mobile device to your Mac. You just need to sign in with the same Google account on both devices. This will sync your progress and achievements across devices. You can also switch between devices anytime without losing your data.
    6. -
    7. What are the minimum system requirements for playing Clash Royale on Mac?
      The minimum system requirements for playing Clash Royale on Mac vary depending on the emulator you use. However, generally speaking, you need at least 4 GB of RAM, 4 GB of disk space, an Intel or AMD processor, and Mac OS X 10.11 or later.
    8. -
    9. How can I update Clash Royale on my Mac?
      To update Clash Royale on your Mac, you need to access the Google Play Store on your emulator and check for updates. If there is an update available, you can download and install it on your emulator. Alternatively, you can also use this link to download the latest version of Clash Royale directly from the emulator.
    10. -
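If you want to double-check the RAM and free-disk-space figures from Q4 on your own machine, they can be read programmatically. The sketch below is a hypothetical helper using only the Python standard library; the 4 GB thresholds mirror the answer above and are illustrative, and the processor check is left out:

```python
import os
import shutil

GB = 1024 ** 3

def check_mac_requirements(min_ram_gb=4, min_disk_gb=4):
    """Rough check of the RAM and disk-space minimums quoted above."""
    free_disk = shutil.disk_usage("/").free
    # Total physical memory; works on POSIX systems, including macOS.
    total_ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    return {
        "ram_ok": total_ram >= min_ram_gb * GB,
        "disk_ok": free_disk >= min_disk_gb * GB,
        "total_ram_gb": round(total_ram / GB, 1),
        "free_disk_gb": round(free_disk / GB, 1),
    }

print(check_mac_requirements())
```

If either check comes back False, free up disk space or close other applications before installing an emulator.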

    -
    -
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download FIFA Soccer and Enjoy the New 23 Season on Your Mobile Device.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download FIFA Soccer and Enjoy the New 23 Season on Your Mobile Device.md
deleted file mode 100644
index 2df7e6ddcdd0a9d3923327480d538135298c1c70..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download FIFA Soccer and Enjoy the New 23 Season on Your Mobile Device.md
+++ /dev/null
@@ -1,120 +0,0 @@
    -

    How to Download FIFA 2022 Mobile Game

    -

    If you are a soccer fan, you probably have heard of FIFA, the most popular and realistic soccer video game series in the world. FIFA is developed by EA Sports, a division of Electronic Arts, which has released a new edition every year since 1993. The latest edition, FIFA 2022, is now available for mobile devices, including Android and iOS. In this article, we will show you how to download FIFA 2022 Mobile Game, what features it offers, what requirements it has, and some tips and tricks to enjoy it more.

    -

    FIFA 2022 Mobile Game Features

    -

    FIFA 2022 Mobile Game is not just a simple port of the console or PC version. It is a standalone game that is designed specifically for mobile devices, with optimized graphics, controls, and gameplay. It also has some exclusive features that make it more fun and engaging. Here are some of them:

    -




    -

    FIFA World Cup 2022 Mode

    -

    This is the only licensed FIFA World Cup 2022 mobile game where you can replay the official tournament brackets with any of the 32 qualified nations. You can also rewrite history and take control of 15 non-qualified teams, such as China, India, or Canada. You can play in authentic World Cup stadiums, wear official kits and badges, use the official match ball, and listen to localized commentary. This mode is a great way to experience the excitement and drama of the biggest soccer event in the world.

    -

    Ultimate Team Mode

    -

    This is the most popular mode in FIFA games, where you can build your dream team with over 15,000 players from the biggest leagues and top teams in the world. You can choose from world-class talent like Kylian Mbappé, Christian Pulisic, Vinicius Jr, and Son Heung-min. You can also train your players, increase their stats and OVR, customize their kits and formations, and compete against other players online. You can also participate in live events that correspond with the real-world soccer season, such as Team of the Week, Seasonal Campaigns, or Special Programs.

    -

    Champions League Mode

    -

    This mode lets you kick off against teams from club football's most prestigious competitions in Europe, such as the UEFA Champions League, the UEFA Europa League, or the UEFA Super Cup. You can play as any of the qualified teams, such as Real Madrid, Manchester City, Bayern Munich, or Paris Saint-Germain. You can also create your own custom tournament with your favorite teams and players. You can enjoy the authentic atmosphere of the Champions League, with its iconic anthem, logo, and trophy.

    -

    Icons and Heroes Mode

    -

    This mode allows you to collect over 100 soccer legends and celebrate their memorable moments. You can play with icons such as Pelé, Maradona, Zidane, Beckham, or Ronaldo. You can also unlock new heroes that represent the current generation of stars, such as Erling Haaland, Bruno Fernandes, or Robert Lewandowski. You can relive some of the most iconic matches and goals in soccer history, such as the 1999 Champions League final, the 2006 World Cup final, or the 2014 World Cup semi-final. You can also create your own fantasy matchups and see how your icons and heroes perform against each other.

    -

    Advanced Passing System

    -

    This feature gives you more control and precision over your passing game. You can use new ways to pass the ball, such as through balls, lobbed passes, driven passes, or backheel passes. You can also adjust the power and direction of your passes with a simple swipe on the screen. You can also use gestures to trigger special moves, such as one-touch passes, flicks, or volleys. With this system, you can dominate possession and create more chances to score.

    -

    FIFA 2022 Mobile Game Requirements

    -

    Before you download FIFA 2022 Mobile Game, you need to make sure that your device meets the minimum specifications to run the game smoothly and without any issues. Here are the requirements for Android and iOS devices:

    -

    Device compatibility

    -

    For Android devices, you need to have at least Android 6.0 or higher and 2 GB of RAM. For iOS devices, you need to have at least iOS 12.0 or higher and an iPhone 6S or newer. You can check your device's compatibility by going to the official website or app store of your device and searching for FIFA Soccer or FIFA Mobile.
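A minimum-version rule like this is easy to express in code. The snippet below is a hypothetical illustration, not part of the game or the app stores; the Android 6.0 and iOS 12.0 thresholds come from the paragraph above:

```python
# Minimum OS versions quoted above, as (major, minor) tuples.
MIN_VERSIONS = {"android": (6, 0), "ios": (12, 0)}

def meets_minimum(platform_name, version_string):
    """Compare a dotted version string against the minimum for that platform."""
    version = tuple(int(part) for part in version_string.split("."))
    return version >= MIN_VERSIONS[platform_name.lower()]

print(meets_minimum("Android", "5.1"))  # older than 6.0, so not supported
print(meets_minimum("iOS", "16.2"))     # newer than 12.0, so supported
```

Tuple comparison handles multi-part versions correctly, which a plain string comparison would not (e.g. "10" would sort before "6" as text).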

    -


    -

    Storage space

    -

    You also need to have enough storage space on your device to download and install the game. The game size varies depending on your device and region, but it is usually around 1 GB. You can check the exact size of the game by going to the official website or app store of your device and searching for FIFA Soccer or FIFA Mobile. You can also free up some space on your device by deleting unwanted apps or files.

    -

    Internet connection

    -

    You also need to have a stable and fast internet connection to play the game online and access all its features. You can use either Wi-Fi or mobile data, but we recommend using Wi-Fi for a better experience. You can check your internet speed by using a speed test app or website. You can also improve your internet connection by moving closer to your router or switching to a different network.
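The number a speed test reports is simple arithmetic: bytes transferred, times eight bits, divided by the elapsed seconds. A small illustrative sketch of that conversion (the 25 MB / 4 second sample figures are made up):

```python
def mbps(bytes_transferred, seconds):
    """Convert a measured transfer into megabits per second."""
    return (bytes_transferred * 8) / (seconds * 1_000_000)

# e.g. a 25 MB test file fetched in 4 seconds:
speed = mbps(25 * 1024 * 1024, 4)
print(f"{speed:.1f} Mbps")  # about 52 Mbps
```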

    -

    FIFA 2022 Mobile Game Download Steps

    -

    Now that you know what FIFA 2022 Mobile Game is all about and what requirements it has, you are ready to download it and start playing. Here are the steps you need to follow:

    -

    Step 1: Go to the official website or app store of your device

    -

    The first step is to go to the official website or app store of your device where you can find and download FIFA Soccer or FIFA Mobile. For Android devices, you need to go to Google Play Store. For iOS devices, you need to go to App Store. You can also use this link to go directly to the download page.

    -

    Step 2: Search for FIFA Soccer or FIFA Mobile

    -

    The next step is to search for FIFA Soccer or FIFA Mobile in the search bar of your website or app store. You should see the game icon with a blue background and a white F logo. Tap on it to open its details page.

    -

    Step 3: Tap on the install button and wait for the download to finish

    -

    The next step is to tap on the install button on the details page of the game. This will start downloading the game on your device. Depending on your internet speed and device storage space, this may take a few minutes or longer. You can check the progress of the download by looking at the status bar on your screen.

    -

    Step 4: Launch the game and follow the instructions to set up your account and preferences

    -

    The final step is to launch the game and follow the instructions to set up your account and preferences. You will need to accept the terms of service and privacy policy, choose your language and region, and log in with your EA account or create a new one. You will also need to choose your favorite team, your player name, and your avatar. You can change these settings later in the game menu. After that, you will be ready to start playing FIFA 2022 Mobile Game.

    -

    FIFA 2022 Mobile Game Tips and Tricks

    -

    FIFA 2022 Mobile Game is a fun and addictive game that can keep you entertained for hours. However, it can also be challenging and competitive, especially when you play against other players online. To help you improve your skills and tactics, here are some tips and tricks that you can use:

    -

    How to earn coins and gems faster

    -

    Coins and gems are the main currencies in FIFA 2022 Mobile Game. You can use them to buy players, packs, items, and more. You can earn coins and gems by playing matches, completing events, achievements, and quests, watching ads, or buying them with real money. However, if you want to earn them faster, here are some ways to do it:

    -
      -
    • Play the World Cup mode as much as possible. This mode gives you more coins and gems than other modes, as well as bonus rewards for winning matches and advancing in the tournament.
    • -
    • Complete the daily and weekly objectives. These objectives are easy to do and give you a lot of coins and gems, as well as other rewards such as players, packs, or items.
    • -
    • Join a league or a tournament. These are social features that allow you to play with or against other players in a group. You can earn coins and gems by participating in league matches, tournaments, or chat rooms.
    • -
    -

    How to improve your skills and tactics

    -

    FIFA 2022 Mobile Game is not just about having the best players or the highest OVR. It is also about how you play on the pitch, how you control your players, how you pass the ball, how you shoot, how you defend, and how you strategize. Here are some ways to improve your skills and tactics:

    -
      -
    • Practice the basic controls. The game has a simple and intuitive control system that allows you to swipe, tap, or gesture on the screen to perform different actions. You can also customize the controls in the game settings. You should practice the basic controls until you master them.
    • -
    • Learn the advanced moves. The game also has some advanced moves that can give you an edge over your opponents. These include skill moves, special passes, finesse shots, chip shots, headers, volleys, tackles, slides, and more. You can learn these moves by watching tutorials in the game menu or by trying them out in the practice mode.
    • -
    • Use the right formation and tactics. The game allows you to choose from different formations and tactics for your team. These affect how your players position themselves on the field, how they attack or defend, how they support each other, and how they react to different situations. You should use the right formation and tactics for your play style and your opponent's play style.
    • -
    -

    How to unlock more players and modes

    -

    FIFA 2022 Mobile Game has a lot of players and modes that you can unlock as you progress in the game. These include new icons and heroes, new leagues and teams, new stadiums and kits, new events and programs, new challenges and achievements, and more. Here are some ways to unlock more players and modes:

    -
      -
    • Play the Ultimate Team mode as much as possible. This mode allows you to build your dream team with over 15,000 players from the biggest leagues and top teams in the world. You can unlock more players by buying packs with coins or gems, by completing events or programs that reward you with players or packs, or by trading players with other players in the market.
    • -
    • Play the Champions League mode as much as possible. This mode allows you to kick off against teams from club football's most prestigious competitions in Europe. You can unlock more teams by winning matches or tournaments in this mode.
    • -
    • Play the Icons and Heroes mode as much as possible. This mode allows you to collect over 100 soccer legends and celebrate their memorable moments. You can unlock more icons and heroes by completing their stories or challenges in this mode.
    • -
    -

    Conclusion

    -

    FIFA 2022 Mobile Game is a must-have game for any soccer fan who wants to enjoy the thrill and excitement of soccer on their mobile devices. It has amazing features that make it more fun and engaging than ever before. You can download FIFA 2022 Mobile Game for free on your Android or iOS device and start playing right away. You can also update the game to the latest version to enjoy new features and improvements. You can also contact the customer support team if you have any questions or issues with the game. In this article, we have shown you how to download FIFA 2022 Mobile Game, what features it offers, what requirements it has, and some tips and tricks to enjoy it more. We hope you found this article helpful and informative. Now, go ahead and download FIFA 2022 Mobile Game and have fun!

    -

    FAQs

    -

    Here are some frequently asked questions about FIFA 2022 Mobile Game that you may find useful:

    -

    Q1: Is FIFA 2022 Mobile Game free to play?

    -

    A1: Yes, FIFA 2022 Mobile Game is free to play. However, it also has some optional in-app purchases that can enhance your gaming experience. You can buy coins or gems with real money to buy players, packs, items, or other features. You can also buy a VIP pass that gives you access to exclusive rewards and benefits. You can disable in-app purchases in your device settings if you don't want to use them.

    -

    Q2: How can I update the game to the latest version?

    -

    A2: You can update the game to the latest version by going to the official website or app store of your device and searching for FIFA Soccer or FIFA Mobile. You should see an update button on the details page of the game. Tap on it to start updating the game. You can also enable automatic updates in your device settings if you want the game to update itself whenever a new version is available.

    -

    Q3: How can I contact the customer support team?

    -

    A3: You can contact the customer support team by going to the game menu and tapping on the settings icon. Then, tap on the help button and choose the option that suits your issue. You can also visit the EA Help website or social media pages for more information and assistance.

    -

    Q4: How can I join a league or a tournament?

    -

    A4: You can join a league or a tournament by going to the game menu and tapping on the league or tournament icon. Then, you can either create your own league or tournament or join an existing one. You can also invite your friends or other players to join your league or tournament. You can compete against other leagues or tournaments for rewards and glory.

    -

    Q5: How can I customize my team and players?

    -

    A5: You can customize your team and players by going to the game menu and tapping on the team or player icon. Then, you can change your team name, logo, kit, formation, tactics, or preferences. You can also change your player name, avatar, skills, stats, OVR, position, or appearance. You can also use items such as training cards, boosters, contracts, or chemistry styles to improve your team and players.

    -
    -
\ No newline at end of file
diff --git a/spaces/skf15963/summary/fengshen/examples/pretrain_erlangshen_deberta_v2/pretrain_deberta_base.sh b/spaces/skf15963/summary/fengshen/examples/pretrain_erlangshen_deberta_v2/pretrain_deberta_base.sh
deleted file mode 100644
index bf6ad5cb30f14173854aa66bf91d731151ec47d7..0000000000000000000000000000000000000000
--- a/spaces/skf15963/summary/fengshen/examples/pretrain_erlangshen_deberta_v2/pretrain_deberta_base.sh
+++ /dev/null
@@ -1,88 +0,0 @@
#!/bin/bash
#SBATCH --job-name=pretrain_bart # create a short name for your job
#SBATCH --nodes=1 # node count
#SBATCH --ntasks-per-node=8 # number of tasks to run per node
#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --gres=gpu:8 # number of gpus per node
#SBATCH -o %x-%j.log # output and error log file names (%x for job id)
#SBATCH -x dgx050

# pwd=Fengshenbang-LM/fengshen/examples/pretrain_erlangshen
ROOT_DIR=../../workspace
export TORCH_EXTENSIONS_DIR=${ROOT_DIR}/torch_extendsions

MODEL_NAME=erlangshen-deberta-base
MODEL_ROOT_DIR=$ROOT_DIR/${MODEL_NAME}
if [ ! -d ${MODEL_ROOT_DIR} ];then
    mkdir ${MODEL_ROOT_DIR}
fi

NNODES=1
GPUS_PER_NODE=1

MICRO_BATCH_SIZE=32

# 如果你不用Deepspeed的话 下面的一段话都可以删掉 Begin
CONFIG_JSON="$MODEL_ROOT_DIR/${MODEL_NAME}.ds_config.json"
ZERO_STAGE=1
# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size()
cat <<EOT > $CONFIG_JSON
{
    "zero_optimization": {
        "stage": ${ZERO_STAGE}
    },
    "fp16": {
        "enabled": true
    },
    "gradient_clipping": 1,
    "train_micro_batch_size_per_gpu": $MICRO_BATCH_SIZE
}
EOT
export PL_DEEPSPEED_CONFIG_PATH=$CONFIG_JSON
### End

DATA_ARGS="\
    --dataloader_workers 2 \
    --train_batchsize $MICRO_BATCH_SIZE \
    --val_batchsize $MICRO_BATCH_SIZE \
    --test_batchsize $MICRO_BATCH_SIZE \
    --datasets_name IDEA-CCNL/PretrainCorpusDemo \
    "
# 如果你有一批数据,可以参照IDEA-CCNL/PretrainCorpusDemo的格式处理,通过参数传入
# --train_file train.json
# --val_file val.json
# --test_file test.json

MODEL_ARGS="\
    --model_path $MODEL_ROOT_DIR/pretrain \
    --learning_rate 1e-4 \
    --weight_decay 1e-1 \
    --warmup_ratio 0.01 \
    "

MODEL_CHECKPOINT_ARGS="\
    --save_last \
    --save_ckpt_path ${MODEL_ROOT_DIR}/ckpt \
    --load_ckpt_path ${MODEL_ROOT_DIR}/ckpt/last.ckpt \
    "

TRAINER_ARGS="\
    --max_epoch 10 \
    --gpus $GPUS_PER_NODE \
    --num_nodes $NNODES \
    --strategy deepspeed_stage_${ZERO_STAGE} \
    --log_every_n_steps 1 \
    --precision 16 \
    --default_root_dir ${MODEL_ROOT_DIR} \
    --replace_sampler_ddp False \
    "

export options=" \
    $DATA_ARGS \
    $MODEL_ARGS \
    $MODEL_CHECKPOINT_ARGS \
    $TRAINER_ARGS \
    "

python3 pretrain_deberta.py $options
#srun -N $NNODES --gres=gpu:$GPUS_PER_NODE --ntasks-per-node=$GPUS_PER_NODE --cpus-per-task=20 python3 pretrain_deberta.py $options
diff --git a/spaces/skf15963/summary/fengshen/examples/qa_t5/run_predict.sh b/spaces/skf15963/summary/fengshen/examples/qa_t5/run_predict.sh
deleted file mode 100644
index 8b8470ed1136320b75ba6da51209b3c9af9c74d0..0000000000000000000000000000000000000000
---
a/spaces/skf15963/summary/fengshen/examples/qa_t5/run_predict.sh
+++ /dev/null
@@ -1,110 +0,0 @@
#!/bin/bash
#SBATCH --job-name=predict-cmrc
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1 # number of gpus
#SBATCH --cpus-per-task=4 # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH -o $YOUR_SLURM_LOG_PATH/%x-%j.log
#SBATCH -e $YOUR_SLURM_LOG_PATH/%x-%j.err

#
set -x -e

echo "START TIME: $(date)"
MICRO_BATCH_SIZE=8

ROOT_DIR=$YOUR_PROJECT_DIR
DOWNLOAD_MODEL_PATH=$YOUR_PROJECT_DIR/Randeng-T5-784M-QA-Chinese/
#YOUR_MODEL_DIR

if [ ! -d ${ROOT_DIR} ];then
    mkdir ${ROOT_DIR}
    echo ${ROOT_DIR} created!!!!!!!!!!!!!!
else
    echo ${ROOT_DIR} exist!!!!!!!!!!!!!!!
fi

ZERO_STAGE=1

config_json="$ROOT_DIR/ds_config.randeng_t5_dialog_784M.$SLURM_JOBID.json"
export MASTER_PORT=$[RANDOM%10000+30000]

cat <<EOT > $config_json
{
    "train_micro_batch_size_per_gpu": ${MICRO_BATCH_SIZE},
    "steps_per_print": 100,
    "gradient_clipping": 1.0,
    "zero_optimization": {
        "stage": $ZERO_STAGE,
        "contiguous_gradients": false,
        "overlap_comm": true,
        "reduce_scatter": true,
        "reduce_bucket_size": 50000000,
        "allgather_bucket_size": 500000000
    },
}
EOT

export PL_DEEPSPEED_CONFIG_PATH=$config_json
export TORCH_EXTENSIONS_DIR=$YOUR_HOME/tmp/torch_extendsions
# strategy=ddp
strategy=deepspeed_stage_1

TRAINER_ARGS="
    --max_epochs 10 \
    --gpus 1 \
    --num_nodes 1 \
    --strategy ${strategy} \
    --default_root_dir $ROOT_DIR \
    --save_ckpt_path $ROOT_DIR/ckpt \
    --save_top_k 5 \
    --every_n_train_steps 100\
    --monitor val_rougeL_fmeasure \
    --mode max \
    --save_last \
    --check_val_every_n_epoch 1 \
    --num_workers 4 \
    --dataloader_workers 4 \
    --replace_sampler_ddp False \
    --accumulate_grad_batches 2 \
    --formator t5style \
    --filename model-{epoch:02d}-{val_loss:.4f}-{val_rougeL_fmeasure:.3f} \
    --do_eval_only \
    --prediction_res_path $ROOT_DIR/predictions_sampling.txt \
    --decode_strategy sampling \
    --precision 16 \
"

TEST_FILE_PATH=$YOUR_DATA_FILE

DATA_ARGS="
    --train_batchsize $MICRO_BATCH_SIZE \
    --val_batchsize $MICRO_BATCH_SIZE \
    --test_file $TEST_FILE_PATH \
    --max_seq_length 512 \
    --max_knowledge_length 425 \
    --max_target_length 128
"
MODEL_ARGS="
    --pretrained_model_path $DOWNLOAD_MODEL_PATH\
    --tokenizer_type t5_tokenizer \
    --learning_rate 1e-4 \
    --weight_decay 1e-2 \
    --warmup_ratio 0.1 \
    --sheduler_type polynomial \
    --min_learning_rate 1e-5 \
"

SCRIPTS_PATH=$YOUR_PROJECT_DIR/Fengshenbang-LM/fengshen/examples/qa_t5/finetune_t5_cmrc.py

export CMD=" \
    $SCRIPTS_PATH \
    $TRAINER_ARGS \
    $MODEL_ARGS \
    $DATA_ARGS \
    "

echo $CMD
# conda activate fs
# export CUDA_VISIBLE_DEVICES=5
srun python $CMD
diff --git a/spaces/sklearn-docs/post-pruning-decision-trees/app.py b/spaces/sklearn-docs/post-pruning-decision-trees/app.py
deleted file mode 100644
index b6e8bdc5ae72eb4d6babc01a4255fc750d3d16b4..0000000000000000000000000000000000000000
--- a/spaces/sklearn-docs/post-pruning-decision-trees/app.py
+++ /dev/null
@@ -1,115 +0,0 @@
import gradio as gr
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

theme = gr.themes.Monochrome(
    primary_hue="indigo",
    secondary_hue="blue",
    neutral_hue="slate",
)
model_card = f"""
## Description

The **DecisionTreeClassifier** employs a pruning technique that can be configured using the cost complexity parameter, commonly referred to as **ccp_alpha**.
By increasing the value of **ccp_alpha**, a greater number of nodes can be pruned.
In this demo, a DecisionTreeClassifier will be trained on the Breast Cancer dataset.
Then, the effect of **ccp_alpha** in many terms of the tree-based model like the impurity of leaves, depth, number of nodes, and accuracy on train and test data are shown in many figures.
-Based on this information, the best number of **ccp_alpha** is chosen. This demo also shows the results of the best **ccp_alpha** with accuracy on train and test datasets. -You can play around with different ``test size`` and ``random state`` - -## Dataset - -Breast Cancer - -""" - -X, y = load_breast_cancer(return_X_y=True) - -def get_ccp(test_size, random_state): - X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=random_state, test_size=test_size) - clf = DecisionTreeClassifier(random_state=random_state) - path = clf.cost_complexity_pruning_path(X_train, y_train) - ccp_alphas, impurities = path.ccp_alphas, path.impurities - - fig1, ax1 = plt.subplots() - ax1.plot(ccp_alphas[:-1], impurities[:-1], marker="o", drawstyle="steps-post") - ax1.set_xlabel("effective alpha") - ax1.set_ylabel("total impurity of leaves") - ax1.set_title("Total Impurity vs effective alpha for training set") - - clfs = [] - for ccp_alpha in ccp_alphas: - clf = DecisionTreeClassifier(random_state=0, ccp_alpha=ccp_alpha) - clf.fit(X_train, y_train) - clfs.append(clf) - clfs = clfs[:-1] - ccp_alphas = ccp_alphas[:-1] - - node_counts = [clf.tree_.node_count for clf in clfs] - depth = [clf.tree_.max_depth for clf in clfs] - - fig2, ax2 = plt.subplots() - ax2.plot(ccp_alphas, node_counts, marker="o", drawstyle="steps-post") - ax2.set_xlabel("alpha") - ax2.set_ylabel("number of nodes") - ax2.set_title("Number of nodes vs alpha") - - fig3, ax3 = plt.subplots() - ax3.plot(ccp_alphas, depth, marker="o", drawstyle="steps-post") - ax3.set_xlabel("alpha") - ax3.set_ylabel("depth of tree") - ax3.set_title("Depth vs alpha") - fig3.tight_layout() - - train_scores = [clf.score(X_train, y_train) for clf in clfs] - test_scores = [clf.score(X_test, y_test) for clf in clfs] - - fig4, ax4 = plt.subplots() - ax4.set_xlabel("alpha") - ax4.set_ylabel("accuracy") - ax4.set_title("Accuracy vs alpha for training and testing sets") - ax4.plot(ccp_alphas, train_scores, marker="o", label="train", 
drawstyle="steps-post") - ax4.plot(ccp_alphas, test_scores, marker="o", label="test", drawstyle="steps-post") - ax4.legend() - - score_gap = [] - for train_score, test_score, ccp_alpha in zip(test_scores, train_scores, ccp_alphas): - score_gap.append((train_score, test_score, abs(train_score - test_score), ccp_alpha)) - score_gap.sort(key=lambda a: a[2]) - top3_score = score_gap[:3] - top3_score.sort(key=lambda a: a[1], reverse=True) - text = f"Train accuracy: {round(top3_score[0][0], 2)}, Test accuracy: {round(top3_score[0][1], 2)}, The best value of cost complexity parameter alpha (ccp_alpha): {round(top3_score[0][2], 2)}" - return fig1, fig2, fig3, fig4, text - - -with gr.Blocks(theme=theme) as demo: - gr.Markdown(''' -
    -

    ⚒ Post pruning decision trees with cost complexity pruning 🛠

    -
- ''') - gr.Markdown(model_card) - gr.Markdown("Author: Vu Minh Chien. Based on the example from scikit-learn") - test_size = gr.Slider(minimum=0, maximum=1, step=0.1, value=0.2, label="Test size") - random_state = gr.Slider(minimum=0, maximum=2000, step=1, value=0, label="Random state") - - with gr.Row(): - with gr.Column(): - plot_impurity = gr.Plot() - with gr.Column(): - plot_node = gr.Plot() - - with gr.Row(): - with gr.Column(): - plot_depth = gr.Plot() - with gr.Column(): - plot_compare = gr.Plot() - with gr.Row(): - result = gr.Textbox(label="Results") - test_size.change(fn=get_ccp, inputs=[test_size, random_state], outputs=[plot_impurity, plot_node, plot_depth, plot_compare, result]) - random_state.change(fn=get_ccp, inputs=[test_size, random_state], outputs=[plot_impurity, plot_node, plot_depth, plot_compare, result]) - -demo.launch() \ No newline at end of file diff --git a/spaces/smjain/smjainvoice/test_arch.py b/spaces/smjain/smjainvoice/test_arch.py deleted file mode 100644 index 5729474cc66b36a0ea136247b7f6fffbd62dc9dd..0000000000000000000000000000000000000000 --- a/spaces/smjain/smjainvoice/test_arch.py +++ /dev/null @@ -1,65 +0,0 @@ -#!/usr/bin/env python3 -#coding:utf-8 - -import os -import yaml -import paddle -import click -import warnings -warnings.simplefilter('ignore') - -from munch import Munch - -from starganv2vc_paddle.models import build_model - -from starganv2vc_paddle.Utils.ASR.models import ASRCNN -from starganv2vc_paddle.Utils.JDC.model import JDCNet - - -@click.command() -@click.option('-p', '--config_path', default='Configs/config.yml', type=str) - -def main(config_path): - config = yaml.safe_load(open(config_path)) - - # load ASR model - ASR_config = config.get('ASR_config', False) - with open(ASR_config) as f: - ASR_config = yaml.safe_load(f) - ASR_model_config = ASR_config['model_params'] - ASR_model = ASRCNN(**ASR_model_config) - _ = ASR_model.eval() - - # load F0 model - F0_model = JDCNet(num_class=1, seq_len=192) - _ = 
F0_model.eval() - - # build model - _, model_ema = build_model(Munch(config['model_params']), F0_model, ASR_model) - - asr_input = paddle.randn([4, 80, 192]) - print('ASR model input:', asr_input.shape, 'output:', ASR_model(asr_input).shape) - mel_input = paddle.randn([4, 1, 192, 512]) - print('F0 model input:', mel_input.shape, 'output:', [t.shape for t in F0_model(mel_input)]) - - _ = [v.eval() for v in model_ema.values()] - label = paddle.to_tensor([0,1,2,3], dtype=paddle.int64) - latent_dim = model_ema.mapping_network.shared[0].weight.shape[0] - latent_style = paddle.randn([4, latent_dim]) - ref = model_ema.mapping_network(latent_style, label) - mel_input2 = paddle.randn([4, 1, 192, 512]) - style_ref = model_ema.style_encoder(mel_input2, label) - print('StyleGANv2-VC encoder inputs:', mel_input2.shape, 'output:', style_ref.shape, 'should has the same shape as the ref:', ref.shape) - f0_feat = F0_model.get_feature_GAN(mel_input) - out = model_ema.generator(mel_input, style_ref, F0=f0_feat) - print('StyleGANv2-VC inputs:', label.shape, latent_style.shape, mel_input.shape, 'output:', out.shape) - - paddle.save({k: v.state_dict() for k, v in model_ema.items()}, 'test_arch.pd') - file_size = os.path.getsize('test_arch.pd') / float(1024*1024) - print(f'Main models occupied {file_size:.2f} MB') - os.remove('test_arch.pd') - - return 0 - -if __name__=="__main__": - main() diff --git a/spaces/songwy/VITS-Umamusume-voice-synthesizer/text/korean.py b/spaces/songwy/VITS-Umamusume-voice-synthesizer/text/korean.py deleted file mode 100644 index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000 --- a/spaces/songwy/VITS-Umamusume-voice-synthesizer/text/korean.py +++ /dev/null @@ -1,210 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. 
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - ('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - '''Reference https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return '스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n 
for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name = digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in 
tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text) - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def korean_to_ipa(text): - text = korean_to_lazy_ipa(text) - return text.replace('ʧ','tʃ').replace('ʥ','dʑ') diff --git a/spaces/srikanth-nm/ai_seeker/end_calculate.py b/spaces/srikanth-nm/ai_seeker/end_calculate.py deleted file mode 100644 index d6780026e03f69c167f3300dfb6671b6b19bb2e5..0000000000000000000000000000000000000000 --- a/spaces/srikanth-nm/ai_seeker/end_calculate.py +++ /dev/null @@ -1,24 +0,0 @@ -import json - -def calculate_end_from_file(input_file_path, output_file_path): - with open(input_file_path, 'r') as file: - input_data = json.load(file) - - # Iterate through the list of dictionaries and calculate "end" for each one - for item in input_data: - item["end"] =round(item["start"] + item["duration"],2) - del item["duration"] # Remove the "duration" key from each dictionary - - # Save the updated data to a new JSON file - with open(output_file_path, 'w') as output_file: - json.dump(input_data, output_file) - -# # Replace 'input_file.json' with the actual path to your input JSON file -# input_file_path = '/home/bharathi/langchain_experiments/GenAI/transcript.json' - -# # Replace 'output_file.json' with the desired path and filename for the output JSON file -# output_file_path = 
'/home/bharathi/langchain_experiments/GenAI/transcript_end.json' - -# # Call the function to calculate the "end" values and remove "duration" and save the new JSON file -# calculate_end_from_file(input_file_path, output_file_path) - diff --git a/spaces/stomexserde/gpt4-ui/Examples/Catia V5 R22 Download [NEW] ((HOT)).md b/spaces/stomexserde/gpt4-ui/Examples/Catia V5 R22 Download [NEW] ((HOT)).md deleted file mode 100644 index fe88cbc6459dcdbbccff28583e1e0b6b71f87f3c..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Catia V5 R22 Download [NEW] ((HOT)).md +++ /dev/null @@ -1,30 +0,0 @@ -
    -

    CATIA V5 R22 Download: A Comprehensive Guide

    -

    CATIA V5 R22 is a software package that allows you to design, engineer, and simulate complex products and systems. It is part of the 3DEXPERIENCE platform, which also includes other products such as 3DEXPERIENCE CATIA, ENOVIA, DELMIA, and SIMULIA. CATIA V5 R22 is also known as CATIA V5-6R2022 or CATIA V5-6R32.

    -

    Catia V5 R22 Download [NEW]


    Download Zip > https://urlgoal.com/2uIbZw



    -

    If you want to download CATIA V5 R22, you need to have a valid license and access to the Dassault Systèmes Downloads portal[^1^]. There, you can find the latest version of the software, as well as the license server and license keys. You can also download other resources such as documentation, service packs, and hotfixes.

    -

    To download CATIA V5 R22, you need to follow these steps:

    -
      -
    1. Go to the Dassault Systèmes Downloads portal[^1^] and log in with your credentials.
    2. -
    3. Select the product line "CATIA V5" and the release "V5-6R2022".
    4. -
    5. Choose the operating system and language that match your system requirements.
    6. -
    7. Download the installation files and extract them to a folder on your computer.
    8. -
    9. Run the setup.exe file and follow the instructions on the screen.
    10. -
    11. Activate your license using the license server and license keys that you downloaded earlier.
    12. -
    -

    Congratulations! You have successfully downloaded and installed CATIA V5 R22 on your computer. You can now start using it to create amazing products and systems.

    -

    CATIA V5 R22 is not just a software for creating 3D models. It also has many features that can help you with different aspects of product development, such as analysis, simulation, machining, and collaboration. Here are some of the features of CATIA V5 R22 that you should know about:

    -
      -
    • Mechanical Design: CATIA V5 R22 provides tools for creating solid, hybrid, and sheet metal parts, as well as assemblies and drawings. You can use intuitive and parametric modeling techniques to create complex shapes and structures. You can also use knowledgeware to capture and reuse design knowledge and rules.
    • -
    • Shape Design & Styling: CATIA V5 R22 offers tools for creating and modifying freeform surfaces and mechanical shapes. You can use sketching, sweeping, lofting, blending, and other methods to create smooth and realistic surfaces. You can also apply materials, textures, and colors to enhance the appearance of your models.
    • -
    • Product Synthesis: CATIA V5 R22 enables you to perform complex product reviews and simulations using digital mock-ups (DMUs). You can check the feasibility, functionality, and performance of your products using various analysis tools. You can also use generative design to optimize your products based on your specifications and constraints.
    • -
    • Equipment & Systems Engineering: CATIA V5 R22 allows you to integrate and exchange electrical, fluid, and electronic systems design information with your mechanical design. You can create wiring diagrams, piping diagrams, circuit diagrams, and other schematics using dedicated tools. You can also simulate the behavior and interactions of your systems using functional mock-ups (FMUs).
    • -
    • Analysis: CATIA V5 R22 helps you to optimize your product performance faster with integrated design analysis. You can perform structural analysis, thermal analysis, fluid analysis, vibration analysis, and other types of analysis using finite element methods (FEM) or computational fluid dynamics (CFD). You can also validate your designs against various standards and regulations.
    • -
    • Machining: CATIA V5 R22 supports the entire machining process from design to manufacturing. You can create machining programs for various types of machines such as milling machines, turning machines, drilling machines, etc. You can also simulate the machining operations and verify the quality of your parts.
    • -
    • Infrastructure: CATIA V5 R22 provides a scalable platform for collaborative product creation and product data management. You can access the 3DEXPERIENCE platform capabilities such as social collaboration, enterprise management, and dashboarding. You can also manage your product data using ENOVIA or other PLM systems.
    • -
    • CAA-RADE: CATIA V5 R22 offers a comprehensive set of tools, guides, and APIs for developing custom applications and extensions. You can use CAA-RADE to customize your user interface, add new features, automate tasks, integrate with other software, etc.
    • -
    • Web-based Learning Solutions: CATIA V5 R22 provides an easy-to-use learning and support system that gives you all the required information and training in one source. You can access online courses, tutorials, videos, documentation, FAQs, forums, etc. to learn how to use CATIA V5 R22 effectively.
    • -
    -

    As you can see, CATIA V5 R22 is a powerful software suite that can help you with every stage of product development. Whether you are a designer, engineer, analyst, or manufacturer, you can benefit from the features of CATIA V5 R22.

    e93f5a0c3f
    -
    -
    \ No newline at end of file diff --git a/spaces/subatomicseer/2022-AdaIN-pytorch-Demo/infer_func.py b/spaces/subatomicseer/2022-AdaIN-pytorch-Demo/infer_func.py deleted file mode 100644 index 55130b2f36ad759302672fca4f739e28608666eb..0000000000000000000000000000000000000000 --- a/spaces/subatomicseer/2022-AdaIN-pytorch-Demo/infer_func.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -import torchvision.transforms -from PIL import Image - -from AdaIN import AdaINNet -from utils import adaptive_instance_normalization, transform, linear_histogram_matching - -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - - -def style_transfer(content_tensor, style_tensor, encoder, decoder, alpha=1.0): - """ - Given content image and style image, generate feature maps with encoder, apply - neural style transfer with adaptive instance normalization, generate output image - with decoder - - Args: - content_tensor (torch.FloatTensor): Content image - style_tensor (torch.FloatTensor): Style Image - encoder: Encoder (vgg19) network - decoder: Decoder network - alpha (float, default=1.0): Weight of style image feature - - Return: - output_tensor (torch.FloatTensor): Style Transfer output image - """ - - content_enc = encoder(content_tensor) - style_enc = encoder(style_tensor) - - transfer_enc = adaptive_instance_normalization(content_enc, style_enc) - - mix_enc = alpha * transfer_enc + (1 - alpha) * content_enc - return decoder(mix_enc) - - -def convert(content_path, style_path, vgg_weights_path, decoder_weights_path, alpha, color_control): - - vgg = torch.load(vgg_weights_path) - model = AdaINNet(vgg).to(device) - model.decoder.load_state_dict(torch.load(decoder_weights_path)) - model.eval() - - # Prepare image transform - t = transform(512) - - # load images - content_img = Image.open(content_path) - content_tensor = t(content_img).unsqueeze(0).to(device) - style_tensor = t(Image.open(style_path)).unsqueeze(0).to(device) - - if color_control: - style_tensor = 
linear_histogram_matching(content_tensor, style_tensor) - - with torch.no_grad(): - out_tensor = style_transfer(content_tensor, style_tensor, model.encoder, model.decoder, alpha).cpu() - - outimage_fname = 'output.png' - torchvision.utils.save_image(out_tensor.squeeze(0), outimage_fname) - - return outimage_fname diff --git a/spaces/sunshineatnoon/TextureScraping/swapae/util/util.py b/spaces/sunshineatnoon/TextureScraping/swapae/util/util.py deleted file mode 100644 index 934ec56dbda48d78d3a581d0db437701abe0e4ce..0000000000000000000000000000000000000000 --- a/spaces/sunshineatnoon/TextureScraping/swapae/util/util.py +++ /dev/null @@ -1,556 +0,0 @@ -"""This module contains simple helper functions """ -from __future__ import print_function -import torch -import numbers -import torch.nn as nn -import torchvision -import torch.nn.functional as F -import math -import numpy as np -from PIL import Image -import os -import importlib -import argparse -from argparse import Namespace -from sklearn.decomposition import PCA as PCA - - -def normalize(v): - if type(v) == list: - return [normalize(vv) for vv in v] - - return v * torch.rsqrt((torch.sum(v ** 2, dim=1, keepdim=True) + 1e-8)) - -def slerp(a, b, r): - d = torch.sum(a * b, dim=-1, keepdim=True) - p = r * torch.acos(d * (1 - 1e-4)) - c = normalize(b - d * a) - d = a * torch.cos(p) + c * torch.sin(p) - return normalize(d) - - -def lerp(a, b, r): - if type(a) == list or type(a) == tuple: - return [lerp(aa, bb, r) for aa, bb in zip(a, b)] - return a * (1 - r) + b * r - - -def madd(a, b, r): - if type(a) == list or type(a) == tuple: - return [madd(aa, bb, r) for aa, bb in zip(a, b)] - return a + b * r - -def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ('yes', 'true', 't', 'y', '1'): - return True - elif v.lower() in ('no', 'false', 'f', 'n', '0'): - return False - else: - raise argparse.ArgumentTypeError('Boolean value expected.') - - -def copyconf(default_opt, **kwargs): - conf = 
Namespace(**vars(default_opt)) - for key in kwargs: - setattr(conf, key, kwargs[key]) - return conf - - -def find_class_in_module(target_cls_name, module): - target_cls_name = target_cls_name.replace('_', '').lower() - clslib = importlib.import_module(module) - cls = None - for name, clsobj in clslib.__dict__.items(): - if name.lower() == target_cls_name: - cls = clsobj - - assert cls is not None, "In %s, there should be a class whose name matches %s in lowercase without underscore(_)" % (module, target_cls_name) - - return cls - - -def tile_images(imgs, picturesPerRow=4): - """ Code borrowed from - https://stackoverflow.com/questions/26521365/cleanly-tile-numpy-array-of-images-stored-in-a-flattened-1d-format/26521997 - """ - - # Padding - if imgs.shape[0] % picturesPerRow == 0: - rowPadding = 0 - else: - rowPadding = picturesPerRow - imgs.shape[0] % picturesPerRow - if rowPadding > 0: - imgs = np.concatenate([imgs, np.zeros((rowPadding, *imgs.shape[1:]), dtype=imgs.dtype)], axis=0) - - # Tiling Loop (The conditionals are not necessary anymore) - tiled = [] - for i in range(0, imgs.shape[0], picturesPerRow): - tiled.append(np.concatenate([imgs[j] for j in range(i, i + picturesPerRow)], axis=1)) - - tiled = np.concatenate(tiled, axis=0) - return tiled - - -# Converts a Tensor into a Numpy array -# |imtype|: the desired type of the converted numpy array -def tensor2im(image_tensor, imtype=np.uint8, normalize=True, tile=2): - if isinstance(image_tensor, list): - image_numpy = [] - for i in range(len(image_tensor)): - image_numpy.append(tensor2im(image_tensor[i], imtype, normalize)) - return image_numpy - - if len(image_tensor.shape) == 4: - # transform each image in the batch - images_np = [] - for b in range(image_tensor.shape[0]): - one_image = image_tensor[b] - one_image_np = tensor2im(one_image) - images_np.append(one_image_np.reshape(1, *one_image_np.shape)) - images_np = np.concatenate(images_np, axis=0) - if tile is not False: - tile = 
max(min(images_np.shape[0] // 2, 4), 1) if tile is True else tile - images_tiled = tile_images(images_np, picturesPerRow=tile) - return images_tiled - else: - return images_np - - if len(image_tensor.shape) == 2: - assert False - #imagce_tensor = image_tensor.unsqueeze(0) - image_numpy = image_tensor.detach().cpu().numpy() if type(image_tensor) is not np.ndarray else image_tensor - if normalize: - image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0 - else: - image_numpy = np.transpose(image_numpy, (1, 2, 0)) * 255.0 - image_numpy = np.clip(image_numpy, 0, 255) - if image_numpy.shape[2] == 1: - image_numpy = np.repeat(image_numpy, 3, axis=2) - return image_numpy.astype(imtype) - - -def toPILImage(images, tile=None): - if isinstance(images, list): - if all(['tensor' in str(type(image)).lower() for image in images]): - return toPILImage(torch.cat([im.cpu() for im in images], dim=0), tile) - return [toPILImage(image, tile=tile) for image in images] - - if 'ndarray' in str(type(images)).lower(): - return toPILImage(torch.from_numpy(images)) - - assert 'tensor' in str(type(images)).lower(), "input of type %s cannot be handled." 
% str(type(images)) - - if tile is None: - max_width = 2560 - tile = min(images.size(0), int(max_width / images.size(3))) - - return Image.fromarray(tensor2im(images, tile=tile)) - - -def diagnose_network(net, name='network'): - """Calculate and print the mean of average absolute(gradients) - - Parameters: - net (torch network) -- Torch network - name (str) -- the name of the network - """ - mean = 0.0 - count = 0 - for param in net.parameters(): - if param.grad is not None: - mean += torch.mean(torch.abs(param.grad.data)) - count += 1 - if count > 0: - mean = mean / count - print(name) - print(mean) - - -def save_image(image_numpy, image_path, aspect_ratio=1.0): - """Save a numpy image to the disk - - Parameters: - image_numpy (numpy array) -- input numpy array - image_path (str) -- the path of the image - """ - - image_pil = Image.fromarray(image_numpy) - h, w, _ = image_numpy.shape - - if aspect_ratio is None: - pass - elif aspect_ratio > 1.0: - image_pil = image_pil.resize((h, int(w * aspect_ratio)), Image.BICUBIC) - elif aspect_ratio < 1.0: - image_pil = image_pil.resize((int(h / aspect_ratio), w), Image.BICUBIC) - image_pil.save(image_path) - - - -def print_numpy(x, val=True, shp=False): - """Print the mean, min, max, median, std, and size of a numpy array - - Parameters: - val (bool) -- if print the values of the numpy array - shp (bool) -- if print the shape of the numpy array - """ - x = x.astype(np.float64) - if shp: - print('shape,', x.shape) - if val: - x = x.flatten() - print('mean = %3.3f, min = %3.3f, max = %3.3f, median = %3.3f, std=%3.3f' % ( - np.mean(x), np.min(x), np.max(x), np.median(x), np.std(x))) - - -def mkdirs(paths): - """create empty directories if they don't exist - - Parameters: - paths (str list) -- a list of directory paths - """ - if isinstance(paths, list) and not isinstance(paths, str): - for path in paths: - mkdir(path) - else: - mkdir(paths) - - -def mkdir(path): - """create a single empty directory if it didn't exist - - 
Parameters: - path (str) -- a single directory path - """ - if not os.path.exists(path): - os.makedirs(path) - - -def visualize_spatial_code(sp): - device = sp.device - #sp = (sp - sp.min()) / (sp.max() - sp.min() + 1e-7) - if sp.size(1) <= 2: - sp = sp.repeat([1, 3, 1, 1])[:, :3, :, :] - if sp.size(1) == 3: - pass - else: - sp = sp.detach().cpu().numpy() - X = np.transpose(sp, (0, 2, 3, 1)) - B, H, W = X.shape[0], X.shape[1], X.shape[2] - X = np.reshape(X, (-1, X.shape[3])) - X = X - X.mean(axis=0, keepdims=True) - try: - Z = PCA(3).fit_transform(X) - except ValueError: - print("Running PCA on the structure code has failed.") - print("This is likely a bug of scikit-learn in version 0.18.1.") - print("https://stackoverflow.com/a/42764378") - print("The visualization of the structure code on visdom won't work.") - return torch.zeros(B, 3, H, W, device=device) - sp = np.transpose(np.reshape(Z, (B, H, W, -1)), (0, 3, 1, 2)) - sp = (sp - sp.min()) / (sp.max() - sp.min()) * 2 - 1 - sp = torch.from_numpy(sp).to(device) - return sp - - -def blank_tensor(w, h): - return torch.ones(1, 3, h, w) - - - -class RandomSpatialTransformer: - def __init__(self, opt, bs): - self.opt = opt - #self.resample_transformation(bs) - - - def create_affine_transformation(self, ref, rot, sx, sy, tx, ty): - return torch.stack([-ref * sx * torch.cos(rot), -sy * torch.sin(rot), tx, - -ref * sx * torch.sin(rot), sy * torch.cos(rot), ty], axis=1) - - def resample_transformation(self, bs, device, reflection=None, rotation=None, scale=None, translation=None): - dev = device - zero = torch.zeros((bs), device=dev) - if reflection is None: - #if "ref" in self.opt.random_transformation_mode: - ref = torch.round(torch.rand((bs), device=dev)) * 2 - 1 - #else: - # ref = 1.0 - else: - ref = reflection - - if rotation is None: - #if "rot" in self.opt.random_transformation_mode: - max_rotation = 30 * math.pi / 180 - rot = torch.rand((bs), device=dev) * (2 * max_rotation) - max_rotation - #else: - # rot = 0.0 - 
else: - rot = rotation - - if scale is None: - #if "scale" in self.opt.random_transformation_mode: - min_scale = 1.0 - max_scale = 1.0 - sx = torch.rand((bs), device=dev) * (max_scale - min_scale) + min_scale - sy = torch.rand((bs), device=dev) * (max_scale - min_scale) + min_scale - #else: - # sx, sy = 1.0, 1.0 - else: - sx, sy = scale - - tx, ty = zero, zero - - A = torch.stack([ref * sx * torch.cos(rot), -sy * torch.sin(rot), tx, - ref * sx * torch.sin(rot), sy * torch.cos(rot), ty], axis=1) - return A.view(bs, 2, 3) - - - - def forward_transform(self, x, size): - if type(x) == list: - return [self.forward_transform(xx) for xx in x] - - affine_param = self.resample_transformation(x.size(0), x.device) - affine_grid = F.affine_grid(affine_param, (x.size(0), x.size(1), size[0], size[1]), align_corners=False) - x = F.grid_sample(x, affine_grid, padding_mode='reflection', align_corners=False) - - return x - - -def apply_random_crop(x, target_size, scale_range, num_crops=1, return_rect=False): - # build grid - B = x.size(0) * num_crops - flip = torch.round(torch.rand(B, 1, 1, 1, device=x.device)) * 2 - 1.0 - unit_grid_x = torch.linspace(-1.0, 1.0, target_size, device=x.device)[np.newaxis, np.newaxis, :, np.newaxis].repeat(B, target_size, 1, 1) - unit_grid_y = unit_grid_x.transpose(1, 2) - unit_grid = torch.cat([unit_grid_x * flip, unit_grid_y], dim=3) - - - #crops = [] - x = x.unsqueeze(1).expand(-1, num_crops, -1, -1, -1).flatten(0, 1) - #for i in range(num_crops): - scale = torch.rand(B, 1, 1, 2, device=x.device) * (scale_range[1] - scale_range[0]) + scale_range[0] - offset = (torch.rand(B, 1, 1, 2, device=x.device) * 2 - 1) * (1 - scale) - sampling_grid = unit_grid * scale + offset - crop = F.grid_sample(x, sampling_grid, align_corners=False) - #crops.append(crop) - #crop = torch.stack(crops, dim=1) - crop = crop.view(B // num_crops, num_crops, crop.size(1), crop.size(2), crop.size(3)) - - return crop - - - - -def five_crop_noresize(A): - Y, X = A.size(2) // 3, 
A.size(3) // 3 - H, W = Y * 2, X * 2 - return torch.stack([A[:, :, 0:0+H, 0:0+W], - A[:, :, Y:Y+H, 0:0+W], - A[:, :, Y:Y+H, X:X+W], - A[:, :, 0:0+H, X:X+W], - A[:, :, Y//2:Y//2+H, X//2:X//2+W]], - dim=1) # return 5-dim tensor - - -def random_crop_noresize(A, crop_size): - offset_y = np.random.randint(A.size(2) - crop_size[0]) - offset_x = np.random.randint(A.size(3) - crop_size[1]) - return A[:, :, offset_y:offset_y + crop_size[0], offset_x:offset_x + crop_size[1]], (offset_y, offset_x) - - -def random_crop_with_resize(A, crop_size): - #size_y = np.random.randint(crop_size[0], A.size(2) + 1) - #size_x = np.random.randint(crop_size[1], A.size(3) + 1) - #size_y, size_x = crop_size - size_y = max(crop_size[0], np.random.randint(A.size(2) // 3, A.size(2) + 1)) - size_x = max(crop_size[1], np.random.randint(A.size(3) // 3, A.size(3) + 1)) - offset_y = np.random.randint(A.size(2) - size_y + 1) - offset_x = np.random.randint(A.size(3) - size_x + 1) - crop_rect = (offset_y, offset_x, size_y, size_x) - resized = crop_with_resize(A, crop_rect, crop_size) - #print('resized %s to %s' % (A.size(), resized.size())) - return resized, crop_rect - - -def crop_with_resize(A, crop_rect, return_size): - offset_y, offset_x, size_y, size_x = crop_rect - crop = A[:, :, offset_y:offset_y + size_y, offset_x:offset_x + size_x] - resized = F.interpolate(crop, size=return_size, mode='bilinear', align_corners=False) - #print('resized %s to %s' % (A.size(), resized.size())) - return resized - - -def compute_similarity_logit(x, y, p=1, compute_interdistances=True): - - def compute_dist(x, y, p): - if p == 2: - return ((x - y) ** 2).sum(dim=-1).sqrt() - else: - return (x - y).abs().sum(dim=-1) - C = x.shape[-1] - - if len(x.shape) == 2: - if compute_interdistances: - dist = torch.cdist(x[None, :, :], y[None, :, :], p)[0] - else: - dist = compute_dist(x, y, p) - if len(x.shape) == 3: - if compute_interdistances: - dist = torch.cdist(x, y, p) - else: - dist = compute_dist(x, y, p) - - if p == 1: - 
dist = 1 - dist / math.sqrt(C) - elif p == 2: - dist = 1 - 0.5 * (dist ** 2) - - return dist / 0.07 - - -def set_diag_(x, value): - assert x.size(-2) == x.size(-1) - L = x.size(-2) - identity = torch.eye(L, dtype=torch.bool, device=x.device) - identity = identity.view([1] * (len(x.shape) - 2) + [L, L]) - x.masked_fill_(identity, value) - - -def to_numpy(metric_dict): - new_dict = {} - for k, v in metric_dict.items(): - if "numpy" not in str(type(v)): - v = v.detach().cpu().mean().numpy() - new_dict[k] = v - return new_dict - - -def is_custom_kernel_supported(): - version_str = str(torch.version.cuda).split(".") - major = version_str[0] - minor = version_str[1] - return (int(major), int(minor)) >= (10, 1) # lexicographic compare; the old `and` check wrongly rejected CUDA 11.0+ - - -def shuffle_batch(x): - B = x.size(0) - perm = torch.randperm(B, dtype=torch.long, device=x.device) - return x[perm] - - -def unravel_index(index, shape): - out = [] - for dim in reversed(shape): - out.append(index % dim) - index = index // dim - return tuple(reversed(out)) - - -def quantize_color(x, num=64): - return (x * num / 2).round() * (2 / num) - - -def resize2d_tensor(x, size_or_tensor_of_size): - if torch.is_tensor(size_or_tensor_of_size): - size = size_or_tensor_of_size.size() - elif isinstance(size_or_tensor_of_size, np.ndarray): - size = size_or_tensor_of_size.shape - else: - size = size_or_tensor_of_size - - if isinstance(size, tuple) or isinstance(size, list): - return F.interpolate(x, size[-2:], - mode='bilinear', align_corners=False) - else: - raise ValueError("%s is unrecognized" % str(type(size))) - - -def correct_resize(t, size, mode=Image.BICUBIC): - device = t.device - t = t.detach().cpu() - resized = [] - for i in range(t.size(0)): - one_t = t[i:i+1] - one_image = Image.fromarray(tensor2im(one_t, tile=1)).resize(size, mode) # honor the mode argument instead of hardcoding BICUBIC - resized_t = torchvision.transforms.functional.to_tensor(one_image) * 2 - 1.0 - resized.append(resized_t) - return torch.stack(resized, dim=0).to(device) - - - - -class GaussianSmoothing(nn.Module): - """ - 
Apply gaussian smoothing on a - 1d, 2d or 3d tensor. Filtering is performed seperately for each channel - in the input using a depthwise convolution. - Arguments: - channels (int, sequence): Number of channels of the input tensors. Output will - have this number of channels as well. - kernel_size (int, sequence): Size of the gaussian kernel. - sigma (float, sequence): Standard deviation of the gaussian kernel. - dim (int, optional): The number of dimensions of the data. - Default value is 2 (spatial). - """ - def __init__(self, channels, kernel_size, sigma, dim=2): - super(GaussianSmoothing, self).__init__() - if isinstance(kernel_size, numbers.Number): - self.pad_size = kernel_size // 2 - kernel_size = [kernel_size] * dim - else: - raise NotImplementedError() - - if isinstance(sigma, numbers.Number): - sigma = [sigma] * dim - - # The gaussian kernel is the product of the - # gaussian function of each dimension. - kernel = 1 - meshgrids = torch.meshgrid( - [ - torch.arange(size, dtype=torch.float32) - for size in kernel_size - ] - ) - for size, std, mgrid in zip(kernel_size, sigma, meshgrids): - mean = (size - 1) / 2 - kernel *= 1 / (std * math.sqrt(2 * math.pi)) * \ - torch.exp(-((mgrid - mean) / std) ** 2 / 2) - - # Make sure sum of values in gaussian kernel equals 1. - kernel = kernel / (torch.sum(kernel)) - - - # Reshape to depthwise convolutional weight - kernel = kernel.view(1, 1, *kernel.size()) - kernel = kernel.repeat(channels, *[1] * (kernel.dim() - 1)) - - self.register_buffer('weight', kernel) - self.groups = channels - - if dim == 1: - self.conv = F.conv1d - elif dim == 2: - self.conv = F.conv2d - elif dim == 3: - self.conv = F.conv3d - else: - raise RuntimeError( - 'Only 1, 2 and 3 dimensions are supported. Received {}.'.format(dim) - ) - - - def forward(self, input): - """ - Apply gaussian filter to input. - Arguments: - input (torch.Tensor): Input to apply gaussian filter on. - Returns: - filtered (torch.Tensor): Filtered output. 
- """ - x = F.pad(input, [self.pad_size] * 4, mode="reflect") - return self.conv(x, weight=self.weight, groups=self.groups) diff --git a/spaces/supertori/files/stable-diffusion-webui/modules/extensions.py b/spaces/supertori/files/stable-diffusion-webui/modules/extensions.py deleted file mode 100644 index 5918b9c4ec578421e70f09f754f2c9b2dafef03a..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/modules/extensions.py +++ /dev/null @@ -1,107 +0,0 @@ -import os -import sys -import traceback - -import time -import git - -from modules import paths, shared - -extensions = [] -extensions_dir = os.path.join(paths.data_path, "extensions") -extensions_builtin_dir = os.path.join(paths.script_path, "extensions-builtin") - -if not os.path.exists(extensions_dir): - os.makedirs(extensions_dir) - -def active(): - return [x for x in extensions if x.enabled] - - -class Extension: - def __init__(self, name, path, enabled=True, is_builtin=False): - self.name = name - self.path = path - self.enabled = enabled - self.status = '' - self.can_update = False - self.is_builtin = is_builtin - self.version = '' - - repo = None - try: - if os.path.exists(os.path.join(path, ".git")): - repo = git.Repo(path) - except Exception: - print(f"Error reading github repository info from {path}:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - - if repo is None or repo.bare: - self.remote = None - else: - try: - self.remote = next(repo.remote().urls, None) - self.status = 'unknown' - head = repo.head.commit - ts = time.asctime(time.gmtime(repo.head.commit.committed_date)) - self.version = f'{head.hexsha[:8]} ({ts})' - - except Exception: - self.remote = None - - def list_files(self, subdir, extension): - from modules import scripts - - dirpath = os.path.join(self.path, subdir) - if not os.path.isdir(dirpath): - return [] - - res = [] - for filename in sorted(os.listdir(dirpath)): - res.append(scripts.ScriptFile(self.path, filename, 
os.path.join(dirpath, filename))) - - res = [x for x in res if os.path.splitext(x.path)[1].lower() == extension and os.path.isfile(x.path)] - - return res - - def check_updates(self): - repo = git.Repo(self.path) - for fetch in repo.remote().fetch(dry_run=True): - if fetch.flags != fetch.HEAD_UPTODATE: - self.can_update = True - self.status = "behind" - return - - self.can_update = False - self.status = "latest" - - def fetch_and_reset_hard(self): - repo = git.Repo(self.path) - # Fix: `error: Your local changes to the following files would be overwritten by merge`, - # because WSL2 Docker sets 755 file permissions instead of 644, which results in this error. - repo.git.fetch(all=True) - repo.git.reset('origin', hard=True) - - -def list_extensions(): - extensions.clear() - - if not os.path.isdir(extensions_dir): - return - - paths = [] - for dirname in [extensions_dir, extensions_builtin_dir]: - if not os.path.isdir(dirname): - continue # a missing directory should not abort the scan of the remaining ones - - for extension_dirname in sorted(os.listdir(dirname)): - path = os.path.join(dirname, extension_dirname) - if not os.path.isdir(path): - continue - - paths.append((extension_dirname, path, dirname == extensions_builtin_dir)) - - for dirname, path, is_builtin in paths: - extension = Extension(name=dirname, path=path, enabled=dirname not in shared.opts.disabled_extensions, is_builtin=is_builtin) - extensions.append(extension) - diff --git a/spaces/supertori/files/stable-diffusion-webui/modules/gfpgan_model.py b/spaces/supertori/files/stable-diffusion-webui/modules/gfpgan_model.py deleted file mode 100644 index bc0c5f738e086225505af9738862fde4eecfa4a9..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/modules/gfpgan_model.py +++ /dev/null @@ -1,116 +0,0 @@ -import os -import sys -import traceback - -import facexlib -import gfpgan - -import modules.face_restoration -from modules import paths, shared, devices, modelloader - -model_dir = "GFPGAN" -user_path = None -model_path = 
os.path.join(paths.models_path, model_dir) -model_url = "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth" -have_gfpgan = False -loaded_gfpgan_model = None - - -def gfpgann(): - global loaded_gfpgan_model - global model_path - if loaded_gfpgan_model is not None: - loaded_gfpgan_model.gfpgan.to(devices.device_gfpgan) - return loaded_gfpgan_model - - if gfpgan_constructor is None: - return None - - models = modelloader.load_models(model_path, model_url, user_path, ext_filter="GFPGAN") - if len(models) == 1 and "http" in models[0]: - model_file = models[0] - elif len(models) != 0: - latest_file = max(models, key=os.path.getctime) - model_file = latest_file - else: - print("Unable to load gfpgan model!") - return None - if hasattr(facexlib.detection.retinaface, 'device'): - facexlib.detection.retinaface.device = devices.device_gfpgan - model = gfpgan_constructor(model_path=model_file, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None, device=devices.device_gfpgan) - loaded_gfpgan_model = model - - return model - - -def send_model_to(model, device): - model.gfpgan.to(device) - model.face_helper.face_det.to(device) - model.face_helper.face_parse.to(device) - - -def gfpgan_fix_faces(np_image): - model = gfpgann() - if model is None: - return np_image - - send_model_to(model, devices.device_gfpgan) - - np_image_bgr = np_image[:, :, ::-1] - cropped_faces, restored_faces, gfpgan_output_bgr = model.enhance(np_image_bgr, has_aligned=False, only_center_face=False, paste_back=True) - np_image = gfpgan_output_bgr[:, :, ::-1] - - model.face_helper.clean_all() - - if shared.opts.face_restoration_unload: - send_model_to(model, devices.cpu) - - return np_image - - -gfpgan_constructor = None - - -def setup_model(dirname): - global model_path - if not os.path.exists(model_path): - os.makedirs(model_path) - - try: - from gfpgan import GFPGANer - from facexlib import detection, parsing - global user_path - global have_gfpgan - global 
gfpgan_constructor - - load_file_from_url_orig = gfpgan.utils.load_file_from_url - facex_load_file_from_url_orig = facexlib.detection.load_file_from_url - facex_load_file_from_url_orig2 = facexlib.parsing.load_file_from_url - - def my_load_file_from_url(**kwargs): - return load_file_from_url_orig(**dict(kwargs, model_dir=model_path)) - - def facex_load_file_from_url(**kwargs): - return facex_load_file_from_url_orig(**dict(kwargs, save_dir=model_path, model_dir=None)) - - def facex_load_file_from_url2(**kwargs): - return facex_load_file_from_url_orig2(**dict(kwargs, save_dir=model_path, model_dir=None)) - - gfpgan.utils.load_file_from_url = my_load_file_from_url - facexlib.detection.load_file_from_url = facex_load_file_from_url - facexlib.parsing.load_file_from_url = facex_load_file_from_url2 - user_path = dirname - have_gfpgan = True - gfpgan_constructor = GFPGANer - - class FaceRestorerGFPGAN(modules.face_restoration.FaceRestoration): - def name(self): - return "GFPGAN" - - def restore(self, np_image): - return gfpgan_fix_faces(np_image) - - shared.face_restorers.append(FaceRestorerGFPGAN()) - except Exception: - print("Error setting up GFPGAN:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/1220 Rld Avatar Keygen V1 01 Rar ((BETTER)).md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/1220 Rld Avatar Keygen V1 01 Rar ((BETTER)).md deleted file mode 100644 index c82326bae6dae8aecab78730ebb1457b44388ff0..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/1220 Rld Avatar Keygen V1 01 Rar ((BETTER)).md +++ /dev/null @@ -1,64 +0,0 @@ -

    1220 Rld Avatar Keygen V1 01 Rar


    Downloadhttps://cinurl.com/2uEXoU



    - -A good starting point is to create a small list of questions to ask yourself about your work overload. Reflect on the questions below to gain insights about your current situation: - -1. How is my current workload?. - -2. How much time do I spend doing my job?. - -3. How much time do I spend on administrative work?. - -4. How much time do I spend in meetings?. - -5. How much time do I spend on teaching?. - -6. How much time do I spend on research?. - -7. How much time do I spend on supervision?. - -8. How much time do I spend on other voluntary activities?. - -9. How do I feel about my current workload?. - -10. How do I feel about my current work-life balance?. - -11. How do I feel about my supervisory role?. - -12. How do I feel about my job security?. - -13. How do I feel about my institutional support for teaching and research?. - -14. How do I feel about the amount of teaching I do?. - -15. How do I feel about the amount of research I do?. - -16. How do I feel about the amount of administrative work I do?. - -17. How do I feel about the amount of time I spend on supervision?. - -18. How do I feel about the amount of time I spend on other voluntary activities?. - -19. How do I feel about the amount of time I spend on teaching?. - -20. How do I feel about the amount of time I spend on research?. - -21. How do I feel about the amount of time I spend on administrative work?. - -22. How do I feel about the amount of time I spend in meetings?. - -23. How do I feel about the amount of time I spend on supervising?. - -24. How do I feel about the amount of time I spend on other voluntary activities?. - -25. How do I feel about my job security?. - -26. How do I feel about the amount of time I spend on my job?. - -27. How do I feel about my overall work-life balance?. - -28. How do I feel about my job?. - -29 4fefd39f24
    -
    -
    -

    diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Def Jam Fight For Ny Full Game Download Pc.rar Fixed.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Def Jam Fight For Ny Full Game Download Pc.rar Fixed.md deleted file mode 100644 index 849a8410018d3a8f0629fa969e7aeea1c886c012..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Def Jam Fight For Ny Full Game Download Pc.rar Fixed.md +++ /dev/null @@ -1,12 +0,0 @@ -

    def jam fight for ny full game download pc.rar


    Download Ziphttps://cinurl.com/2uEXY4



    -
    -Def Jam Fight For NY [Xbox Classic] Download xbox game iso, xbox game Jtag-rgh, google drive direct links, torrent xbox 360 game, xbox pal game, xbox game ... -Buy Def Jam Fight For NY - Gamecity. -Delivery across Russia. -Price in the online store. -Reviews, reviews. -Def Jam Fight For NY X360 - YouTube Def Jam Fight For NY [Xbox 360]. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download Film Trial Run Indowebster 11.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download Film Trial Run Indowebster 11.md deleted file mode 100644 index 66310f7a8276fec7663459b547ab0e2fb67b221d..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download Film Trial Run Indowebster 11.md +++ /dev/null @@ -1,50 +0,0 @@ -
    -

    Download Film Trial Run Indowebster 11: A Review

    -

    If you are looking for a romantic drama film that will make you cry, laugh and fall in love, you should download Film Trial Run Indowebster 11. This film is based on the bestselling novel by Nicholas Sparks, the author of The Notebook, A Walk to Remember and The Last Song. In this film, you will follow the story of Noah Calhoun, a successful trial lawyer who has no memory of his mother, and Lucinda, a veterinarian who works at a racetrack where Noah visits regularly.

    -

    Download Film Trial Run Indowebster 11


    Download Ziphttps://cinurl.com/2uEYAN



    -

    The film is set in Boston and Tennessee, and it shows how Noah and Lucinda meet, fall in love and face challenges in their relationship. The film also explores the themes of memory, family and destiny. The film stars Ryan Gosling as Noah and Rachel McAdams as Lucinda, who have great chemistry on screen. The film also features James Garner as Deacon, Noah's father, and Gena Rowlands as Allie, Noah's mother.

    -

    How to Download Film Trial Run Indowebster 11

    -

    If you want to watch this film at the comfort of your home, you can download Film Trial Run Indowebster 11 from various online platforms. Indowebster is one of the most popular sites that offer free downloads of movies, TV shows, music and games. You can download Film Trial Run Indowebster 11 from Indowebster by following these simple steps:

    -
      -
    1. Go to https://www.indowebster.com/ and search for Film Trial Run.
    2. -
    3. Select the film from the list of results and click on the download button.
    4. -
    5. Choose the quality and format of the film that you want to download.
    6. -
    7. Wait for the download to finish and enjoy watching the film.
    8. -
    -

    Why You Should Download Film Trial Run Indowebster 11

    -

    There are many reasons why you should download Film Trial Run Indowebster 11 and watch it with your loved ones. Here are some of them:

    -
      -
    • The film is a beautiful adaptation of Nicholas Sparks' novel, which is known for its emotional and romantic stories.
    • -
    • The film has amazing performances by Ryan Gosling and Rachel McAdams, who bring their characters to life with their charm and talent.
    • -
    • The film has stunning cinematography and music that enhance the mood and atmosphere of the film.
    • -
    • The film has a touching and inspiring message about love, memory and fate that will make you think and feel.
    • -
    -

    So what are you waiting for? Download Film Trial Run Indowebster 11 today and enjoy this wonderful film with your friends or family. You will not regret it!

    -

    What People Are Saying About Film Trial Run Indowebster 11

    -

    Film Trial Run Indowebster 11 has received positive reviews from critics and audiences alike. The film has been praised for its emotional and romantic story, its excellent acting and its beautiful cinematography. Here are some of the comments that people have made about the film:

    -

    -
    -

    "Film Trial Run Indowebster 11 is a masterpiece of romance and drama. It made me cry, laugh and swoon. Ryan Gosling and Rachel McAdams are perfect as Noah and Lucinda. They have such a natural and believable chemistry that I felt like I was watching a real couple. The film is also visually stunning, with gorgeous scenes of Boston and Tennessee. The music is also very fitting and adds to the mood of the film. I highly recommend this film to anyone who loves Nicholas Sparks' novels or romantic films in general." - Anna, a film lover

    -
    -
    -

    "I downloaded Film Trial Run Indowebster 11 from Indowebster and I was not disappointed. The film is a wonderful adaptation of Nicholas Sparks' novel, which is one of my favorites. The film captures the essence of the novel and adds some twists and surprises that make it even more interesting. The film is very well-acted, especially by Ryan Gosling and Rachel McAdams, who portray Noah and Lucinda with such depth and emotion. The film also explores some important themes, such as memory, family and destiny, that make it more than just a love story. The film is a must-see for fans of Nicholas Sparks or romantic dramas." - Ben, a book lover

    -
    -
    How to Enjoy Film Trial Run Indowebster 11 Even More
    -

    If you want to enhance your experience of watching Film Trial Run Indowebster 11, here are some tips that you can follow:

    -
      -
    • Watch the film with someone you love or care about. The film is a great way to bond with your partner, friend or family member over a romantic and touching story.
    • -
    • Read the novel by Nicholas Sparks before or after watching the film. The novel will give you more insight into the characters, the plot and the themes of the film. You can also compare and contrast the differences and similarities between the novel and the film.
    • -
    • Listen to the soundtrack of the film. The soundtrack features songs by artists such as Ed Sheeran, Taylor Swift, John Legend and Sam Smith. The songs complement the scenes and emotions of the film very well.
    • -
    -

    Download Film Trial Run Indowebster 11 today and enjoy this amazing film that will make you feel all kinds of emotions. You will not regret it!

    -
    Where to Find More Information About Film Trial Run Indowebster 11
    -

    If you want to learn more about Film Trial Run Indowebster 11, you can visit some of the following websites that offer more information about the film, the novel, the actors and the author:

    - -

    These websites will help you get a better understanding of Film Trial Run Indowebster 11 and appreciate it more. You can also share your thoughts and opinions about the film with other fans and viewers on these websites.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Tekken Tag Tournament 2 Pc Download _HOT_ Highly Compressed.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Tekken Tag Tournament 2 Pc Download _HOT_ Highly Compressed.md deleted file mode 100644 index 754708e63918dd7151f68ddcef42c7582d4d5699..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Tekken Tag Tournament 2 Pc Download _HOT_ Highly Compressed.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Tekken Tag Tournament 2 Pc Download Highly Compressed


    DOWNLOAD ☆☆☆☆☆ https://cinurl.com/2uEY5k



    - -Download Tekken Tag Tournament 1 game for PC 100% working full ... this installment was published were Arcade, Play Station 2 and Play ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/demo/analyze/features/basic.py b/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/demo/analyze/features/basic.py deleted file mode 100644 index f7d3dbae466c5aa3f33866b82cd51e470b7111b1..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/demo/analyze/features/basic.py +++ /dev/null @@ -1,40 +0,0 @@ - -class ObjectAnalyzed: - - def __init__(self): - # Processor addons - self.attributes = [] - self.drawers = [] - - def has_processor(self): - if len(self.attributes) > 0: - return True - else: - return False - - def plot_features(self, image, plotter, show_attributes): - for drawer in self.drawers: - image = drawer(image, self, plotter, show_attributes) - return image - - def get_attributes(self, names=None): - - # Initialization by input type - single_name = False - if names is None: - names = self.attributes - elif isinstance(names, str): - names = [names] - single_name = True - - attributes = {} - attribute = [] - for name in names: - if name in self.attributes and name in self.__dict__.keys(): - attribute = getattr(self, name) - attributes[name] = attribute - - if single_name: - return attribute - else: - return attributes \ No newline at end of file diff --git a/spaces/tcfly/Flowise/Dockerfile b/spaces/tcfly/Flowise/Dockerfile deleted file mode 100644 index 6497969be0c085f3f46e7c8acea8932ca44a8278..0000000000000000000000000000000000000000 --- a/spaces/tcfly/Flowise/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -FROM node:18-alpine - -RUN mkdir -p /data && mkdir -p /data/flowise && mkdir -p /data/flowise/logs && chmod -R 777 /data -# Set home to the /data directory -ENV HOME=/data - -WORKDIR $HOME/flowise - -RUN apk add --no-cache git -RUN apk add --no-cache python3 py3-pip make g++ -# needed for pdfjs-dist -RUN apk add --no-cache build-base cairo-dev pango-dev - -# Install Chromium -RUN apk add --no-cache chromium - -ENV PUPPETEER_SKIP_DOWNLOAD=true -ENV 
PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser -ENV DATABASE_PATH=$HOME/flowise -ENV APIKEY_PATH=$HOME/flowise -ENV LOG_PATH=$HOME/flowise/logs -ENV LOG_LEVEL=debug -#ENV DEBUG=true - -# You can install a specific version like: flowise@1.0.0 -RUN npm install -g flowise - -RUN mkdir -p /usr/local/lib/node_modules/flowise && mkdir -p /usr/local/lib/node_modules/flowise/uploads && chmod -R 777 /usr/local/lib/node_modules/flowise - -USER 1000 - -EXPOSE 3000 - -CMD /bin/sh -c "sleep 3; flowise start" \ No newline at end of file diff --git a/spaces/teamtom/RockPaperScissors/app.py b/spaces/teamtom/RockPaperScissors/app.py deleted file mode 100644 index 72d158effccbf2d278033c99f0f03a2001fd1066..0000000000000000000000000000000000000000 --- a/spaces/teamtom/RockPaperScissors/app.py +++ /dev/null @@ -1,34 +0,0 @@ -from fastai.vision.all import * -import gradio as gr -import pathlib, os - - -classes = ['rock', 'paper', 'scissors'] # c0, c1, c2 - -def classify_image(img, model='rock-paper-scissors-resnet34.pkl'): - if os.name == 'nt': # workaround for Windows - pathlib.PosixPath = pathlib.WindowsPath - if os.name == 'posix': # workaround for Linux - pathlib.WindowsPath = pathlib.PosixPath - - learn = load_learner(model) - pred,idx,probs = learn.predict(img) - return dict(zip(classes, map(float, probs))) - -models = ['rock-paper-scissors-squeezenet.pkl','rock-paper-scissors-resnet34.pkl'] -model = gr.Dropdown(models, label="Select Model") - -image = gr.inputs.Image(shape=(192,192)) -label = gr.outputs.Label() -examples = [ - ['c0-rock-IMG_20230225_171937.jpg'], - ['c0-rock-IMG_20230225_171940.jpg'], - ['c1-paper-IMG_20230225_172010.jpg'], - ['c1-paper-IMG_20230225_172018.jpg'], - ['c2-scissors-IMG_20230225_172025.jpg'], - ['c2-scissors-IMG_20230225_172033.jpg'] - ] - -# iface = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples) -iface = gr.Interface(fn=classify_image, inputs=[image, model], outputs=label, examples=examples) -iface.launch() \ No 
newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Aerosoft Crj 700 900 X [CRACKED] Crack 25.md b/spaces/terfces0erbo/CollegeProjectV2/Aerosoft Crj 700 900 X [CRACKED] Crack 25.md deleted file mode 100644 index c43af7f9be2507a0674364f48c086aaef69af48f..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Aerosoft Crj 700 900 X [CRACKED] Crack 25.md +++ /dev/null @@ -1,6 +0,0 @@ -

    aerosoft crj 700 900 x crack 25


    Download ->->->-> https://bytlly.com/2uGjbV



    -
    -CathayA340 Flight Simulation Aerosoft Airbus X Extended ... 06 -01 9 Page 4 25 October 2015 . com/airbus-a400m-fsxp3d/ 29 Apr 2015 ... aerosim Download Aerosoft CRJ 700/900 X Latest Version #FSX ... P3dv4 5 crack. zip Aerosoft Airbus A318/319/320/321 The Airbus A318/319/320/321 in FSX . 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/terfces0erbo/CollegeProjectV2/Boukalates Algeroises En Arabe Pdf Download __EXCLUSIVE__.md b/spaces/terfces0erbo/CollegeProjectV2/Boukalates Algeroises En Arabe Pdf Download __EXCLUSIVE__.md deleted file mode 100644 index 861e010f0df96e9e969017a0cc2f3931ce903164..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Boukalates Algeroises En Arabe Pdf Download __EXCLUSIVE__.md +++ /dev/null @@ -1,98 +0,0 @@ - -

    Boukalates Algeroises En Arabe Pdf Download: Un Patrimoine Culturel À Découvrir

    - -

    Vous êtes à la recherche de boukalates algeroises en arabe pdf download? Vous voulez découvrir ce que sont ces petites histoires pleines de sagesse et d'humour qui font partie du patrimoine culturel algérien? Vous êtes au bon endroit! Dans cet article, nous allons vous présenter les boukalates algeroises en arabe pdf download, leur origine, leur signification et leur intérêt.

    -

    boukalates algeroises en arabe pdf download


    DOWNLOADhttps://bytlly.com/2uGlKI



    - -

    Qu'est-ce que les boukalates algeroises en arabe pdf download?

    - -

    Les boukalates algeroises en arabe pdf download sont des recueils de proverbes, de dictons, de devinettes et d'anecdotes qui reflètent la culture, la mentalité et l'histoire du peuple algérien. Le mot "boukala" vient de l'arabe "bawāqal", qui désigne une cruche en terre cuite utilisée pour conserver l'eau fraîche. Les boukalates algeroises en arabe pdf download sont ainsi nommées car elles sont censées être racontées autour d'une cruche d'eau, lors des veillées ou des réunions familiales.

    - -

    Les boukalates algeroises en arabe pdf download sont souvent pleines de sagesse, de morale, d'ironie et d'humour. Elles expriment la vision du monde, les valeurs, les croyances et les traditions du peuple algérien. Elles sont aussi un moyen de transmettre des leçons de vie, des conseils, des critiques ou des éloges. Les boukalates algeroises en arabe pdf download sont donc à la fois un divertissement et un enseignement.

    - -

    Quelle est l'origine des boukalates algeroises en arabe pdf download?

    - -

    Les boukalates algeroises en arabe pdf download sont le fruit d'une longue tradition orale qui remonte à plusieurs siècles. Elles sont issues de la fusion entre les cultures berbère, arabe, turque, andalouse et française qui ont marqué l'histoire de l'Algérie. Elles sont aussi influencées par l'islam, la religion majoritaire du pays.

    - -

    Les boukalates algeroises en arabe pdf download ont été transmises de génération en génération, par la voix des conteurs, des poètes, des sages ou des simples citoyens. Elles ont été enrichies au fil du temps par l'apport de nouvelles histoires, de nouvelles expressions ou de nouvelles interprétations. Elles ont aussi été adaptées aux contextes historiques, sociaux et politiques du pays.

    - -

    Why are the boukalates algeroises worth discovering?

    - -

    The boukalates are a cultural treasure that deserves to be known and preserved. They are a living testimony to the identity, diversity and richness of the Algerian people, and a source of inspiration, reflection and pleasure for anyone who reads or listens to them.

    -

    - -

    The boukalates are available in several formats: books, CDs, videos and websites. You can download them free of charge from several online platforms, listen to them on radio stations or podcasts devoted to Algerian culture, or learn them by heart and share them with family and friends.

    - -

    The boukalates algeroises are therefore a cultural heritage well worth discovering. They will take you on a journey through time and space, through the words and images of an endearing and fascinating people. They will make you laugh, think and learn. They will make you love Algeria.

    -

    How should you read the boukalates algeroises?

    - -

    The boukalates are accessible to anyone who knows Arabic or wants to learn it. They come in several levels of difficulty, depending on the vocabulary, syntax and style used. It is best to start with the simplest and shortest ones, then work up to the more complex and longer ones.

    - -

    The boukalates are usually read aloud, to savour the musicality, rhythm and rhyme of the words. You can also listen to them in audio form, to absorb the pronunciation and intonation. Finally, it is worth taking the time to understand and analyse them, to grasp their meaning, message and scope.

    - -

    What are the benefits of the boukalates algeroises?

    - -

    The boukalates offer many benefits to those who read or listen to them. They allow you to:

    - -
      -
    • Entertain and relax yourself, by spending a pleasant and amusing moment.
    • -
    • Learn and broaden your horizons, by picking up historical, geographical, religious and cultural facts.
    • -
    • Improve and refine your language, by enriching your vocabulary, grammar and spoken or written expression.
    • -
    • Connect and grow closer to others, by sharing boukalates with family and friends.
    • -
    • Value and respect yourself, by affirming your identity, pride and belonging to the Algerian people.
    • -
    - -

    The boukalates are therefore an effective and enjoyable way to develop your linguistic, intellectual, social and emotional skills.

    -

    What themes do the boukalates algeroises cover?

    - -

    The boukalates address varied and universal themes that touch on every aspect of human life. Four broad categories can be distinguished:

    - -
      -
    • Religious themes, dealing with faith, morality, prayer, fasting, pilgrimage, destiny, paradise and hell.
    • -
    • Social themes, dealing with family, friendship, love, marriage, divorce, education, work, justice, solidarity and tolerance.
    • -
    • Political themes, dealing with the homeland, freedom, resistance, revolution, colonisation, independence, democracy and development.
    • -
    • Cultural themes, dealing with language, literature, poetry, music, art, history, geography, traditions and customs.
    • -
    - -

    The boukalates thus reflect the richness and diversity of Algerian culture, which is at once Arab, Berber, Islamic and Mediterranean.

    - -

    What styles do the boukalates algeroises take?

    - -

    The boukalates come in several literary forms and styles. Four main types can be distinguished:

    - -
      -
    • Proverbs: short, concise sentences expressing a general truth or a piece of popular wisdom.
    • -
    • Sayings: longer, more elaborate sentences expressing an opinion or advice on a specific subject.
    • -
    • Riddles: puzzles or questions that call on the reader's or listener's intelligence or imagination.
    • -
    • Anecdotes: short, humorous tales recounting a comical or unusual situation.
    • -
    - -

    The boukalates are characterised by an oral, lively and vivid style. They often use figures of speech such as metaphor, simile, hyperbole and antithesis, and they play on the sounds and rhythms of words to create effects of harmony and rhyme.

    -

    Who wrote the boukalates algeroises?

    - -

    The boukalates are often anonymous, or attributed to popular or famous authors. Among the best known are:

    - -
      -
    • Sidi Lakhdar Benkhlouf, a 17th-century Sufi saint regarded as the father of the boukalates. He is said to have written more than 3,000 of them on a wide range of subjects.
    • -
    • Mohamed Ben Guittoun, a 19th-century poet and storyteller nicknamed the "King of the boukalates". He is said to have written more than 1,000 on religious, social and political themes.
    • -
    • Abdelhamid Ben Badis, a 20th-century religious and cultural reformer and founder of the Association of Algerian Muslim Ulema. He is said to have written more than 500 on educational, patriotic and spiritual themes.
    • -
    - -

    Boukalates are also written by contemporary authors, who draw inspiration from current events, society or their own lives. Among them are:

    - -
      -
    • Mohamed Benchicou, an Algerian journalist and writer who has published several collections of boukalates on political, satirical and humorous themes.
    • -
    • Nadia Kaci, an Algerian actress and director who has adapted several boukalates for cinema and theatre, giving them a modern, feminist touch.
    • -
    • Kamel Daoud, an Algerian novelist and columnist who has written several boukalates on literary, philosophical and existential themes.
    • -
    - -

    The boukalates are thus the fruit of both collective and individual creation, and a testament to the genius and talent of the Algerian people.

    - -

    Conclusion

    - -

    The boukalates algeroises are a cultural heritage well worth discovering. They are a living testimony to the identity, diversity and richness of the Algerian people, and a source of inspiration, reflection and pleasure for anyone who reads or listens to them. They are available in several formats: books, CDs, videos and websites. You can download them free of charge from several online platforms, listen to them on radio stations or podcasts devoted to Algerian culture, or learn them by heart and share them with family and friends. The boukalates will take you on a journey through time and space, through the words and images of an endearing and fascinating people. They will make you laugh, think and learn. They will make you love Algeria.

    -


    -
    -
    \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Contacts VCF Pro V4.0.61 Cracked [Latest].md b/spaces/terfces0erbo/CollegeProjectV2/Contacts VCF Pro V4.0.61 Cracked [Latest].md deleted file mode 100644 index 579de7fe055f6054031593bc764c73f20f1af498..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Contacts VCF Pro V4.0.61 Cracked [Latest].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Contacts VCF Pro v4.0.61 Cracked [Latest]


    DOWNLOAD ===== https://bytlly.com/2uGla4



    - - 1fdad05405
    -
    -
    -

    diff --git a/spaces/terfces0erbo/CollegeProjectV2/Impact Soundworks Koto Nation KONTAKT VON.G.rar.md b/spaces/terfces0erbo/CollegeProjectV2/Impact Soundworks Koto Nation KONTAKT VON.G.rar.md deleted file mode 100644 index 85a1b6e99d637abc5d0b7d0feab307ab70dddf61..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Impact Soundworks Koto Nation KONTAKT VON.G.rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Impact Soundworks Koto Nation KONTAKT VON.G.rar


    Download ☆☆☆ https://bytlly.com/2uGlVf



    - - 1fdad05405
    -
    -
    -

    diff --git a/spaces/thelou1s/yamnet_test/app.py b/spaces/thelou1s/yamnet_test/app.py deleted file mode 100644 index 2ee4fc2adbee189959604c44ba8e17e48e9f9ebe..0000000000000000000000000000000000000000 --- a/spaces/thelou1s/yamnet_test/app.py +++ /dev/null @@ -1,103 +0,0 @@ -import gradio as gr -from test import predict_uri -import sys -import warnings -from fastapi import FastAPI - -# ignore UserWarning -warnings.simplefilter("ignore", UserWarning) - -# examples = [ -# ['res/miaow_16k.wav'], -# ['res/snore/pro_snore 6bee45643b45af9b_a7a3bbe6ba79af5b25b19ad10a8d9421d0d5679b.wav'], -# ['res/snore/Snoring vs Sleep Apnea - What the difference sounds like.mp4'] -# ] -title = "yamnet test" -description = "An audio event classifier trained on the AudioSet dataset to predict audio events from the AudioSet ontology." - -# # https://github.com/gradio-app/gradio/issues/2362 -# class Logger: -# def __init__(self, filename): -# self.terminal = sys.stdout -# self.log = open(filename, "w") -# -# def write(self, message): -# self.terminal.write(message) -# self.log.write(message) -# -# def flush(self): -# self.terminal.flush() -# self.log.flush() -# -# def isatty(self): -# return False -# -# -# sys.stdout = Logger("output.log") -# -# -# def test(x): -# print("This is a test") -# print(f"Your function is running with input {x}...") -# return x -# -# -# def read_logs(): -# sys.stdout.flush() -# with open("output.log", "r") as f: -# return f.read() -# -# -# with gr.Interface(predict_uri, inputs=gr.inputs.Audio(type="filepath"), outputs=["text", 'plot']) as demo: -# examples = examples, -# title = title, -# description = description, -# allow_flagging = 'never' -# -# logs = gr.Textbox() -# demo.load(read_logs, None, logs, every=1) -# -# demo.launch(enable_queue=True, show_error=True) - - -# with gr.Blocks() as demo: -# with gr.Row(): -# inputs = gr.inputs.Audio(type="filepath") -# outputs = ["text", 'plot'] -# btn = gr.Button("Run") -# btn.click(predict_uri, inputs, outputs) -# -# 
logs = gr.Textbox() -# demo.load(read_logs, None, logs, every=1) -# -# demo.queue().launch() - - -demo = gr.Interface( - predict_uri, - inputs=[ - gr.inputs.Audio(type="filepath"), - gr.inputs.Audio(source="microphone", type="filepath"), - gr.Slider(minimum=7, maximum=21, step=1) - ], - outputs=['image', 'image', 'image', 'text', 'text', 'text', 'text'], - # examples=examples, - title=title, - description=description, - allow_flagging='never' -) -demo.launch(enable_queue=True, show_error=True, share=False) - -# # FastAPI -# CUSTOM_PATH = "/gradio" -# -# app = FastAPI() -# -# -# @app.get("/") -# def read_main(): -# return {"message": "This is your main app"} -# -# -# io = gr.Interface(lambda x: "Hello, " + x + "!", "textbox", "textbox") -# app = gr.mount_gradio_app(app, io, path=CUSTOM_PATH) diff --git a/spaces/thiagohersan/maskformer-satellite-trees-gradio/README.md b/spaces/thiagohersan/maskformer-satellite-trees-gradio/README.md deleted file mode 100644 index dd70a12210083481a6aa09a5afec5282baa0dc3e..0000000000000000000000000000000000000000 --- a/spaces/thiagohersan/maskformer-satellite-trees-gradio/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Maskformer Satellite+Trees -emoji: 🛰 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -models: - - "thiagohersan/maskformer-satellite-trees" -pinned: false -license: cc-by-nc-sa-4.0 ---- diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Bhabi Ji Ghar Par Hai! Episodes - ZEE5[2].md b/spaces/tialenAdioni/chat-gpt-api/logs/Bhabi Ji Ghar Par Hai! Episodes - ZEE5[2].md deleted file mode 100644 index 26f18ccb02c205334829eba80423f57186fbcc4d..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Bhabi Ji Ghar Par Hai! Episodes - ZEE5[2].md +++ /dev/null @@ -1,85 +0,0 @@ -
    -

    Bhabi Ji Ghar Par Hain: A Hilarious Hindi Sitcom

    -

    If you are looking for a comedy show that will make you laugh out loud, then you should definitely watch Bhabi Ji Ghar Par Hain. This is a Hindi sitcom that premiered on 2 March 2015 on &TV and is digitally available on ZEE5. [1] The show is inspired by the 1990s sitcom Shrimaan Shrimati, which had a similar concept of two neighbouring couples who are attracted to each other's spouses. [2] However, Bhabi Ji Ghar Par Hain has its own charm and humour that makes it stand out from other comedy shows. Here are some reasons why you should watch this show.

    -

    The Main Characters of Bhabi Ji Ghar Par Hain

    -

    The show revolves around four main characters who live in Modern Colony, a fictional neighbourhood in Kanpur. They are:

    -

    Bhabi.Ji.Ghar.Par.Hain.-.Episode.101.to.150


    Download ★★★ https://urlcod.com/2uK9vB



    -
      -
    • Vibhuti Narayan Mishra (played by Aasif Sheikh): He is an unemployed man who used to be an insurance agent. He is married to Anita, a smart and modern woman who runs a grooming class. He is smitten by Angoori, his neighbour's wife, who is a simple and naive housewife. He often tries to impress her with his poetry and flattery, but fails miserably. He is also called "Nalla" (useless) by his wife and others for his laziness and lack of income.
    • -
    • Anita Mishra (played by Vidisha Srivastava/ Neha Pendse/ Saumya Tandon): She is a confident and independent woman who is the breadwinner of her family. She is married to Vibhuti, whom she loves but also despises for his unemployment and incompetence. She is attracted to Manmohan, her neighbour's husband, who is a successful undergarment businessman. She often praises him for his achievements and skills, but also mocks him for his cheapness and simplicity.
    • -
    • Manmohan Tiwari (played by Rohitash Gaud): He is a wealthy and stingy man who owns an undergarment shop. He is married to Angoori, whom he loves but also neglects for his work. He is infatuated by Anita, his neighbour's wife, who is a headstrong and modern woman. He often tries to woo her with his gifts and compliments, but ends up embarrassing himself. He is also called "Kaccha-Baniyan" (underwear-vest) by Vibhuti and others for his profession and attire.
    • -
    • Angoori Tiwari (played by Shubhangi Atre): She is a sweet and innocent woman who is devoted to her husband and household chores. She is married to Manmohan, whom she respects but also fears for his anger. She is unaware of Vibhuti's feelings for her, but considers him as a good friend. She often greets him with her signature phrase "Sahi Pakde Hai" (You got it right). She also has a habit of mispronouncing English words, which adds to her cuteness.
    • -
    -

    Apart from these four characters, there are many other supporting characters who add to the fun and chaos of the show. Some of them are:

    -
      -
    • Happu Singh (played by Yogesh Tripathi): He is a corrupt police officer who takes bribes from everyone. He is also a friend of Vibhuti and Manmohan, whom he often helps or troubles depending on the situation.
    • -
    • Saxena (played by Saanand Verma): He is a mentally unstable man who likes to hurt himself and others. He often says "I like it" whenever he sees or experiences something painful or bizarre.
    • -
    • Prem Chaudhary (played by Vaibhav Mathur): He is Vibhuti's best friend who runs a cyber cafe. He often gives Vibhuti bad advice or gets him into trouble.
    • -
    • Tika Ram (played by Vaibhav Mathur): He is a jobless man who hangs out at Gupta's tea stall with his friends Malkhan and Tillu. He often dreams of getting married to a rich woman.
    • -

      The Plot and Humour of Bhabi Ji Ghar Par Hain

      -

      The show follows the daily lives and adventures of the four main characters as they try to impress each other's spouses and get into various troubles. The show is full of comedy and satire, as it makes fun of various aspects of Indian society and culture, such as politics, religion, superstition, Bollywood, cricket, etc. The show also features many guest appearances by famous celebrities and personalities, who often play themselves or parody their roles.

      -

      One of the main sources of humour in the show is the catchphrases and dialogues that the characters use frequently. Some of them are:

      -

      -
        -
      • "Sahi Pakde Hai" (You got it right): Angoori's way of greeting Vibhuti and agreeing with him.
      • -
      • "I like it": Saxena's expression of enjoyment whenever he sees or feels something painful or weird.
      • -
      • "Haye Daiyya" (Oh God): Anita's exclamation of surprise or annoyance.
      • -
      • "Nahi Karunga" (I won't do it): Vibhuti's refusal to do any work or favour for anyone.
      • -
      • "Arey O Sambha" (Hey Sambha): Manmohan's way of calling his servant Sambha, who is named after a character from the movie Sholay.
      • -
      • "Kya Hua?" (What happened?): Happu Singh's question whenever he arrives at a scene of crime or chaos.
      • -
      -

      The Popularity and Reception of Bhabi Ji Ghar Par Hain

      -

      The show has been a huge hit among the viewers and critics alike. It has received high ratings and positive reviews for its comedy and entertainment value. It has also won several awards and accolades, such as:

      -
        -
      • Indian Television Academy Awards for Best Comedy Show in 2015, 2016, 2017 and 2018.
      • -
      • Indian Telly Awards for Best Sitcom in 2016 and 2017.
      • -
      • Zee Rishtey Awards for Favourite Comedy Show in 2016 and 2017.
      • -
      • Gold Awards for Best Comedy Show in 2017 and 2018.
      • -
      -

      The show has also faced some controversies and criticisms for its content and portrayal of certain issues. Some of them are:

      -
        -
      • In 2016, a case was filed against the show for hurting religious sentiments by showing a character dressed as Lord Shiva.
      • -
      • In 2017, a complaint was lodged against the show for showing vulgar dialogues and scenes that were inappropriate for family viewing.
      • -
      • In 2018, a notice was issued to the show for violating the tobacco control laws by showing characters smoking on screen.
      • -
      -

      Despite these challenges, the show has continued to entertain its fans and followers with its humour and wit. The show has also gained a loyal fan base who love and support the show and its cast.

      -

      The Future and Legacy of Bhabi Ji Ghar Par Hain

      -

      The show is currently running successfully on &TV and ZEE5, with new episodes airing every weekday at 10:30 pm IST. The show has completed over 2000 episodes so far, making it one of the longest-running comedy shows on Indian television. The show has also spawned a spin-off series called Happu Ki Ultan Paltan, which focuses on the life of Happu Singh and his family. The spin-off series premiered on 4 March 2019 on &TV and ZEE5.

      -

      The show has also left a lasting impact and influence on Indian television and comedy. The show has popularized the concept of situational comedy and satire in Indian television, which was previously dominated by family dramas and soap operas. The show has also inspired many other comedy shows to adopt similar themes and formats. The show has also given a platform to many talented actors and comedians who have become household names in India.

      -

      Conclusion

      -

      Bhabi Ji Ghar Par Hain is a hilarious Hindi sitcom that you should not miss if you love comedy and entertainment. The show has everything that you need to have a good laugh: funny characters, witty dialogues, hilarious situations, catchy catchphrases, celebrity cameos, social satire, etc. The show also has a great cast who deliver brilliant performances and make you fall in love with their characters. The show is also a great way to relax and unwind after a long day. So what are you waiting for? Watch Bhabi Ji Ghar Par Hain today on &TV or ZEE5 and enjoy the laughter riot.

      -

      FAQs

      -
        -
      1. What is Bhabi Ji Ghar Par Hain?
        Bhabi Ji Ghar Par Hain is a Hindi comedy drama television series that premiered on 2 March 2015 on &TV and is digitally available on ZEE5. The show revolves around two neighbouring couples who are attracted to each other's spouses.
      2. -
      3. Who are the main characters of Bhabi Ji Ghar Par Hain?
        The main characters are Vibhuti Narayan Mishra (played by Aasif Sheikh), Anita Mishra (played by Vidisha Srivastava/ Neha Pendse/ Saumya Tandon), Manmohan Tiwari (played by Rohitash Gaud) and Angoori Tiwari (played by Shubhangi Atre).
      4. -
      5. How many episodes are there in Bhabi Ji Ghar Par Hain?
        The show has completed over 2000 episodes so far, making it one of the longest-running comedy shows on Indian television.
      6. -
      7. Where can I watch Bhabi Ji Ghar Par Hain?
        You can watch Bhabi Ji Ghar Par Hain on &TV or ZEE5 every weekday at 10:30 pm IST. You can also watch previous episodes online on ZEE5 anytime.
      8. -
      9. Is there a spin-off series of Bhabi Ji Ghar Par Hain?
        Yes, there is a spin-off series called Happu Ki Ultan Paltan, which focuses on the life of Happu Singh (played by Yogesh Tripathi) and his family. The spin-off series premiered on 4 March 2019 on &TV and ZEE5.
      10. -
      -

      -
      -
      \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Descargar [CRACKED] Crack De Voces Para Balabolka.md b/spaces/tialenAdioni/chat-gpt-api/logs/Descargar [CRACKED] Crack De Voces Para Balabolka.md deleted file mode 100644 index dc6fb56dfc46caedb32779007815f68cad622357..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Descargar [CRACKED] Crack De Voces Para Balabolka.md +++ /dev/null @@ -1,182 +0,0 @@ - -

    Download a voice crack for Balabolka

      -

    Would you like to turn any text into an audio file with a natural, fluent voice? Do you want access to a wide variety of voices in different languages and accents? Are you looking for an easy, inexpensive way to do it? If the answer is yes, this article is for you.

      -

      Descargar crack de voces para balabolka


      DOWNLOAD ••• https://urlcod.com/2uK6GV



      -

    In this article I will explain what Balabolka is: a text-to-speech program that lets you generate audio files from texts in different formats. I will also show you how to download a voice crack for Balabolka, a way to obtain more voices for the program without paying anything. In addition, I will mention some alternatives to the voice crack, in case you prefer more legal or higher-quality options. By the end of the article, you will have all the information you need to choose the best option for you.

      -

    What is Balabolka and what is it for?

      -

    Balabolka is a text-to-speech (TTS) program that converts any text into an audio file with a synthesized voice. The program uses all the voices installed on your system, as well as any voices you download and install separately. You can use Balabolka to read texts in formats such as AZW, AZW3, CHM, DjVu, DOC, DOCX, EML, EPUB, FB2, HTML, LIT, MOBI, ODT, PRC, PDF and RTF. You can also paste text from the clipboard or type it directly into the program.

      -

    Balabolka has many practical and educational uses. For example, you can use it to:

      -
        -
    • Listen to e-books or articles while doing other activities.
    • -
    • Create audiobooks or podcasts from your own texts or those of other authors.
    • -
    • Learn languages by listening to texts in different languages and accents.
    • -
    • Improve your pronunciation and listening comprehension.
    • -
    • Make reading easier for people with visual impairments or dyslexia.
    • -
    • Personalize your messages or presentations with a unique, original voice.
    • -
      -

    Main features of Balabolka

      -

    Balabolka is a free, easy-to-use program with many interesting features. Some of them are:

      -
        -
    • You can adjust the speed, pitch and volume of the voice to your liking.
    • -
    • You can apply special effects such as echo, reverb or distortion to the voice.
    • -
    • You can save the text as a file in WAV, MP3, MP4, OGG or WMA format.
    • -
    • You can split the text into equal parts or by chapters and save each part as a separate file.
    • -
    • You can use bookmarks to mark the start and end of each part of the text.
    • -
    • You can use rules to correct the pronunciation of certain words or abbreviations.
    • -
    • You can use special commands to insert pauses, change the voice or play sounds.
    • -
      -
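    The "split the text into equal parts" feature is easy to picture in code. The following is an illustrative Python sketch, not Balabolka's actual implementation (the `split_text` helper and its `max_chars` parameter are invented for this example): it cuts a long text into chunks no bigger than a given size, breaking only at sentence boundaries so that no sentence is split across two audio files.

```python
import re

def split_text(text, max_chars=200):
    """Split text into chunks of at most max_chars, at sentence boundaries."""
    # Split after sentence-ending punctuation followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    parts, current = [], ""
    for sentence in sentences:
        if current and len(current) + 1 + len(sentence) > max_chars:
            parts.append(current)  # current chunk is full: start a new one
            current = sentence
        else:
            current = f"{current} {sentence}" if current else sentence
    if current:
        parts.append(current)
    return parts

chunks = split_text("First sentence. Second sentence! Third one? Fourth.", max_chars=35)
print(chunks)  # ['First sentence. Second sentence!', 'Third one? Fourth.']
```

    Each chunk could then be sent to a TTS engine and saved as its own WAV or MP3 file, which is essentially what Balabolka's per-part export does.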

    Voice types available for Balabolka

      -

    Balabolka can use any voice installed on your operating system. By default, Windows ships with a few voices in English and other languages, but these tend to be limited in number and quality. That is why many users prefer to download and install additional, more natural and varied voices. Some popular sources of voices are:

      -
        -
    • SAPI 5 voices: the voices most compatible with Balabolka and similar programs. Many SAPI 5 voices are available online, both free and paid. Some companies that offer SAPI 5 voices are Acapela Group, Cepstral, IVONA Software and Nuance Communications.
    • -
    • SAPI 4 voices: the predecessors of SAPI 5. Although they offer lower quality and fewer options than SAPI 5, they can also be used with Balabolka. Some companies that offer SAPI 4 voices are AT&T Natural Voices, Lernout & Hauspie Speech Products and Microsoft Speech API.
    • -
    • Loquendo voices: voices well known and widely used by "loquenderos", people who create humorous videos with synthesized voices. Loquendo voices have a distinctive, expressive sound that makes them funny and ironic. Many Loquendo voices are available online, both free and paid. Some companies that offer them are Loquendo (now part of Nuance Communications) and TTSReader.
    • -
      -

    How to download a voice crack for Balabolka?

      -

    If you want more voices for Balabolka without paying anything, one option is to download a voice crack. A crack is a program or file that modifies another program or file to remove its limitations or restrictions. In this case, the crack activates voices that are normally paid or require a license, so that you can use them with Balabolka free of charge and without limits.

      -

    Prerequisites for installing the crack

      -

    Before installing the voice crack for Balabolka, you must have the following installed on your computer:

      -


      • Balabolka: You can download it for free from its official page: http://www.cross-plus-a.com/es/balabolka.htm. You can choose between the installer version and the portable version.
      • Voices: You can download whatever voices you want from different sources on the internet. For example, many SAPI 5 and Loquendo voices are listed in this video: https://www.youtube.com/watch?v=i9ys4sl20II. You must install all the voices in the folder C:\Archivos de programa\Loquendo\LTTS.

      Steps to download and install the crack

      Once you have installed Balabolka and the voices you want to use with it, follow these steps to download and install the crack:
        1. Extract the crack archive into a folder of your choice. It contains two files: one named "Voces Loquendo.exe" and another named "Voces SAPI 5.exe".
        2. Run "Voces Loquendo.exe" as administrator. A window appears listing the Loquendo voices installed on your system. Select the voices you want to activate and click the "Crackear" button. Wait for the process to finish and close the window.
        3. Run "Voces SAPI 5.exe" as administrator. A window appears listing the SAPI 5 voices installed on your system. Select the voices you want to activate and click the "Crackear" button. Wait for the process to finish and close the window.
        4. Open Balabolka and check that the voices you activated with the crack work correctly. If they don't, you can try reinstalling the voices or the crack, or look for another download source.

        Advantages and disadvantages of using the crack

        Using the voice crack for Balabolka has some advantages and disadvantages that you should weigh before choosing this option. Among them:

        • Advantages:
          • You get access to many voices in different languages and accents without having to pay anything.
          • You can use the voices with Balabolka and other compatible programs without restrictions or limits.
          • You can enjoy Balabolka's features and functions with any voice you choose.
        • Disadvantages:
          • You may run into legal or ethical problems by using voices that are the property of other companies or people without their consent.
          • You may run into technical or security problems by downloading and installing files from unknown or untrustworthy sources.
          • You may run into quality or compatibility problems by using voices that are not up to date or adapted to Balabolka or your operating system.

        Alternatives to the voice crack for Balabolka

        If you don't want to use the voice crack for Balabolka, or if you want to try other options, there are some alternatives you can consider:

        Free voices from other providers

        Some voice providers offer a few free voices for personal or educational use. These voices are usually of good quality and compatible with Balabolka and similar programs. Examples of free voice providers include:

        • NaturalReader: Offers several free voices in English, Spanish, French, German, Italian and Portuguese. You can download them from its website: https://www.naturalreaders.com/online/.
        • eSpeak: Offers more than 100 free voices in different languages and dialects. You can download them from its website: http://espeak.sourceforge.net/.
        • Zabaware Text-to-Speech Reader: Offers several free voices in English with different accents. You can download them from its website: https://www.zabaware.com/reader/.

        High-quality paid voices

        Some voice providers offer high-quality paid voices for professional or commercial use. These voices usually sound very natural and fluid, and offer many customization and configuration options. Examples of paid voice providers include Acapela Group, Cepstral and IVONA Software.

        Custom voices with artificial intelligence

        Some online services let you create your own custom voices with artificial intelligence. These services use advanced algorithms to generate voices from audio or text samples. Examples of online services for creating custom voices include:

        • Lovo: Lets you create custom voices from text or audio in several languages. You can try it for free from its website: https://lovo.ai/.
        • Voiceful: Lets you create custom voices from text or audio in several languages. You can try it for free from its website: https://voiceful.io/.
        • VoiceMaker: Lets you create custom voices from text in several languages. You can try it for free from its website: https://voicemaker.in/.

        Conclusion

        Summary of the article's key points

        In this article I have explained what Balabolka is: a free, easy-to-use program that converts any text into an audio file with a synthesized voice. I have also shown how to download a voice crack for Balabolka, a way to get more voices for the program without having to pay anything. In addition, I have offered some alternatives to the voice crack, in case you prefer options that are more legal or of higher quality.

        Call to action and final recommendation

        Now that you have all the information you need to choose the best option for you, I invite you to try Balabolka and the different voice options available. You are sure to find the perfect voice for your needs and tastes. Balabolka is a very useful and fun program that can help you improve your reading, learning, creativity and communication.

        I hope this article has been useful and interesting. If you liked it, share it with your friends and family. And if you have any questions or comments, leave them below and I will reply as soon as possible. Thanks for reading, and see you next time.

        Frequently asked questions

        What is a text-to-speech program?

        A text-to-speech program is a program that converts any written text into an audio file with a synthesized voice.

        What is Balabolka?

        Balabolka is a free, easy-to-use program that converts any written text into an audio file with a synthesized voice.

        What is a voice crack for Balabolka?

        A voice crack for Balabolka is a program or file that modifies another program or file to remove its limitations or restrictions. In this case, the crack activates voices that are normally paid or require a license, so you can use them with Balabolka for free and without limits.

        How do I download a voice crack for Balabolka?

        To download a voice crack for Balabolka, follow these steps:

        1. Download the crack archive from this link: https://mega.nz/#!IXgyzKKD!_mSZTAoLEW.... It is a compressed file.
        2. Extract the crack archive into a folder of your choice. It contains two files: one named "Voces Loquendo.exe" and another named "Voces SAPI 5.exe".
        3. Run "Voces Loquendo.exe" as administrator. A window appears listing the Loquendo voices installed on your system. Select the voices you want to activate and click the "Crackear" button. Wait for the process to finish and close the window.
        4. Run "Voces SAPI 5.exe" as administrator. A window appears listing the SAPI 5 voices installed on your system. Select the voices you want to activate and click the "Crackear" button. Wait for the process to finish and close the window.
        5. Open Balabolka and check that the voices you activated with the crack work correctly. If they don't, you can try reinstalling the voices or the crack, or look for another download source.

        What alternatives are there to the voice crack for Balabolka?

        Some alternatives to the voice crack for Balabolka are:

        • Free voices from other providers, such as NaturalReader, eSpeak or Zabaware Text-to-Speech Reader.
        • High-quality paid voices, such as Acapela Group, Cepstral or IVONA Software.
        • Custom voices made with artificial intelligence, such as Lovo, Voiceful or VoiceMaker.

        What advantages and disadvantages does using the voice crack for Balabolka have?

        Some advantages and disadvantages of using the voice crack for Balabolka are:

        • Advantages:
          • You get access to many voices in different languages and accents without having to pay anything.
          • You can use the voices with Balabolka and other compatible programs without restrictions or limits.
          • You can enjoy Balabolka's features and functions with any voice you choose.
        • Disadvantages:
          • You may run into legal or ethical problems by using voices that are the property of other companies or people without their consent.
          • You may run into technical or security problems by downloading and installing files from unknown or untrustworthy sources.
          • You may run into quality or compatibility problems by using voices that are not up to date or adapted to Balabolka or your operating system.

        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Fix Binkw32.dll Is Missing Errors - Lifewire[1].md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Fix Binkw32.dll Is Missing Errors - Lifewire[1].md deleted file mode 100644 index c816d9ba6d6e69ffbd18cc6d0c0a0ecf08b45f7a..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Fix Binkw32.dll Is Missing Errors - Lifewire[1].md +++ /dev/null @@ -1,164 +0,0 @@ - -

        How to Download Binkw32.dll for Call of Duty Black Ops 2


        If you are a fan of Call of Duty Black Ops 2, you might have encountered a problem when trying to launch or play the game. You might see an error message saying that binkw32.dll is missing or not found on your computer. This can be frustrating and prevent you from enjoying your favorite game. But don't worry, there are some easy ways to fix this issue and get back to shooting zombies and enemies.



        In this article, we will explain what binkw32.dll is and why you need it, how to fix binkw32.dll errors, and answer some frequently asked questions about this topic. By following these steps, you should be able to download binkw32.dll for Call of Duty Black Ops 2 and play the game without any problems.


        What is Binkw32.dll and Why Do You Need It?


        Binkw32.dll is a dynamic link library (DLL) file that is part of the Bink Video codec. This codec is developed by RAD Game Tools and is used by many popular PC games to compress and play video files. Some of the games that use binkw32.dll include Age of Conan, Civilization III, BioShock, The Elder Scrolls IV: Oblivion, and Star Wars: Battlefront II.


        Call of Duty Black Ops 2 also uses binkw32.dll to play video sequences such as cutscenes and trailers. Without this file, the game cannot access or display these videos properly. That's why you need to have binkw32.dll on your computer if you want to play Call of Duty Black Ops 2.


        Binkw32.dll errors can occur when the file is missing or corrupted on your computer. This can happen due to various reasons, such as:

        • Accidental deletion or modification of the file
        • Malware infection or damage
        • Registry errors or conflicts
        • Incorrect installation or uninstallation of the game or the codec
        • Use of cracked or pirated versions of the game

        When binkw32.dll errors occur, you might see one of these messages:

        • Missing BINKW32.DLL
        • Binkw32.dll Not Found
        • This application failed to start because BINKW32.DLL was not found. Re-installing the application may fix this problem.
        • Cannot find binkw32.dll!
        • An attempt to delay-load a .dll or get a function address in a delay-loaded .dll failed. Dll: binkw32.dll
        • This program can't start because binkw32.dll is missing from your computer. Try reinstalling the program to fix this problem.
        • The procedure entry point _BinkSetVolume@12 could not be located in the dynamic link library binkw32.dll.
        • The procedure entry point _BinkSetMemory@8 could not be located in the dynamic link library binkw32.dll.

        How to Fix Binkw32.dll Errors


        The good news is that binkw32.dll errors are usually easy to fix. Here are some possible solutions that you can try:


        Restart the game or the computer


        Sometimes, a simple restart can solve many problems on your computer. If you encounter a binkw32.dll error when launching or playing Call of Duty Black Ops 2, try closing the game and restarting it. If that doesn't work, try restarting your computer and then launching the game again. This might clear any temporary glitches or cache issues that might cause the error.


        Reinstall the game or the RAD Video Tools


        If restarting doesn't help, you might need to reinstall the game or the codec that uses binkw32.dll. This can ensure that you have a fresh and complete copy of the file on your computer. To do this, follow these steps:

        1. Uninstall Call of Duty Black Ops 2 from your computer using the Control Panel or a third-party uninstaller tool.
        2. Delete any leftover files or folders related to the game from your hard drive.
        3. Uninstall RAD Video Tools from your computer using the same method.
        4. Delete any leftover files or folders related to RAD Video Tools from your hard drive.
        5. Restart your computer.
        6. Reinstall Call of Duty Black Ops 2 from your original CD/DVD or digital download source.
        7. Reinstall RAD Video Tools from their official website.
        8. Restart your computer again.
        9. Launch Call of Duty Black Ops 2 and see if the error is gone.

        Copy the binkw32.dll file from another source


        If reinstalling doesn't work, you might need to copy the binkw32.dll file from another source and place it in the right folder on your computer. You can do this by following these steps:

        1. Find another computer that has Call of Duty Black Ops 2 installed and working properly.
        2. Locate the binkw32.dll file on that computer. It should be in one of these folders:
          • C:\Program Files (x86)\Steam\steamapps\common\Call of Duty Black Ops II\redist\bik\
          • C:\Program Files (x86)\Activision\Call of Duty Black Ops II\redist\bik\
          • C:\Program Files (x86)\Call of Duty Black Ops II\redist\bik\
        3. Copy the binkw32.dll file from that folder and save it on a USB flash drive or an external hard drive.
        4. Plug the USB flash drive or external hard drive into your computer.
        5. Paste the binkw32.dll file into one of these folders on your computer:
          • C:\Windows\System32\
          • C:\Windows\SysWOW64\ (if you have a 64-bit system)
          • The same folder where Call of Duty Black Ops 2 is installed (see above)
        6. Restart your computer.
        7. Launch Call of Duty Black Ops 2 and see if the error is gone.

        Update the game or the drivers


        If none of the above solutions work, you might need to update your game or your drivers to fix any compatibility issues that might cause binkw32.dll errors. To do this, follow these steps:

        • To update your game, check for any available patches or updates from Steam or Activision and download them.
        • To update your drivers, go to Device Manager and look for any devices with a yellow exclamation mark next to them. Right-click on them and select Update Driver Software. Follow the instructions on screen to install the latest drivers for your device.

        You can also use a driver updater tool like Driver Easy or Driver Booster to automatically scan and update all your drivers with one click.


        Conclusion

        In this article, we have explained what binkw32.dll is and why you need it, how to fix binkw32.dll errors, and answered some frequently asked questions about this topic. We hope that this article has helped you to download binkw32.dll for Call of Duty Black Ops 2 and play the game without any problems. If you have any other questions or comments, feel free to leave them below.


        FAQs


        What are some common binkw32.dll error messages?


        Some of the common binkw32.dll error messages are:

        • Missing BINKW32.DLL
        • Binkw32.dll Not Found
        • This application failed to start because BINKW32.DLL was not found. Re-installing the application may fix this problem.
        • Cannot find binkw32.dll!
        • An attempt to delay-load a .dll or get a function address in a delay-loaded .dll failed. Dll: binkw32.dll
        • This program can't start because binkw32.dll is missing from your computer. Try reinstalling the program to fix this problem.
        • The procedure entry point _BinkSetVolume@12 could not be located in the dynamic link library binkw32.dll.
        • The procedure entry point _BinkSetMemory@8 could not be located in the dynamic link library binkw32.dll.

        What are some games that use the binkw32.dll file?


        Some of the games that use the binkw32.dll file are:

        • Age of Conan
        • Civilization III
        • BioShock
        • The Elder Scrolls IV: Oblivion
        • Star Wars: Battlefront II
        • Tomb Raider: Legend
        • Dungeon Lords
        • Demon Stone
        • Battlefield 2142
        • Battlefield 1942
        • Age of Empires III
        • Dungeon Siege II
        • World in Conflict
        • Sid Meier's Pirates!
        • Broken Sword 4
        • Ragnarok
        • Hitman: Blood Money
        • Battlefield Vietnam
        • Empire Earth II
        • DarkRO

        Where can I download the binkw32.dll file safely?


        The safest way to download the binkw32.dll file is to get it from its original source, which is RAD Game Tools. You can download their codec from their official website here. Alternatively, you can also get the file from your original game CD/DVD or digital download source. Do not download the file from any other websites or sources, as they might contain viruses or malware that can harm your computer.


        How can I prevent binkw32.dll errors in the future?


        To prevent binkw32.dll errors in the future, you should:

        • Avoid using cracked or pirated versions of games that use the file.
        • Keep your game and drivers updated to the latest versions.
        • Scan your computer regularly for viruses and malware.
        • Clean your registry and fix any errors or conflicts.
        • Back up your files and folders regularly.

        What if none of the solutions work for me?


        If none of the solutions work for you, you might need to contact the game developer or publisher for further assistance. They might have some specific instructions or solutions for your particular case. You can also try posting your problem on online forums or communities related to Call of Duty Black Ops 2 or gaming in general. You might find some helpful tips or suggestions from other users who have faced similar issues.

        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Get AutoCAD for Free and Unleash Your Creativity.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Get AutoCAD for Free and Unleash Your Creativity.md deleted file mode 100644 index b0cc2cf504a12426c7ae3cf024649dce18121727..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Get AutoCAD for Free and Unleash Your Creativity.md +++ /dev/null @@ -1,38 +0,0 @@ -

        How to Install AutoCAD Cracked Version and Design Anything You Want


        AutoCAD is a powerful software that allows you to create 2D and 3D designs for various purposes, such as architecture, engineering, construction, manufacturing, etc. However, AutoCAD is not free and requires a license key to activate. That's why some people use AutoCAD cracked version, which is a modified version of AutoCAD that bypasses the license verification and lets you use it for free.


        But how can you install AutoCAD cracked version safely and securely? And what are the benefits and risks of using it? In this article, we will answer these questions and show you how to install AutoCAD cracked version in a few simple steps.


        Benefits of Using AutoCAD Cracked Version


        Using AutoCAD cracked version has some advantages over the original AutoCAD software. Here are some of them:

        • You can use it for free without paying for a license key.
        • You can enjoy all the features of AutoCAD, such as drawing tools, editing tools, annotation tools, 3D modeling tools, etc.
        • You can create any type of design you want, from simple sketches to complex 3D models.
        • You can export your designs to various formats, such as PDF, DWG, DXF, etc.
        • You can collaborate with other designers and share your work online.

        Risks of Using AutoCAD Cracked Version


        However, using AutoCAD cracked version also has some drawbacks and risks that you should be aware of. Here are some of them:

        • You might violate the intellectual property rights of the AutoCAD developers and face legal consequences.
        • You might expose your computer to viruses, malware, spyware, or other harmful programs that might damage your system or steal your data.
        • You might encounter errors, bugs, crashes, or compatibility issues that might affect the performance or functionality of AutoCAD or your computer.
        • You might not receive any updates or support from the AutoCAD developers or customer service.
        • You might lose your designs or data if the AutoCAD cracked version fails to save them properly or corrupts them during the process.

        How to Install AutoCAD Cracked Version Safely and Securely


        If you still want to install AutoCAD cracked version despite the risks involved, you should follow these steps to do it safely and securely:

        1. Download a reliable antivirus software and scan your computer for any viruses or malware before downloading anything.
        2. Download a VPN software and connect to a secure server to hide your IP address and protect your online privacy.
        3. Download a trusted source of AutoCAD cracked version from the internet. You can search for it on Google or use one of these links: https://cracksway.com/autocad-crack/, https://crackhomes.com/autocad-2020-crack/, https://cracksumo.com/autocad-2020-crack/.
        4. Extract the downloaded file using a file extractor software such as WinRAR or 7-Zip.
        5. Run the setup file and follow the instructions to install AutoCAD cracked version on your computer.
        6. Copy the crack file from the extracted folder and paste it into the installation folder of AutoCAD.
        7. Restart your computer and launch AutoCAD cracked version. You should see a message saying that AutoCAD has been activated successfully.
        8. Enjoy using AutoCAD cracked version to design anything you want.

        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Beyblade Burst MOD APK 11.0.4 with Unlimited Money and Enjoy the Spinning Top Battles.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Beyblade Burst MOD APK 11.0.4 with Unlimited Money and Enjoy the Spinning Top Battles.md deleted file mode 100644 index 6375bc6234d048114ccd3e9ed852adb66669e131..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Beyblade Burst MOD APK 11.0.4 with Unlimited Money and Enjoy the Spinning Top Battles.md +++ /dev/null @@ -1,93 +0,0 @@ - -

Beyblade Burst App Download Mod APK: How to Enjoy the Spinning Top Battles on Your Android Device

Introduction

If you are a fan of the Beyblade anime series, you might have wondered how it would feel to take part in its exciting spinning top battles yourself. Wonder no more: you can now experience the thrill of Beyblade on your Android device with the Beyblade Burst App. And if you want to make the game even more fun and rewarding, you can download the modded version of the app, which gives you unlimited money, all beyblades unlocked, and more. In this article, we will tell you everything you need to know about the Beyblade Burst App download mod apk, including its features, how to install it, and some frequently asked questions.

beyblade burst app download mod apk

Download Zip ---> https://bltlly.com/2uOqkF

What is Beyblade Burst?

Beyblade Burst is a Japanese anime and manga series that follows the adventures of a group of young bladers who compete in tournaments using spinning tops called beyblades. Each beyblade has its own unique design, abilities, and spirit. The bladers can also customize their beyblades with different parts and launchers to enhance their performance. The series is the third generation of the Beyblade franchise, following the original Beyblade and the Beyblade Metal Saga.

What is Beyblade Burst App?

Beyblade Burst App is a mobile game based on the anime series that lets you create, customize, and battle with your own beyblades. You can scan your physical beyblades from the toy line and use them in the game, or create virtual beyblades from scratch. You can also challenge players from around the world in online multiplayer mode, or team up with your friends in co-op mode. The game features stunning graphics, realistic physics, and immersive sound effects that make you feel like you are in the middle of a real beyblade battle.

What is Beyblade Burst App Mod APK?

Beyblade Burst App Mod APK is a modified version of the original game that gives you access to extra features not available in the official app, including unlimited money, all beyblades unlocked, extra customization options, multiplayer mode, co-op mode, and more. With these features, you can enjoy the game without limitations or restrictions: buy any item you want, use any beyblade you like, and dominate your opponents with ease.

Features of Beyblade Burst App Mod APK

Unlimited Money

One of the best features of the modded app is that it gives you unlimited money to spend on anything you want. You can buy new beyblades, parts, launchers, stadiums, and more without worrying about running out of cash, and you can upgrade your beyblades to make them stronger and faster. With unlimited money, you can build the ultimate beyblade collection and experiment with different combinations.

All Beyblades Unlocked

Another great feature of the modded app is that it unlocks all the beyblades in the game. You don't have to scan your physical toys or complete missions to get them; simply choose any beyblade from the menu and start playing with it. You can also switch between beyblades anytime you want. This way, you can try out all the different types of beyblades and find your favorite.

Customization Options

The modded app also gives you more customization options. You can change the colors, stickers, and effects of your beyblades to make them look cooler and more unique, and you can mix and match parts from different beyblades to create your own custom combinations. Unleash your creativity and show off your style with your customized beyblades.


Multiplayer Mode

If you want to test your skills against other players, you can use the multiplayer mode of the modded app. You can join online matches and battle bladers from around the world, chat with them, and make new friends. You can choose from different game modes, such as ranked battles, friendly battles, tournament battles, and more, and earn rewards and trophies for winning matches and climbing the leaderboards.

Co-op Mode

If you prefer to play with your friends, you can use the co-op mode of the modded app. You can team up with up to three other players and take on challenging missions and bosses together. You can also share your beyblades and parts with your teammates and help each other out. Co-op mode is a great way to have fun and cooperate with your friends.

How to Download and Install Beyblade Burst App Mod APK

Now that you know the features of the modded app, you might be wondering how to download and install it on your Android device. Don't worry, it's very easy. Just follow these steps:

Step 1: Enable Unknown Sources

Before you can install the modded app, you need to enable unknown sources on your device. This allows you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then Security, then Unknown Sources, and turn it on.

Step 2: Download the APK File

Next, download the APK file of the modded app. You can find it on websites that offer modded apps, such as [APKPure], [APKDone], or [ModDroid]. Make sure you download the latest version of the app that is compatible with your device. You can also scan the QR code below to download the app directly.

[QR code for Beyblade Burst App Mod APK]

Step 3: Install the APK File

Once you have downloaded the APK file, install it on your device. Locate the file in your downloads folder, or wherever you saved it, and tap on it. You will see a pop-up window asking you to confirm the installation. Tap Install and wait a few seconds until the installation is complete.

Step 4: Launch the App and Enjoy

Finally, launch the app and enjoy your modded features. You will see a new icon on your home screen or app drawer labeled Beyblade Burst App Mod APK. Tap on it and start spinning your beyblades.

Conclusion

Beyblade Burst App is a fun and exciting game that lets you experience the spinning top battles of the anime series on your Android device. And with the modded version of the app, you can enjoy even more features that make the game more enjoyable and rewarding. You can download the Beyblade Burst App Mod APK from various websites or by scanning the QR code above. Just follow the steps we have provided to install the app on your device, then launch it and start playing with unlimited money, all beyblades unlocked, customization options, multiplayer mode, co-op mode, and more.

FAQs

Here are some frequently asked questions about Beyblade Burst App Mod APK:

1. Is Beyblade Burst App Mod APK safe to use?

Yes, Beyblade Burst App Mod APK is safe to use as long as you download it from a trusted source. However, we recommend that you scan the file with an antivirus program before installing it, just to be sure.

2. Do I need to root my device to use Beyblade Burst App Mod APK?

No, you don't need to root your device. The app works fine on both rooted and non-rooted devices.

3. Will I get banned for using Beyblade Burst App Mod APK?

There is a slight risk of getting banned, especially if you use the app in online mode. However, the chances are very low and you can avoid them by using the app wisely. Don't abuse the modded features or cheat in the game, don't brag about your modded app or share it with others, and be respectful and fair to other players.

4. Can I update Beyblade Burst App Mod APK?

Yes, you can update it whenever a new version is available. However, you need to download the new modded APK file from the same source as the previous one. You can also check for updates within the app itself. Just make sure you back up your data before updating to avoid losing your progress.

5. Can I use Beyblade Burst App Mod APK on other devices?

Yes, you can use it on any Android device that meets the minimum requirements of the game. However, you need to download and install the app separately on each device. You can also transfer your data from one device to another using the cloud save feature in the app.

6. Can I play Beyblade Burst App Mod APK offline?

Yes, you can play it offline without an internet connection. However, some features of the app may not work properly, or at all, in offline mode. For example, you won't be able to access multiplayer mode or co-op mode, or scan your physical beyblades. You will also miss out on updates and events that require an internet connection.
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Gladiator True Story - The Game with a Dinosaur Called Hugo.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Gladiator True Story - The Game with a Dinosaur Called Hugo.md deleted file mode 100644 index 30db94f1c324c8a6c0d3eacd81f63b71244ca704..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Gladiator True Story - The Game with a Dinosaur Called Hugo.md +++ /dev/null @@ -1,118 +0,0 @@ - -

Download Gladiator True Story: How to Watch the Epic Historical Film Online

Introduction

If you are a fan of historical epics, action-packed dramas, or Russell Crowe, you might want to download Gladiator True Story, one of the most acclaimed and popular films of the 21st century. Gladiator True Story is a 2000 film directed by Ridley Scott and starring Russell Crowe as Maximus, a Roman general who becomes a gladiator after being betrayed by the emperor Commodus. The film is loosely based on real historical figures and events, but it also takes artistic liberties and fictionalizes many aspects of the story. In this article, we will tell you what Gladiator True Story is about, why you should watch it, and how you can download it online legally and safely.

What is Gladiator True Story?

A brief summary of the plot and the main characters

Gladiator True Story is set in the year 180 AD, during the final days of the Roman Empire. The film follows Maximus, a loyal and successful general who leads the Roman army to victory against the Germanic tribes. However, his life changes dramatically when the emperor Marcus Aurelius dies and his son Commodus ascends to the throne. Commodus, jealous and paranoid of Maximus, orders his execution and kills his family. Maximus escapes, but he is captured by slave traders and sold to Proximo, a gladiator trainer. Maximus adopts the name Spaniard and becomes a famous gladiator in North Africa. He eventually makes his way to Rome, where he participates in the gladiatorial games sponsored by Commodus. There, he reveals his identity to Commodus and challenges him to a duel in the Colosseum. Maximus seeks revenge for his family and freedom for Rome, while Commodus tries to eliminate him and secure his power.

download gladiator true story

Download File 🆓 https://bltlly.com/2uOkG4

The historical accuracy and facts behind the film

Gladiator True Story is not a historically accurate film, but it is inspired by real people and events from Roman history. For example, Commodus and Lucilla were actual siblings who ruled Rome after their father Marcus Aurelius died. Commodus was indeed a cruel and unstable emperor who liked to fight as a gladiator in the arena, and he was assassinated by a wrestler named Narcissus, who was hired by his advisers. However, many details of their lives and personalities are changed or exaggerated in the film. For instance, Commodus did not kill his father, nor did he have an incestuous relationship with his sister. He also ruled for 12 years, not the three months shown in the film.

Maximus, on the other hand, is a fictional character based on several historical figures. He is partly inspired by Narcissus, who killed Commodus; Cincinnatus, a farmer who became a dictator and then returned to his farm; Spartacus, who led a slave rebellion; and Marcus Nonius Macrinus, a general and consul under Marcus Aurelius. The film also depicts some aspects of Roman society, such as the role of gladiators, the gladiatorial games, and Roman religions and beliefs, but it does not follow the historical facts closely.

Why should you watch Gladiator True Story?

The awards and accolades that the film received

Gladiator True Story is one of the most successful films of all time, both critically and commercially. It won five Academy Awards, including Best Picture, Best Actor for Russell Crowe, and Best Visual Effects; four BAFTA Awards, including Best Film and Best Cinematography; and two Golden Globe Awards, including Best Motion Picture – Drama and Best Original Score. It also received many other nominations and honors from film festivals and organizations, and was praised for its epic scope, compelling story, emotional impact, and technical excellence.

The stunning visuals and sound effects that create an immersive experience

Gladiator True Story is a feast for the eyes and ears, transporting you to the ancient world of Rome and its provinces. The film features spectacular scenes of battles, landscapes, and monuments, as well as realistic and detailed costumes, props, and sets. It used a combination of practical effects, such as real locations, sets, and props; computer-generated imagery (CGI), such as digital crowds, animals, and backgrounds; and miniatures, such as models of the Colosseum and the Roman Forum. The film also used innovative techniques, such as motion capture, facial animation, and digital grading, to enhance the realism and quality of the visuals.

The film also boasts a powerful and memorable soundtrack composed by Hans Zimmer and Lisa Gerrard. The music blends orchestral, ethnic, and electronic elements to create a rich and varied musical landscape that matches the mood and tone of the film, and it features vocals by Lisa Gerrard, whose haunting voice adds a layer of emotion and depth to the score. In addition, the film uses sound effects to create a realistic and immersive sound environment that enhances the action and drama.

The powerful performances and themes that resonate with the audience

Gladiator True Story is not just a spectacle, but also a story that touches the hearts and minds of the audience. The film features outstanding performances by a talented cast who bring their characters to life with passion and skill. Russell Crowe delivers a career-defining performance as Maximus, a complex and sympathetic hero who struggles with loss, grief, anger, loyalty, honor, and justice. Joaquin Phoenix is equally impressive as Commodus, a twisted and conflicted villain driven by insecurity, jealousy, madness, and ambition. The film also features strong supporting roles by Connie Nielsen as Lucilla, Oliver Reed as Proximo, Djimon Hounsou as Juba, Richard Harris as Marcus Aurelius, Derek Jacobi as Gracchus, and many others.

The film also explores themes that resonate with the audience on a universal level: freedom, courage, revenge, family, love, friendship, faith, and destiny. It shows how Maximus fights for his freedom and his ideals, avenges his family and his emperor, forms bonds with his fellow gladiators and allies, loves his wife and son, believes in the gods and the afterlife, and fulfills his destiny as a hero of Rome. It also shows how Commodus struggles with his insecurities and desires, abuses his power and his people, alienates his sister and his advisers, lusts after his sister and his own glory, rejects the gods and history, and meets his downfall as a tyrant of Rome. The film portrays the contrast between the decadence and corruption of Rome and the simplicity and virtue of Maximus's home, and it raises questions about the nature and meaning of life, death, and history.


How to download Gladiator True Story online?

The legal and safe ways to stream or buy the film online

If you want to download Gladiator True Story online, you should always do it legally and safely. Many websites and platforms offer the film for streaming or downloading, but not all of them are authorized or secure. Some may contain viruses, malware, or spyware that can harm your device or your data, and some may violate the intellectual property rights of the filmmakers and distributors. Therefore, you should always use reputable and reliable sources that have the proper licenses and permissions to offer the film online.

Some of the legal and safe ways to stream or buy Gladiator True Story online are:

• Amazon Prime Video: You can stream or buy Gladiator True Story on Amazon Prime Video, a subscription service that offers a wide range of movies and shows. You can also download the film to watch offline on compatible devices, and new users can get a 30-day free trial.
• Netflix: You can stream Gladiator True Story on Netflix, another subscription service with a huge library of movies and shows. You can also download the film to watch offline on compatible devices, and new users can get a free trial.
• iTunes: You can buy or rent Gladiator True Story on iTunes, a digital media store that offers movies, music, podcasts, and more. You can also download the film to watch offline on compatible devices.
• Google Play: You can buy or rent Gladiator True Story on Google Play, a digital media store that offers movies, apps, games, books, and more. You can also download the film to watch offline on compatible devices.
• Vudu: You can buy or rent Gladiator True Story on Vudu, a digital media store that offers movies, TV shows, and more. You can also download the film to watch offline on compatible devices.

The best platforms and devices to watch the film on

Once you have downloaded Gladiator True Story online, you can watch it on various platforms and devices depending on your preference and convenience. Some of the best options are:

• Smart TV: You can watch Gladiator True Story on your smart TV if it has access to any of the streaming services or digital media stores mentioned above. You can also connect your smart TV to your laptop or mobile device using an HDMI cable or a wireless connection.
• Laptop: You can watch the film on your laptop if it has access to any of the services above. You can also connect your laptop to your TV or projector using an HDMI cable or a wireless connection.
• Mobile device: You can watch the film on your mobile device if it has access to any of the services above. You can also connect your mobile device to your TV or projector using an HDMI cable or a wireless connection.
• VR headset: You can watch the film on your VR headset if it has access to any of the services above, or you can use a VR app that lets you watch 2D or 3D movies in VR mode.

The tips and tricks to enhance your viewing experience

To make the most of your viewing experience of Gladiator True Story, follow these tips and tricks to enhance your enjoyment and understanding of the film:

• Choose a high-quality video format: Pick a format with a high resolution, frame rate, bitrate, and audio quality so that you see every detail and hear every sound clearly and smoothly. You can check the video format and quality on the streaming service or digital media store you use, and adjust the settings on your device or platform to optimize it.
• Use a good sound system: Use speakers, headphones, earphones, or a soundbar with good volume, clarity, and surround sound so that you hear every line of dialogue, piece of music, and sound effect loudly and clearly. You can also adjust the settings on your device or platform to optimize the sound quality.
• Watch the film in a dark and quiet environment: Watch without distractions or interruptions so that you can focus on the film and immerse yourself in the story and the visuals. Use curtains, blinds, or shades to block out external light; use earplugs, noise-canceling headphones, or soundproofing to block out external noise; and turn off or mute notifications, alerts, and calls on your device or platform.
• Watch the film with subtitles: Turn on subtitles if you have difficulty understanding the language, accent, or dialogue, so that you don't miss any important information or nuance. You can choose subtitles in your preferred language or format on the streaming service or digital media store, and customize them in the settings on your device or platform.
• Watch the film with friends or family: Watch with others if you want to share your opinions, emotions, and reactions. You can watch together in person or online using a video call or a watch party app, and chat, comment, or discuss the film during or after watching it.

Conclusion

Gladiator True Story is a masterpiece of filmmaking that deserves to be watched by everyone who loves historical epics, action-packed dramas, or Russell Crowe. The film tells the captivating and inspiring story of a hero who fights for his freedom, his family, and his country against a tyrant who oppresses his people, his sister, and his history. It showcases amazing visuals and sound effects that create an immersive and realistic depiction of the ancient world of Rome and its provinces, and it features brilliant performances and themes that resonate with the audience on a universal level. The film is available for streaming or downloading online from various legal and safe sources that offer high-quality video and audio, it can be watched on many platforms and devices depending on your preference and convenience, and your viewing experience can be enhanced by following a few tips and tricks.

If you want to download Gladiator True Story online, do it now and enjoy this epic historical film that will leave you breathless and amazed.

FAQs

• Q: When was Gladiator True Story released?
• A: It was released on May 5, 2000 in the United States and on May 11, 2000 in the United Kingdom.
• Q: How long is Gladiator True Story?
• A: It has a runtime of 155 minutes for the theatrical version and 171 minutes for the extended version.
• Q: Who directed Gladiator True Story?
• A: It was directed by Ridley Scott, who is also known for films such as Alien, Blade Runner, Thelma & Louise, Black Hawk Down, The Martian, and more.
• Q: Who wrote Gladiator True Story?
• A: It was written by David Franzoni, John Logan, and William Nicholson.
• Q: How much did Gladiator True Story cost to make?
• A: The film had a budget of $103 million.
• Q: How much did Gladiator True Story earn at the box office?
• A: It earned $460 million worldwide at the box office.

        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/7554 - SKIDROW [ PC ][ Www.EliteDescargas.Com ] Fitgirl Repack.md b/spaces/tioseFevbu/cartoon-converter/scripts/7554 - SKIDROW [ PC ][ Www.EliteDescargas.Com ] Fitgirl Repack.md deleted file mode 100644 index f2cfbee1429a61f7735a2ba845f689b96183f9d2..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/7554 - SKIDROW [ PC ][ Www.EliteDescargas.Com ] Fitgirl Repack.md +++ /dev/null @@ -1,15 +0,0 @@ - -

        7554 - SKIDROW: A Historical Shooter Game Set in Vietnam

        -

        7554 - SKIDROW is a first-person shooter game that depicts the events of the First Indochina War between Vietnam and France. The game is developed by Emobi Games JSC, a Vietnamese studio, and released in 2011. The title refers to the date of May 7, 1954, when the French army surrendered at Dien Bien Phu.

        -

        The game follows four Vietnamese soldiers who participate in various battles of the war, such as Hanoi, Hoa Binh, and Dien Bien Phu. The game features realistic graphics, weapons, and environments, as well as historical footage and voice-overs. The game aims to show the perspective and patriotism of the Vietnamese people during the war.

        -

        7554 - SKIDROW [ PC ][ Www.EliteDescargas.Com ] Fitgirl Repack


        Download Zip ⚙⚙⚙ https://urlcod.com/2uHxLP



        -

        7554 - SKIDROW is available for Windows PC and can be downloaded from various torrent sites. One of them is Www.EliteDescargas.Com, which offers a Fitgirl Repack version of the game. Fitgirl Repacks are compressed versions of games that reduce the download size and installation time. The site advertises them as checked for integrity and working condition, though, as with anything from torrent sites, safety cannot be guaranteed.

        -

        If you are interested in historical shooter games or want to learn more about the First Indochina War, you might want to check out 7554 - SKIDROW. It is a rare example of a game that portrays the Vietnam War from a Vietnamese point of view.

        The First Indochina War was a result of the nationalist and communist movements that emerged in Vietnam after World War II. The Việt Minh, led by Hồ Chí Minh, fought against the French colonial rule and sought to establish an independent and unified Vietnam. The French, supported by the United States and other allies, tried to maintain their control over Indochina and prevent the spread of communism.

        -

        The war was marked by several major battles and campaigns, such as the Battle of Hanoi (1946), the Battle of Route Coloniale 4 (1950), the Battle of Vinh Yen (1951), the Battle of Nà Sản (1952), and the Battle of Dien Bien Phu (1954). The war also involved guerrilla warfare, air strikes, naval operations, and political negotiations.

        -

        The war ended with the Geneva Accords of 1954, which divided Vietnam into two zones: a northern zone under Việt Minh control and a southern zone under French control. The accords also recognized the independence of Laos and Cambodia, which had likewise been part of French Indochina, and called for a general election in 1956 to reunify Vietnam, but that election never took place.

        -

        The First Indochina War had significant consequences for the region and the world. It weakened the French influence in Asia and Africa, and strengthened the nationalist and communist movements in other colonies. It also set the stage for the Second Indochina War, or the Vietnam War, which would involve the United States, China, and the Soviet Union in a prolonged and costly conflict.

        -

        The Second Indochina War, better known as the Vietnam War, was a continuation of the conflict begun in the First Indochina War. Fought in Vietnam, Laos, and Cambodia from 1955 to 1975, it pitted the South Vietnamese government, backed by the United States and other anti-communist allies, against the North Vietnamese government, led by Hồ Chí Minh and supported by China and the Soviet Union. It also drew in the communist guerrillas of the National Liberation Front (NLF), known as the Viet Cong, who fought against the South Vietnamese regime and its allies.

        -

        The war was triggered by several factors, such as the failure of the Geneva Accords to reunify Vietnam peacefully, the rise of nationalism and communism in Southeast Asia, the Cold War rivalry between the United States and the Soviet Union, and the domino theory that feared the spread of communism in Asia. The war was marked by several major events and phases, such as the Gulf of Tonkin incident (1964), which led to the escalation of U.S. military involvement in Vietnam; the Tet Offensive (1968), which was a massive communist attack on South Vietnamese cities and bases; the Vietnamization policy (1969–1973), which aimed to transfer more responsibility for the war to the South Vietnamese army; and the Paris Peace Accords (1973), which ended U.S. military involvement in Vietnam.

        -

        The war ended with the fall of Saigon, the capital of South Vietnam, to North Vietnamese forces on April 30, 1975, and Vietnam was reunified under a communist regime on July 2, 1976. The consequences for the region and the world were significant: millions of deaths, injuries, and displacements among civilians and combatants; the destabilization of Laos and Cambodia, where communist movements rose up against their governments; strained relations between the United States and its allies, and between China and the Soviet Union; and anti-war protests and social movements in many countries, especially the United States.

        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Crack ((NEW)) Autocad 2013 Mac Keygen.md b/spaces/tioseFevbu/cartoon-converter/scripts/Crack ((NEW)) Autocad 2013 Mac Keygen.md deleted file mode 100644 index 3dac16a7df8170ff2ec01a0e943c1e8f38bedfaf..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Crack ((NEW)) Autocad 2013 Mac Keygen.md +++ /dev/null @@ -1,73 +0,0 @@ -
        -

        How to Use AutoCAD for Mac Legally and Safely (and Some Alternatives)

        -

        AutoCAD is one of the most popular and powerful CAD (computer-aided design) software in the world. It is used by architects, engineers, designers, drafters, and other professionals to create 2D and 3D drawings, models, and designs. However, AutoCAD is also one of the most expensive and complex CAD software in the market. It requires a high-end computer system, a valid license, and a steep learning curve to use it effectively.

        -

        Some people may be tempted to use an AutoCAD 2013 Mac keygen, a program that generates serial numbers and activation codes for AutoCAD 2013 on Mac OS, thinking it will let them use AutoCAD for free and without limitations. However, this is not true. Cracking AutoCAD this way is illegal and unethical: it violates the terms of use and license agreement of Autodesk, the developer of AutoCAD, and it exposes users to various risks, such as malware infection, data loss, poor performance, and legal and financial consequences.

        -

        Crack Autocad 2013 Mac Keygen


        Download ––– https://urlcod.com/2uHxaE



        -

        In this article, we will not show you how to crack AutoCAD 2013 Mac keygen, but we will provide you with some information and tips on how to use AutoCAD for Mac legally and safely, as well as some suggestions on free or affordable CAD software for Mac that you can try instead. We hope you will find this article helpful and informative.

        -

        How to Use AutoCAD for Mac Legally and Safely

        -

        If you want to use AutoCAD for Mac legally and safely, you need to follow these steps:

        -
          -
        1. Check the system requirements for AutoCAD for Mac. According to Autodesk, the system requirements for AutoCAD for Mac vary depending on the version you want to use. For example, the system requirements for AutoCAD 2020 for Mac are:
            -
          • Operating System: Apple® macOS® Catalina v10.15; Mojave v10.14; High Sierra v10.13
          • -
          • Model: Apple Mac Pro® 4.1 or later; MacBook Pro® 5.1 or later; iMac® 8.1 or later; Mac mini® 3.1 or later; MacBook Air® 2.1 or later; MacBook® 5.1 or later
          • -
          • Memory: 4 GB of RAM (8 GB or above recommended)
          • -
          • Display Card: 64-bit capable graphics card with 1 GB of VRAM or more
          • -
          • Disk Space: 3 GB of available disk space for download and installation
          • -
          • Pointing Device: Apple® Mouse, Apple Magic Mouse, Magic Trackpad, MacBook® Pro trackpad, or Microsoft-compliant mouse
          • -
          • Printer: Mac OS X-compliant printer
          • -
          -
        2. -
        3. Download and install AutoCAD for Mac. You can download AutoCAD for Mac from the Autodesk website. You can choose to download a free trial version, which lasts for 30 days, or a full version, which requires a subscription. To install AutoCAD for Mac, you need to follow the instructions on the screen. You may need to enter your serial number and product key, which you can find in your Autodesk account or email confirmation.
        4. -
        5. Activate and register AutoCAD for Mac. After installing AutoCAD for Mac, you need to activate and register it to verify your license and access all the features and functionality. You can activate and register AutoCAD for Mac online or offline. To activate and register online, you need to sign in with your Autodesk ID and password. To activate and register offline, you need to request an activation code from Autodesk.
        6. -
        7. Optimize the performance and security of AutoCAD for Mac. To ensure that AutoCAD for Mac runs smoothly and securely on your computer, you need to do some optimization tasks, such as:
            -
          • Update your operating system and software regularly. This will help you fix any bugs, improve compatibility, and enhance security.
          • -
          • Use antivirus software and firewall. This will help you protect your computer from viruses, malware, and hackers.
          • -
          • Clean up your disk space and memory. This will help you free up some space and speed up your computer.
          • -
          • Adjust your settings and preferences. This will help you customize your AutoCAD for Mac experience according to your needs and preferences.
          • -
          -
        8. -
        9. Subscribe to Genuine AutoCAD. If you want to use AutoCAD for Mac beyond the trial period, you need to subscribe to Genuine AutoCAD. Genuine AutoCAD is the official and legal version of AutoCAD that offers many benefits, such as:
            -
          • Access to the latest updates and features
          • -
          • Technical support and customer service
          • -
          • Cloud storage and collaboration tools
          • -
          • Flexible payment options and plans
          • -
          • Discounts and offers
          • -
          You can subscribe to Genuine AutoCAD from the Autodesk website or from an authorized reseller. You can choose from different subscription types, such as monthly, yearly, or multi-year. You can also choose from different subscription levels, such as standard, premium, or ultimate.
        10. -
        -

        Free or Affordable CAD Software for Mac Alternatives

        -

        If you are looking for some free or affordable CAD software for Mac alternatives that have similar or better features and functionality than AutoCAD, you may want to check out these options:

        | Name | Description | Pros | Cons |
        | --- | --- | --- | --- |
        | LibreCAD | A free and open-source 2D CAD software compatible with Mac OS X 10.7 or later. | Easy to use and learn; supports many file formats; large community of users and developers; many tools and features for 2D drawing and design | No 3D modeling; limited customization options; some bugs and glitches |
        | FreeCAD | A free and open-source 3D CAD software compatible with Mac OS X 10.11 or later. | Supports parametric modeling; supports many file formats; modular architecture that allows extensions and plugins; many tools and features for 3D modeling and design | Steep learning curve; complex user interface; some stability issues |
        | NanoCAD | A low-cost 2D/3D CAD software compatible with Mac OS X 10.12 or later. | Familiar user interface similar to AutoCAD; supports DWG/DXF file formats; many tools and features for 2D/3D drawing and design; free version with basic functionality | Requires an internet connection for activation; limited technical support; some compatibility issues with other software |
        | SketchUp | A popular 3D CAD software compatible with Mac OS X 10.12 or later. | Simple and intuitive user interface; supports many file formats and 3D printing; large library of 3D models and materials; free web-based version with basic functionality | Requires an internet connection for the web-based version; limited tools and features for 2D drawing and design; high subscription cost for the pro version |
        | QCAD | A free and open-source 2D CAD software compatible with Mac OS X 10.7 or later. | User-friendly and customizable interface; supports many file formats and scripting languages; many tools and features for 2D drawing and design; pro version with advanced functionality at a low subscription cost | No 3D modeling; some performance issues |
        -

        These are just some of the free or affordable CAD software for Mac alternatives that you can try instead of cracking AutoCAD 2013 Mac keygen. You can find more options by searching online or asking other CAD users for recommendations. You can also compare and contrast the features, functionality, compatibility, and cost of different CAD software for Mac by using online tools or reviews.

        -
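As a toy illustration of comparing CAD options on features, ease of use, and cost, the sketch below ranks a few of the tools above with a simple weighted score. The weights and ratings are illustrative placeholders only, not real benchmarks; fill in your own ratings after trying each tool.

```python
# Toy weighted comparison of CAD options for Mac.
# All scores are illustrative placeholders (1-5 scale), not measured data.

WEIGHTS = {"features": 0.4, "ease_of_use": 0.3, "cost": 0.3}

CANDIDATES = {
    "LibreCAD": {"features": 3, "ease_of_use": 4, "cost": 5},
    "FreeCAD":  {"features": 4, "ease_of_use": 2, "cost": 5},
    "SketchUp": {"features": 4, "ease_of_use": 5, "cost": 3},
}

def rank(candidates, weights):
    # Weighted sum per tool, sorted from best to worst total score.
    scored = {
        name: sum(weights[k] * ratings[k] for k in weights)
        for name, ratings in candidates.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank(CANDIDATES, WEIGHTS):
    print(f"{name}: {score:.2f}")
```

Adjust the weights to match what matters for your work; a hobbyist might weight cost more heavily, while a professional might weight features.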

        Conclusion

        -

        In this article, we have discussed why cracking AutoCAD 2013 Mac keygen is illegal, unethical, and risky, and how to use AutoCAD for Mac legally and safely. We have also provided you with some free or affordable CAD software for Mac alternatives that have similar or better features and functionality than AutoCAD. We hope you have learned something new and useful from this article.

        -

        If you want to use AutoCAD for Mac, we suggest that you subscribe to Genuine AutoCAD from the Autodesk website or from an authorized reseller. This will give you access to the latest updates and features, technical support and customer service, cloud storage and collaboration tools, flexible payment options and plans, discounts and offers, and more. You will also avoid the risks of malware infection, data loss, poor performance, legal and financial consequences, and more that come with cracking AutoCAD 2013 Mac keygen.

        -

        -

        If you want to try some free or affordable CAD software for Mac alternatives, we suggest that you check out LibreCAD, FreeCAD, NanoCAD, SketchUp, QCAD, or any other option that suits your needs and preferences. You can download or access them from their official websites or from other trusted sources. You can also compare and contrast their pros and cons by using online tools or reviews.

        -

        Whatever CAD software for Mac you choose to use, we hope that you will enjoy creating 2D and 3D drawings, models, and designs with it. We also hope that you will respect the intellectual property rights of the developers and follow the terms of use and license agreement of the software. Thank you for reading this article.

        -

        FAQs

        -

        Here are some frequently asked questions about AutoCAD for Mac and its alternatives:

        -
          -
        1. What is the difference between AutoCAD for Mac and AutoCAD for Windows?
          AutoCAD for Mac and AutoCAD for Windows are both versions of AutoCAD that run on different operating systems. They have similar core functionality, but they also have some differences in user interface, features, commands, shortcuts, file formats, compatibility, and performance. For example, AutoCAD for Mac has a more streamlined user interface that follows the Mac OS design guidelines, while AutoCAD for Windows has a more traditional user interface that follows the Windows design guidelines. AutoCAD for Mac also has some features that are not available in AutoCAD for Windows, such as Touch Bar support, Quick View Layouts, Quick View Drawings, etc., while AutoCAD for Windows has some features that are not available in AutoCAD for Mac, such as Dynamic Input, Sheet Set Manager, Express Tools, etc.
        2. -
        3. Can I use AutoCAD for Mac on multiple computers?
          Yes, you can use AutoCAD for Mac on multiple computers if you have a subscription to Genuine AutoCAD. You can install AutoCAD for Mac on up to three computers or devices using the same Autodesk ID. However, you can only use one computer or device at a time. If you want to use another computer or device, you need to sign out from the current one first.
        4. -
        5. Can I open files created in AutoCAD for Windows in AutoCAD for Mac?
          Yes, you can open files created in AutoCAD for Windows in AutoCAD for Mac if they are saved in DWG or DXF formats, which are the native file formats of AutoCAD. However, you may encounter some compatibility issues, such as missing fonts, linetypes, hatch patterns, etc., that are specific to the Windows platform. To avoid these issues, you can use the eTransmit command in AutoCAD for Windows to create a transmittal package that includes all the files and resources needed to open the file in AutoCAD for Mac. You can also use the Save As command in AutoCAD for Mac to save the file in an older or newer version of DWG or DXF format that is compatible with AutoCAD for Windows.
        6. -
        7. Can I use AutoCAD for Mac offline?
          Yes, you can use AutoCAD for Mac offline if you have a subscription to Genuine AutoCAD. However, you need to connect to the internet at least once every 30 days to verify your license and access the cloud services. If you do not connect to the internet within 30 days, your AutoCAD for Mac will run in reduced functionality mode, which means that you can only view and print your files, but not edit or save them.
        8. -
        9. How can I learn AutoCAD for Mac?
          There are many ways to learn AutoCAD for Mac, such as:
            -
          • Using the built-in help system and tutorials in AutoCAD for Mac. You can access them from the Help menu or by pressing F1.
          • -
          • Visiting the Autodesk website and browsing the learning resources, such as articles, videos, webinars, forums, blogs, etc.
          • -
          • Enrolling in an online course or a classroom training program that covers AutoCAD for Mac.
          • -
          • Reading a book or a guide that teaches AutoCAD for Mac.
          • -
          • Asking a friend or a colleague who knows AutoCAD for Mac to teach you or give you some tips.
          • -
          • Practicing and experimenting with AutoCAD for Mac on your own projects or assignments.
          • -
          -
        10. -

        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Gsa Seo Indexer Crack 14.md b/spaces/tioseFevbu/cartoon-converter/scripts/Gsa Seo Indexer Crack 14.md deleted file mode 100644 index 8f9bb91dd6448e53c9d97b09fcfd64b2e3a79cd9..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Gsa Seo Indexer Crack 14.md +++ /dev/null @@ -1,22 +0,0 @@ - -

        How to Use GSA SEO Indexer to Boost Your Rankings

        -

        GSA SEO Indexer is a powerful tool that can help you get your website indexed by search engines like Google or Bing more quickly. It works by submitting your site to thousands of whois and statistics websites, resulting in many backlinks. Backlinks are important for SEO because they increase your authority and relevance in the eyes of the search engines.

        -

        However, not all backlinks are created equal. Some backlinks may not be recognized by the search engines if they are not indexed. That's why you need GSA SEO Indexer to make sure that your backlinks are visible and effective. GSA SEO Indexer can index your backlinks in minutes, saving you time and money compared to other indexing services that charge you per URL.

        -

        Gsa Seo Indexer Crack 14


        DOWNLOADhttps://urlcod.com/2uHxSb



        -

        In this article, we will show you how to use GSA SEO Indexer to boost your rankings. You will need a license key to activate the software, which you can buy from the official website[^1^] or get for free from some crack sites[^2^] [^3^]. However, we do not recommend using cracked versions as they may contain viruses or malware that can harm your computer or compromise your data.

        -

        Step 1: Download and Install GSA SEO Indexer

        -

        The first step is to download and install GSA SEO Indexer on your computer. You can download it from the official website[^1^] or from any trusted source. The installation process is simple and straightforward. Just follow the instructions on the screen and accept the terms and conditions.

        -

        Step 2: Activate GSA SEO Indexer

        -

        The next step is to activate GSA SEO Indexer with your license key. You can buy a license key from the official website[^1^] or get one for free from some crack sites[^2^] [^3^]. To activate the software, open it and click on the "Help" menu. Then select "Enter Registration Details" and enter your name and license key. Click on "OK" and you should see a message saying that your software is activated.

        -

        Step 3: Add Your URLs to GSA SEO Indexer

        -

        The final step is to add your URLs to GSA SEO Indexer and start indexing them. You can add your URLs manually or import them from a file. To add them manually, click on the "Add URL" button and enter your URL in the box. You can also enter multiple URLs separated by commas or spaces. To import them from a file, click on the "Import" button and select your file. The file should contain one URL per line.

        -
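To prepare such an import file, a small script can clean up a backlink list before you load it. The sketch below is plain Python and independent of GSA SEO Indexer itself; the filename `urls.txt` and the URLs are examples only.

```python
# Build a clean URL list for import: one URL per line, no duplicates or blanks.
# Assumes the tool accepts a plain text file with one URL per line, as
# described above; all names and URLs here are illustrative.

def write_url_list(urls, path="urls.txt"):
    seen = set()
    cleaned = []
    for url in urls:
        url = url.strip()
        # Keep only non-empty http(s) URLs, first occurrence only.
        if url.startswith(("http://", "https://")) and url not in seen:
            seen.add(url)
            cleaned.append(url)
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(cleaned) + "\n")
    return cleaned

if __name__ == "__main__":
    backlinks = [
        "https://example.com/page1",
        "https://example.com/page1",    # duplicate, will be dropped
        "  https://example.org/post ",  # whitespace is trimmed
        "not-a-url",                    # skipped: not http(s)
    ]
    print(write_url_list(backlinks))
```

The resulting `urls.txt` can then be selected with the "Import" button.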

        After adding your URLs, you can choose the indexing mode from the drop-down menu. There are four modes available: Full, Quick, Custom, and Ping Only. Full mode submits your URLs to all available sites, Quick mode submits them to only a few sites, Custom mode lets you select which sites to submit them to, and Ping Only mode only pings your URLs without submitting them. We recommend using Full mode for maximum results.

        -

        Step 4: Start Indexing Your URLs

        -

        Once you have added your URLs and chosen the indexing mode, you can start indexing them by clicking on the "Start" button. You will see a progress bar showing how many URLs have been submitted and indexed. You can also see the details of each submission by clicking on the "Log" tab. You can pause or stop the indexing process at any time by clicking on the "Pause" or "Stop" buttons.

        -

        Step 5: Check Your Results

        -

        After indexing your URLs, you can check your results by clicking on the "Results" tab. You will see a list of your URLs with their status and index rate. The status shows whether the URL was submitted successfully or not, and the index rate shows how many sites have indexed it. You can also export your results to a file by clicking on the "Export" button.

        -

        Conclusion

        -

        GSA SEO Indexer is a great tool that can help you get your website indexed by search engines more quickly and easily. It can save you time and money compared to indexing services that charge per URL, and it takes only a few minutes to set up and run.

        -

        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/History Of The Christmas Songs [BETTER].md b/spaces/tioseFevbu/cartoon-converter/scripts/History Of The Christmas Songs [BETTER].md deleted file mode 100644 index 853f1eba9415415920b5b1157365ecea81df0cb4..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/History Of The Christmas Songs [BETTER].md +++ /dev/null @@ -1,19 +0,0 @@ -
        -

        The History of Christmas Songs

        -

        Christmas songs are an integral part of the festive season, but have you ever wondered how they came to be? In this article, we will explore the origins and evolution of some of the most popular and beloved Christmas songs in history.

        -

        The Earliest Christmas Songs

        -

        The earliest Christmas songs were not songs at all, but chants and hymns that celebrated the birth of Jesus Christ. The first recorded Christmas song is thought to be Veni redemptor gentium (Come, Redeemer of the Nations), composed by Saint Ambrose in the fourth century. This Latin hymn was sung in churches during Advent, the period of preparation for Christmas.

        -

        history of the christmas songs


        DOWNLOADhttps://urlcod.com/2uHvtI



        -

        Another early Christmas song is Adeste Fideles (O Come, All Ye Faithful), a Latin hymn praising the mystery of the Incarnation. Though sometimes dated as far back as the 13th century, it is usually attributed to John Francis Wade in the 18th century, and the familiar English translation, "O Come, All Ye Faithful," was made by Frederick Oakeley in 1841.

        -

        The Rise of Carols

        -

        Carols are songs that express joy and celebration, often with a religious theme. The word carol comes from the French word carole, which means a circle dance with singing. Carols were popular in medieval Europe, especially in France and England, where they were sung by wandering minstrels and folk singers during Christmas and other festivals.

        -

        Some of the best-known traditional carols are The Holly and the Ivy, God Rest Ye Merry, Gentlemen, and Good King Wenceslas. These carols draw on older folk customs, such as decorating with evergreens and giving alms to the poor during winter, that were later Christianized and adapted to fit the Christmas story.

        -

        The Golden Age of Christmas Songs

        -

        The 19th and 20th centuries saw a boom in the production and popularity of Christmas songs, thanks to the influence of composers, poets, and musicians from various countries and cultures. Some of the most famous Christmas songs from this period are Silent Night, Jingle Bells, O Holy Night, White Christmas, and Rudolph the Red-Nosed Reindeer.

        -

        These songs reflect the diverse aspects of Christmas, such as peace, joy, love, nostalgia, humor, and fantasy. They also capture the spirit of the times, such as the social changes brought by industrialization, urbanization, and globalization. Some of these songs have become classics that are sung and played every year around the world.

        -

        -

        The Future of Christmas Songs

        -

        Christmas songs are not only a part of our past, but also our present and future. They continue to evolve and innovate with new styles, genres, and messages. Some of the recent examples are All I Want for Christmas Is You, Last Christmas, Do They Know It's Christmas?, and Fairytale of New York.

        -

        These songs reflect the modern challenges and opportunities of celebrating Christmas in a diverse and complex world. They also express our hopes and dreams for a better future for ourselves and others. As long as we have music and imagination, we will always have new Christmas songs to enjoy and share.

        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Jaws 12 Crack Free Download ((FREE)).md b/spaces/tioseFevbu/cartoon-converter/scripts/Jaws 12 Crack Free Download ((FREE)).md deleted file mode 100644 index d5fcf8eaca2e06419b9d58a6cff45e457b7c0b5b..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Jaws 12 Crack Free Download ((FREE)).md +++ /dev/null @@ -1,36 +0,0 @@ - -

        How to Download JAWS 12 Crack for Free

        -

        JAWS is a popular screen reader software that helps visually impaired users to access and use computers. However, JAWS is not a free software and requires a license to activate. If you are looking for a way to download JAWS 12 crack for free, you may be tempted by some websites that claim to offer it. But be careful, as these websites may contain malware, viruses, or scams that can harm your computer or steal your personal information.

        -

        jaws 12 crack free download


        DOWNLOADhttps://urlcod.com/2uHvFX



        -

        In this article, we will explain why you should avoid downloading JAWS 12 crack from untrusted sources, and what are some legal and safe alternatives to get JAWS for free or at a lower cost.

        - -

        Why You Should Not Download JAWS 12 Crack

        -

        Downloading JAWS 12 crack from unknown websites is not only illegal, but also risky. Here are some of the dangers of doing so:

        -
          -
        • You may download a fake or corrupted file that does not work or damages your computer.
        • -
        • You may download a file that contains malware, such as spyware, ransomware, or trojans, that can infect your computer and compromise your security and privacy.
        • -
        • You may download a file that requires you to complete surveys, enter personal information, or pay money to access the crack, which can be a scam or a phishing attempt.
        • -
        • You may violate the intellectual property rights of Freedom Scientific, the developer of JAWS, and face legal consequences.
        • -
        • You may miss out on the latest updates, features, and bug fixes of JAWS, as cracked versions are usually outdated and incompatible with newer versions of Windows or other software.
        • -
        -

        Therefore, we strongly advise you not to download JAWS 12 crack from any website that offers it for free. It is not worth the risk and the hassle.


        How to Get JAWS for Free or at a Lower Cost


        If you need JAWS for your personal or professional use, there are some legitimate ways to get it for free or at a lower cost. Here are some of them:

        • If you are a student or an educator in the US, you can apply for a free annual license of JAWS through the Freedom Scientific Student License Program [1]. This program allows you to use JAWS on any computer that you own or use for educational purposes.
        • If you are a user of an older version of JAWS (version 11 or earlier), you can upgrade to JAWS 12 for free through the Freedom Scientific SMA Upgrade Program [1]. This program allows you to get the latest version of JAWS without paying any additional fees.
        • If you are a user of another screen reader (such as NVDA or Narrator), you can switch to JAWS at a discounted price through the Freedom Scientific Competitive Upgrade Program [1]. This program allows you to get JAWS for 50% off the regular price.
        • If you are a user of a Windows 10 ARM64 device (such as the Surface Pro X), you can download and use JAWS for ARM64 for free through the Freedom Scientific ARM64 Program [1]. This program allows you to use JAWS on devices that run on ARM-based processors.
        • If you are a user of Windows 11, you can download and use JAWS 2023 for free through the Freedom Scientific Windows 11 Program [2]. This program allows you to use JAWS on devices that run the Windows 11 operating system.

        As you can see, there are many ways to get JAWS legally and safely without resorting to downloading cracks. We hope this article has helped you understand why downloading a JAWS 12 crack is not a good idea, and what better options are available for getting JAWS for your needs.


        References

        1. Downloads: JAWS - Freedom Scientific
        2. JAWS 2023.2303.144 Free Download 2023 Latest
          \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/vcs/subversion.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/vcs/subversion.py deleted file mode 100644 index 89c8754ce090a41f94ac9691098db6a9ec119930..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/vcs/subversion.py +++ /dev/null @@ -1,324 +0,0 @@ -import logging -import os -import re -from typing import List, Optional, Tuple - -from pip._internal.utils.misc import ( - HiddenText, - display_path, - is_console_interactive, - is_installable_dir, - split_auth_from_netloc, -) -from pip._internal.utils.subprocess import CommandArgs, make_command -from pip._internal.vcs.versioncontrol import ( - AuthInfo, - RemoteNotFoundError, - RevOptions, - VersionControl, - vcs, -) - -logger = logging.getLogger(__name__) - -_svn_xml_url_re = re.compile('url="([^"]+)"') -_svn_rev_re = re.compile(r'committed-rev="(\d+)"') -_svn_info_xml_rev_re = re.compile(r'\s*revision="(\d+)"') -_svn_info_xml_url_re = re.compile(r"(.*)") - - -class Subversion(VersionControl): - name = "svn" - dirname = ".svn" - repo_name = "checkout" - schemes = ("svn+ssh", "svn+http", "svn+https", "svn+svn", "svn+file") - - @classmethod - def should_add_vcs_url_prefix(cls, remote_url: str) -> bool: - return True - - @staticmethod - def get_base_rev_args(rev: str) -> List[str]: - return ["-r", rev] - - @classmethod - def get_revision(cls, location: str) -> str: - """ - Return the maximum revision for all files under a given location - """ - # Note: taken from setuptools.command.egg_info - revision = 0 - - for base, dirs, _ in os.walk(location): - if cls.dirname not in dirs: - dirs[:] = [] - continue # no sense walking uncontrolled subdirs - dirs.remove(cls.dirname) - entries_fn = os.path.join(base, cls.dirname, "entries") - if not os.path.exists(entries_fn): 
- # FIXME: should we warn? - continue - - dirurl, localrev = cls._get_svn_url_rev(base) - - if base == location: - assert dirurl is not None - base = dirurl + "/" # save the root url - elif not dirurl or not dirurl.startswith(base): - dirs[:] = [] - continue # not part of the same svn tree, skip it - revision = max(revision, localrev) - return str(revision) - - @classmethod - def get_netloc_and_auth( - cls, netloc: str, scheme: str - ) -> Tuple[str, Tuple[Optional[str], Optional[str]]]: - """ - This override allows the auth information to be passed to svn via the - --username and --password options instead of via the URL. - """ - if scheme == "ssh": - # The --username and --password options can't be used for - # svn+ssh URLs, so keep the auth information in the URL. - return super().get_netloc_and_auth(netloc, scheme) - - return split_auth_from_netloc(netloc) - - @classmethod - def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]: - # hotfix the URL scheme after removing svn+ from svn+ssh:// readd it - url, rev, user_pass = super().get_url_rev_and_auth(url) - if url.startswith("ssh://"): - url = "svn+" + url - return url, rev, user_pass - - @staticmethod - def make_rev_args( - username: Optional[str], password: Optional[HiddenText] - ) -> CommandArgs: - extra_args: CommandArgs = [] - if username: - extra_args += ["--username", username] - if password: - extra_args += ["--password", password] - - return extra_args - - @classmethod - def get_remote_url(cls, location: str) -> str: - # In cases where the source is in a subdirectory, we have to look up in - # the location until we find a valid project root. - orig_location = location - while not is_installable_dir(location): - last_location = location - location = os.path.dirname(location) - if location == last_location: - # We've traversed up to the root of the filesystem without - # finding a Python project. 
- logger.warning( - "Could not find Python project for directory %s (tried all " - "parent directories)", - orig_location, - ) - raise RemoteNotFoundError - - url, _rev = cls._get_svn_url_rev(location) - if url is None: - raise RemoteNotFoundError - - return url - - @classmethod - def _get_svn_url_rev(cls, location: str) -> Tuple[Optional[str], int]: - from pip._internal.exceptions import InstallationError - - entries_path = os.path.join(location, cls.dirname, "entries") - if os.path.exists(entries_path): - with open(entries_path) as f: - data = f.read() - else: # subversion >= 1.7 does not have the 'entries' file - data = "" - - url = None - if data.startswith("8") or data.startswith("9") or data.startswith("10"): - entries = list(map(str.splitlines, data.split("\n\x0c\n"))) - del entries[0][0] # get rid of the '8' - url = entries[0][3] - revs = [int(d[9]) for d in entries if len(d) > 9 and d[9]] + [0] - elif data.startswith("= 1.7 - # Note that using get_remote_call_options is not necessary here - # because `svn info` is being run against a local directory. - # We don't need to worry about making sure interactive mode - # is being used to prompt for passwords, because passwords - # are only potentially needed for remote server requests. 
- xml = cls.run_command( - ["info", "--xml", location], - show_stdout=False, - stdout_only=True, - ) - match = _svn_info_xml_url_re.search(xml) - assert match is not None - url = match.group(1) - revs = [int(m.group(1)) for m in _svn_info_xml_rev_re.finditer(xml)] - except InstallationError: - url, revs = None, [] - - if revs: - rev = max(revs) - else: - rev = 0 - - return url, rev - - @classmethod - def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool: - """Always assume the versions don't match""" - return False - - def __init__(self, use_interactive: bool = None) -> None: - if use_interactive is None: - use_interactive = is_console_interactive() - self.use_interactive = use_interactive - - # This member is used to cache the fetched version of the current - # ``svn`` client. - # Special value definitions: - # None: Not evaluated yet. - # Empty tuple: Could not parse version. - self._vcs_version: Optional[Tuple[int, ...]] = None - - super().__init__() - - def call_vcs_version(self) -> Tuple[int, ...]: - """Query the version of the currently installed Subversion client. - - :return: A tuple containing the parts of the version information or - ``()`` if the version returned from ``svn`` could not be parsed. - :raises: BadCommand: If ``svn`` is not installed. 
- """ - # Example versions: - # svn, version 1.10.3 (r1842928) - # compiled Feb 25 2019, 14:20:39 on x86_64-apple-darwin17.0.0 - # svn, version 1.7.14 (r1542130) - # compiled Mar 28 2018, 08:49:13 on x86_64-pc-linux-gnu - # svn, version 1.12.0-SlikSvn (SlikSvn/1.12.0) - # compiled May 28 2019, 13:44:56 on x86_64-microsoft-windows6.2 - version_prefix = "svn, version " - version = self.run_command(["--version"], show_stdout=False, stdout_only=True) - if not version.startswith(version_prefix): - return () - - version = version[len(version_prefix) :].split()[0] - version_list = version.partition("-")[0].split(".") - try: - parsed_version = tuple(map(int, version_list)) - except ValueError: - return () - - return parsed_version - - def get_vcs_version(self) -> Tuple[int, ...]: - """Return the version of the currently installed Subversion client. - - If the version of the Subversion client has already been queried, - a cached value will be used. - - :return: A tuple containing the parts of the version information or - ``()`` if the version returned from ``svn`` could not be parsed. - :raises: BadCommand: If ``svn`` is not installed. - """ - if self._vcs_version is not None: - # Use cached version, if available. - # If parsing the version failed previously (empty tuple), - # do not attempt to parse it again. - return self._vcs_version - - vcs_version = self.call_vcs_version() - self._vcs_version = vcs_version - return vcs_version - - def get_remote_call_options(self) -> CommandArgs: - """Return options to be used on calls to Subversion that contact the server. - - These options are applicable for the following ``svn`` subcommands used - in this class. - - - checkout - - switch - - update - - :return: A list of command line arguments to pass to ``svn``. - """ - if not self.use_interactive: - # --non-interactive switch is available since Subversion 0.14.4. - # Subversion < 1.8 runs in interactive mode by default. 
- return ["--non-interactive"] - - svn_version = self.get_vcs_version() - # By default, Subversion >= 1.8 runs in non-interactive mode if - # stdin is not a TTY. Since that is how pip invokes SVN, in - # call_subprocess(), pip must pass --force-interactive to ensure - # the user can be prompted for a password, if required. - # SVN added the --force-interactive option in SVN 1.8. Since - # e.g. RHEL/CentOS 7, which is supported until 2024, ships with - # SVN 1.7, pip should continue to support SVN 1.7. Therefore, pip - # can't safely add the option if the SVN version is < 1.8 (or unknown). - if svn_version >= (1, 8): - return ["--force-interactive"] - - return [] - - def fetch_new( - self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int - ) -> None: - rev_display = rev_options.to_display() - logger.info( - "Checking out %s%s to %s", - url, - rev_display, - display_path(dest), - ) - if verbosity <= 0: - flag = "--quiet" - else: - flag = "" - cmd_args = make_command( - "checkout", - flag, - self.get_remote_call_options(), - rev_options.to_args(), - url, - dest, - ) - self.run_command(cmd_args) - - def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - cmd_args = make_command( - "switch", - self.get_remote_call_options(), - rev_options.to_args(), - url, - dest, - ) - self.run_command(cmd_args) - - def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - cmd_args = make_command( - "update", - self.get_remote_call_options(), - rev_options.to_args(), - dest, - ) - self.run_command(cmd_args) - - -vcs.register(Subversion) diff --git a/spaces/tjeagle/Subaru/app.py b/spaces/tjeagle/Subaru/app.py deleted file mode 100644 index 0709986d22ec2addb4c86a2a0f0fb25837e40424..0000000000000000000000000000000000000000 --- a/spaces/tjeagle/Subaru/app.py +++ /dev/null @@ -1,38 +0,0 @@ -# AUTOGENERATED! DO NOT EDIT! File to edit: ../app.ipynb. 
- -# %% auto 0 -__all__ = ['learn', 'labels', 'examples', 'type_of_guy', 'predict'] - -# %% ../app.ipynb 3 -from fastai.vision.all import * - -# %% ../app.ipynb 4 -learn = load_learner('export.pkl') - -# %% ../app.ipynb 5 -def type_of_guy(model): - return{ - 'ascent': 'soccer mom', - 'brz': 'parents pay for insurance', - 'crosstrek': 'lists hiking as their hobby. never hikes', - 'EPX': 'Japanese defense force', - 'forester': 'coexist sticker', - 'not': 'not a subi :(', - 'outback': 'climbing gym', - 'wrx': 'monster energy' - }.get(model, 'error') - - -# %% ../app.ipynb 6 -labels = learn.dls.vocab -def predict(img): - img = PILImage.create(img) - pred,pred_idx,probs = learn.predict(img) - return {type_of_guy(labels[i]): float(probs[i]) for i in range(len(labels))} - -# %% ../app.ipynb 7 -examples = ['Bell_412.jpg','Ferrari.jpg','WRX.jpg','Crosstrek.jpg'] - -# %% ../app.ipynb 8 -import gradio as gr -gr.Interface(fn=predict, inputs=gr.inputs.Image(shape=(512, 512)), outputs=gr.outputs.Label(num_top_classes=1),examples=examples).launch() diff --git a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/csrc/nms.h b/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/csrc/nms.h deleted file mode 100644 index 312fed4a7cb7c1bc6c2345b5e5d678cc6c1a7141..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/csrc/nms.h +++ /dev/null @@ -1,28 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-#pragma once -#include "cpu/vision.h" - -#ifdef WITH_CUDA -#include "cuda/vision.h" -#endif - - -at::Tensor nms(const at::Tensor& dets, - const at::Tensor& scores, - const float threshold) { - - if (dets.type().is_cuda()) { -#ifdef WITH_CUDA - // TODO raise error if not compiled with CUDA - if (dets.numel() == 0) - return at::empty({0}, dets.options().dtype(at::kLong).device(at::kCPU)); - auto b = at::cat({dets, scores.unsqueeze(1)}, 1); - return nms_cuda(b, threshold); -#else - AT_ERROR("Not compiled with GPU support"); -#endif - } - - at::Tensor result = nms_cpu(dets, scores, threshold); - return result; -} diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/standard_roi_head.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/standard_roi_head.py deleted file mode 100644 index 6ebdba8a966cf02b604105f6e4b3fcca00a0b6f0..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/standard_roi_head.py +++ /dev/null @@ -1,277 +0,0 @@ -import torch - -from mmdet.core import bbox2result, bbox2roi, build_assigner, build_sampler -from ..builder import HEADS, build_head, build_roi_extractor -from .base_roi_head import BaseRoIHead -from .test_mixins import BBoxTestMixin, MaskTestMixin - - -@HEADS.register_module() -class StandardRoIHead(BaseRoIHead, BBoxTestMixin, MaskTestMixin): - """Simplest base roi head including one bbox head and one mask head.""" - - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - self.bbox_assigner = None - self.bbox_sampler = None - if self.train_cfg: - self.bbox_assigner = build_assigner(self.train_cfg.assigner) - self.bbox_sampler = build_sampler( - self.train_cfg.sampler, context=self) - - def init_bbox_head(self, bbox_roi_extractor, bbox_head): - """Initialize ``bbox_head``""" - self.bbox_roi_extractor = build_roi_extractor(bbox_roi_extractor) - self.bbox_head = build_head(bbox_head) - - def 
init_mask_head(self, mask_roi_extractor, mask_head): - """Initialize ``mask_head``""" - if mask_roi_extractor is not None: - self.mask_roi_extractor = build_roi_extractor(mask_roi_extractor) - self.share_roi_extractor = False - else: - self.share_roi_extractor = True - self.mask_roi_extractor = self.bbox_roi_extractor - self.mask_head = build_head(mask_head) - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - # bbox head - outs = () - rois = bbox2roi([proposals]) - if self.with_bbox: - bbox_results = self._bbox_forward(x, rois) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - # mask head - if self.with_mask: - mask_rois = rois[:100] - mask_results = self._mask_forward(x, mask_rois) - outs = outs + (mask_results['mask_pred'], ) - return outs - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """ - Args: - x (list[Tensor]): list of multi-level img features. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposals (list[Tensors]): list of region proposals. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # assign gts and sample proposals - if self.with_bbox or self.with_mask: - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - sampling_results = [] - for i in range(num_imgs): - assign_result = self.bbox_assigner.assign( - proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i], - gt_labels[i]) - sampling_result = self.bbox_sampler.sample( - assign_result, - proposal_list[i], - gt_bboxes[i], - gt_labels[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - losses = dict() - # bbox head forward and loss - if self.with_bbox: - bbox_results = self._bbox_forward_train(x, sampling_results, - gt_bboxes, gt_labels, - img_metas) - losses.update(bbox_results['loss_bbox']) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train(x, sampling_results, - bbox_results['bbox_feats'], - gt_masks, img_metas) - losses.update(mask_results['loss_mask']) - - return losses - - def _bbox_forward(self, x, rois): - """Box head forward function used in both training and testing.""" - # TODO: a more flexible way to decide which feature maps to use - bbox_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], rois) - if self.with_shared_head: - bbox_feats = self.shared_head(bbox_feats) - cls_score, bbox_pred = self.bbox_head(bbox_feats) - - bbox_results = dict( - cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats) - return bbox_results - - def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels, - img_metas): - """Run forward function and calculate loss for box head in training.""" - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward(x, rois) - - bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, self.train_cfg) - loss_bbox = 
self.bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update(loss_bbox=loss_bbox) - return bbox_results - - def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks, - img_metas): - """Run forward function and calculate loss for mask head in - training.""" - if not self.share_roi_extractor: - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - mask_results = self._mask_forward(x, pos_rois) - else: - pos_inds = [] - device = bbox_feats.device - for res in sampling_results: - pos_inds.append( - torch.ones( - res.pos_bboxes.shape[0], - device=device, - dtype=torch.uint8)) - pos_inds.append( - torch.zeros( - res.neg_bboxes.shape[0], - device=device, - dtype=torch.uint8)) - pos_inds = torch.cat(pos_inds) - - mask_results = self._mask_forward( - x, pos_inds=pos_inds, bbox_feats=bbox_feats) - - mask_targets = self.mask_head.get_targets(sampling_results, gt_masks, - self.train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - loss_mask = self.mask_head.loss(mask_results['mask_pred'], - mask_targets, pos_labels) - - mask_results.update(loss_mask=loss_mask, mask_targets=mask_targets) - return mask_results - - def _mask_forward(self, x, rois=None, pos_inds=None, bbox_feats=None): - """Mask head forward function used in both training and testing.""" - assert ((rois is not None) ^ - (pos_inds is not None and bbox_feats is not None)) - if rois is not None: - mask_feats = self.mask_roi_extractor( - x[:self.mask_roi_extractor.num_inputs], rois) - if self.with_shared_head: - mask_feats = self.shared_head(mask_feats) - else: - assert bbox_feats is not None - mask_feats = bbox_feats[pos_inds] - - mask_pred = self.mask_head(mask_feats) - mask_results = dict(mask_pred=mask_pred, mask_feats=mask_feats) - return mask_results - - async def async_simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Async test without 
augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - - det_bboxes, det_labels = await self.async_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=rescale) - bbox_results = bbox2result(det_bboxes, det_labels, - self.bbox_head.num_classes) - if not self.with_mask: - return bbox_results - else: - segm_results = await self.async_test_mask( - x, - img_metas, - det_bboxes, - det_labels, - rescale=rescale, - mask_test_cfg=self.test_cfg.get('mask')) - return bbox_results, segm_results - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - - det_bboxes, det_labels = self.simple_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=rescale) - if torch.onnx.is_in_onnx_export(): - if self.with_mask: - segm_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return det_bboxes, det_labels, segm_results - return det_bboxes, det_labels - - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head.num_classes) - for i in range(len(det_bboxes)) - ] - - if not self.with_mask: - return bbox_results - else: - segm_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return list(zip(bbox_results, segm_results)) - - def aug_test(self, x, proposal_list, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. 
- """ - det_bboxes, det_labels = self.aug_test_bboxes(x, img_metas, - proposal_list, - self.test_cfg) - - if rescale: - _det_bboxes = det_bboxes - else: - _det_bboxes = det_bboxes.clone() - _det_bboxes[:, :4] *= det_bboxes.new_tensor( - img_metas[0][0]['scale_factor']) - bbox_results = bbox2result(_det_bboxes, det_labels, - self.bbox_head.num_classes) - - # det_bboxes always keep the original scale - if self.with_mask: - segm_results = self.aug_test_mask(x, img_metas, det_bboxes, - det_labels) - return [(bbox_results, segm_results)] - else: - return [bbox_results] diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_dense_heads/test_pisa_head.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_dense_heads/test_pisa_head.py deleted file mode 100644 index 6b1d42db49c498aca59b154b18d59794749643bf..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_dense_heads/test_pisa_head.py +++ /dev/null @@ -1,244 +0,0 @@ -import mmcv -import torch - -from mmdet.models.dense_heads import PISARetinaHead, PISASSDHead -from mmdet.models.roi_heads import PISARoIHead - - -def test_pisa_retinanet_head_loss(): - """Tests pisa retinanet head loss when truth is empty and non-empty.""" - s = 256 - img_metas = [{ - 'img_shape': (s, s, 3), - 'scale_factor': 1, - 'pad_shape': (s, s, 3) - }] - - cfg = mmcv.Config( - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - isr=dict(k=2., bias=0.), - carl=dict(k=1., bias=0.2), - allowed_border=0, - pos_weight=-1, - debug=False)) - self = PISARetinaHead(num_classes=4, in_channels=1, train_cfg=cfg) - - # Anchor head expects a multiple levels of features per image - feat = [ - torch.rand(1, 1, s // (2**(i + 2)), s // 
(2**(i + 2))) - for i in range(len(self.anchor_generator.strides)) - ] - cls_scores, bbox_preds = self.forward(feat) - - # Test that empty ground truth encourages the network to predict background - gt_bboxes = [torch.empty((0, 4))] - gt_labels = [torch.LongTensor([])] - - gt_bboxes_ignore = None - empty_gt_losses = self.loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, - img_metas, gt_bboxes_ignore) - # When there is no truth, the cls loss should be nonzero but there should - # be no box loss. - empty_cls_loss = empty_gt_losses['loss_cls'].sum() - empty_box_loss = empty_gt_losses['loss_bbox'].sum() - assert empty_cls_loss.item() > 0, 'cls loss should be non-zero' - assert empty_box_loss.item() == 0, ( - 'there should be no box loss when there are no true boxes') - - # When truth is non-empty then both cls and box loss should be nonzero for - # random inputs - gt_bboxes = [ - torch.Tensor([[23.6667, 23.8757, 238.6326, 151.8874]]), - ] - gt_labels = [torch.LongTensor([2])] - one_gt_losses = self.loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, - img_metas, gt_bboxes_ignore) - onegt_cls_loss = one_gt_losses['loss_cls'].sum() - onegt_box_loss = one_gt_losses['loss_bbox'].sum() - assert onegt_cls_loss.item() > 0, 'cls loss should be non-zero' - assert onegt_box_loss.item() > 0, 'box loss should be non-zero' - - -def test_pisa_ssd_head_loss(): - """Tests pisa ssd head loss when truth is empty and non-empty.""" - s = 256 - img_metas = [{ - 'img_shape': (s, s, 3), - 'scale_factor': 1, - 'pad_shape': (s, s, 3) - }] - - cfg = mmcv.Config( - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0., - ignore_iof_thr=-1, - gt_max_assign_all=False), - isr=dict(k=2., bias=0.), - carl=dict(k=1., bias=0.2), - smoothl1_beta=1., - allowed_border=-1, - pos_weight=-1, - neg_pos_ratio=3, - debug=False)) - ssd_anchor_generator = dict( - type='SSDAnchorGenerator', - scale_major=False, - input_size=300, - strides=[1], - ratios=([2], ), - 
basesize_ratio_range=(0.15, 0.9)) - self = PISASSDHead( - num_classes=4, - in_channels=(1, ), - train_cfg=cfg, - anchor_generator=ssd_anchor_generator) - - # Anchor head expects a multiple levels of features per image - feat = [ - torch.rand(1, 1, s // (2**(i + 2)), s // (2**(i + 2))) - for i in range(len(self.anchor_generator.strides)) - ] - cls_scores, bbox_preds = self.forward(feat) - - # Test that empty ground truth encourages the network to predict background - gt_bboxes = [torch.empty((0, 4))] - gt_labels = [torch.LongTensor([])] - - gt_bboxes_ignore = None - empty_gt_losses = self.loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, - img_metas, gt_bboxes_ignore) - # When there is no truth, the cls loss should be nonzero but there should - # be no box loss. - empty_cls_loss = sum(empty_gt_losses['loss_cls']) - empty_box_loss = sum(empty_gt_losses['loss_bbox']) - # SSD is special, #pos:#neg = 1: 3, so empth gt will also lead loss cls = 0 - assert empty_cls_loss.item() == 0, 'cls loss should be non-zero' - assert empty_box_loss.item() == 0, ( - 'there should be no box loss when there are no true boxes') - - # When truth is non-empty then both cls and box loss should be nonzero for - # random inputs - gt_bboxes = [ - torch.Tensor([[23.6667, 23.8757, 238.6326, 151.8874]]), - ] - gt_labels = [torch.LongTensor([2])] - one_gt_losses = self.loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, - img_metas, gt_bboxes_ignore) - onegt_cls_loss = sum(one_gt_losses['loss_cls']) - onegt_box_loss = sum(one_gt_losses['loss_bbox']) - assert onegt_cls_loss.item() > 0, 'cls loss should be non-zero' - assert onegt_box_loss.item() > 0, 'box loss should be non-zero' - - -def test_pisa_roi_head_loss(): - """Tests pisa roi head loss when truth is empty and non-empty.""" - train_cfg = mmcv.Config( - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - 
type='ScoreHLRSampler', - num=4, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True, - k=0.5, - bias=0.), - isr=dict(k=2., bias=0.), - carl=dict(k=1., bias=0.2), - allowed_border=0, - pos_weight=-1, - debug=False)) - - bbox_roi_extractor = dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=1, - featmap_strides=[1]) - - bbox_head = dict( - type='Shared2FCBBoxHead', - in_channels=1, - fc_out_channels=2, - roi_feat_size=7, - num_classes=4, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)) - - self = PISARoIHead(bbox_roi_extractor, bbox_head, train_cfg=train_cfg) - - s = 256 - img_metas = [{ - 'img_shape': (s, s, 3), - 'scale_factor': 1, - 'pad_shape': (s, s, 3) - }] - - # Anchor head expects a multiple levels of features per image - feat = [ - torch.rand(1, 1, s // (2**(i + 2)), s // (2**(i + 2))) - for i in range(1) - ] - - proposal_list = [ - torch.Tensor([[22.6667, 22.8757, 238.6326, 151.8874], [0, 3, 5, 7]]) - ] - - # Test that empty ground truth encourages the network to predict background - gt_bboxes = [torch.empty((0, 4))] - gt_labels = [torch.LongTensor([])] - gt_bboxes_ignore = None - - empty_gt_losses = self.forward_train(feat, img_metas, proposal_list, - gt_bboxes, gt_labels, - gt_bboxes_ignore) - - # When there is no truth, the cls loss should be nonzero but there should - # be no box loss. 
- empty_cls_loss = empty_gt_losses['loss_cls'].sum() - empty_box_loss = empty_gt_losses['loss_bbox'].sum() - assert empty_cls_loss.item() > 0, 'cls loss should be non-zero' - assert empty_box_loss.item() == 0, ( - 'there should be no box loss when there are no true boxes') - - # When truth is non-empty then both cls and box loss should be nonzero for - # random inputs - gt_bboxes = [ - torch.Tensor([[23.6667, 23.8757, 238.6326, 151.8874]]), - ] - gt_labels = [torch.LongTensor([2])] - - one_gt_losses = self.forward_train(feat, img_metas, proposal_list, - gt_bboxes, gt_labels, gt_bboxes_ignore) - onegt_cls_loss = one_gt_losses['loss_cls'].sum() - onegt_box_loss = one_gt_losses['loss_bbox'].sum() - assert onegt_cls_loss.item() > 0, 'cls loss should be non-zero' - assert onegt_box_loss.item() > 0, 'box loss should be non-zero' diff --git a/spaces/toonist/DualStyleGAN/README.md b/spaces/toonist/DualStyleGAN/README.md deleted file mode 100644 index 181fa765f5306c2ad1acc2797d3510fca7ad3cc4..0000000000000000000000000000000000000000 --- a/spaces/toonist/DualStyleGAN/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Portrait Style Transfer with DualStyleGAN -emoji: 😻 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.0.15 -app_file: app.py -pinned: false -duplicated_from: CVPR/DualStyleGAN ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/scripts/img2img.py b/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/scripts/img2img.py deleted file mode 100644 index 421e2151d9e9de75a142f5d5f532333645a36287..0000000000000000000000000000000000000000 --- a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/scripts/img2img.py +++ /dev/null @@ -1,293 +0,0 @@ -"""make variations of input image""" - -import argparse, os, sys, glob -import PIL -import torch -import numpy as np -from omegaconf import OmegaConf -from PIL import Image -from tqdm import tqdm, 
trange -from itertools import islice -from einops import rearrange, repeat -from torchvision.utils import make_grid -from torch import autocast -from contextlib import nullcontext -import time -from pytorch_lightning import seed_everything - -from ldm.util import instantiate_from_config -from ldm.models.diffusion.ddim import DDIMSampler -from ldm.models.diffusion.plms import PLMSSampler - - -def chunk(it, size): - it = iter(it) - return iter(lambda: tuple(islice(it, size)), ()) - - -def load_model_from_config(config, ckpt, verbose=False): - print(f"Loading model from {ckpt}") - pl_sd = torch.load(ckpt, map_location="cpu") - if "global_step" in pl_sd: - print(f"Global Step: {pl_sd['global_step']}") - sd = pl_sd["state_dict"] - model = instantiate_from_config(config.model) - m, u = model.load_state_dict(sd, strict=False) - if len(m) > 0 and verbose: - print("missing keys:") - print(m) - if len(u) > 0 and verbose: - print("unexpected keys:") - print(u) - - model.cuda() - model.eval() - return model - - -def load_img(path): - image = Image.open(path).convert("RGB") - w, h = image.size - print(f"loaded input image of size ({w}, {h}) from {path}") - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL.Image.LANCZOS) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - return 2.*image - 1. 
- - -def main(): - parser = argparse.ArgumentParser() - - parser.add_argument( - "--prompt", - type=str, - nargs="?", - default="a painting of a virus monster playing guitar", - help="the prompt to render" - ) - - parser.add_argument( - "--init-img", - type=str, - nargs="?", - help="path to the input image" - ) - - parser.add_argument( - "--outdir", - type=str, - nargs="?", - help="dir to write results to", - default="outputs/img2img-samples" - ) - - parser.add_argument( - "--skip_grid", - action='store_true', - help="do not save a grid, only individual samples. Helpful when evaluating lots of samples", - ) - - parser.add_argument( - "--skip_save", - action='store_true', - help="do not save indiviual samples. For speed measurements.", - ) - - parser.add_argument( - "--ddim_steps", - type=int, - default=50, - help="number of ddim sampling steps", - ) - - parser.add_argument( - "--plms", - action='store_true', - help="use plms sampling", - ) - parser.add_argument( - "--fixed_code", - action='store_true', - help="if enabled, uses the same starting code across all samples ", - ) - - parser.add_argument( - "--ddim_eta", - type=float, - default=0.0, - help="ddim eta (eta=0.0 corresponds to deterministic sampling", - ) - parser.add_argument( - "--n_iter", - type=int, - default=1, - help="sample this often", - ) - parser.add_argument( - "--C", - type=int, - default=4, - help="latent channels", - ) - parser.add_argument( - "--f", - type=int, - default=8, - help="downsampling factor, most often 8 or 16", - ) - parser.add_argument( - "--n_samples", - type=int, - default=2, - help="how many samples to produce for each given prompt. 
A.k.a batch size", - ) - parser.add_argument( - "--n_rows", - type=int, - default=0, - help="rows in the grid (default: n_samples)", - ) - parser.add_argument( - "--scale", - type=float, - default=5.0, - help="unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))", - ) - - parser.add_argument( - "--strength", - type=float, - default=0.75, - help="strength for noising/unnoising. 1.0 corresponds to full destruction of information in init image", - ) - parser.add_argument( - "--from-file", - type=str, - help="if specified, load prompts from this file", - ) - parser.add_argument( - "--config", - type=str, - default="configs/stable-diffusion/v1-inference.yaml", - help="path to config which constructs model", - ) - parser.add_argument( - "--ckpt", - type=str, - default="models/ldm/stable-diffusion-v1/model.ckpt", - help="path to checkpoint of model", - ) - parser.add_argument( - "--seed", - type=int, - default=42, - help="the seed (for reproducible sampling)", - ) - parser.add_argument( - "--precision", - type=str, - help="evaluate at this precision", - choices=["full", "autocast"], - default="autocast" - ) - - opt = parser.parse_args() - seed_everything(opt.seed) - - config = OmegaConf.load(f"{opt.config}") - model = load_model_from_config(config, f"{opt.ckpt}") - - device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") - model = model.to(device) - - if opt.plms: - raise NotImplementedError("PLMS sampler not (yet) supported") - sampler = PLMSSampler(model) - else: - sampler = DDIMSampler(model) - - os.makedirs(opt.outdir, exist_ok=True) - outpath = opt.outdir - - batch_size = opt.n_samples - n_rows = opt.n_rows if opt.n_rows > 0 else batch_size - if not opt.from_file: - prompt = opt.prompt - assert prompt is not None - data = [batch_size * [prompt]] - - else: - print(f"reading prompts from {opt.from_file}") - with open(opt.from_file, "r") as f: - data = f.read().splitlines() - data = list(chunk(data, 
batch_size)) - - sample_path = os.path.join(outpath, "samples") - os.makedirs(sample_path, exist_ok=True) - base_count = len(os.listdir(sample_path)) - grid_count = len(os.listdir(outpath)) - 1 - - assert os.path.isfile(opt.init_img) - init_image = load_img(opt.init_img).to(device) - init_image = repeat(init_image, '1 ... -> b ...', b=batch_size) - init_latent = model.get_first_stage_encoding(model.encode_first_stage(init_image)) # move to latent space - - sampler.make_schedule(ddim_num_steps=opt.ddim_steps, ddim_eta=opt.ddim_eta, verbose=False) - - assert 0. <= opt.strength <= 1., 'can only work with strength in [0.0, 1.0]' - t_enc = int(opt.strength * opt.ddim_steps) - print(f"target t_enc is {t_enc} steps") - - precision_scope = autocast if opt.precision == "autocast" else nullcontext - with torch.no_grad(): - with precision_scope("cuda"): - with model.ema_scope(): - tic = time.time() - all_samples = list() - for n in trange(opt.n_iter, desc="Sampling"): - for prompts in tqdm(data, desc="data"): - uc = None - if opt.scale != 1.0: - uc = model.get_learned_conditioning(batch_size * [""]) - if isinstance(prompts, tuple): - prompts = list(prompts) - c = model.get_learned_conditioning(prompts) - - # encode (scaled latent) - z_enc = sampler.stochastic_encode(init_latent, torch.tensor([t_enc]*batch_size).to(device)) - # decode it - samples = sampler.decode(z_enc, c, t_enc, unconditional_guidance_scale=opt.scale, - unconditional_conditioning=uc,) - - x_samples = model.decode_first_stage(samples) - x_samples = torch.clamp((x_samples + 1.0) / 2.0, min=0.0, max=1.0) - - if not opt.skip_save: - for x_sample in x_samples: - x_sample = 255. 
* rearrange(x_sample.cpu().numpy(), 'c h w -> h w c') - Image.fromarray(x_sample.astype(np.uint8)).save( - os.path.join(sample_path, f"{base_count:05}.png")) - base_count += 1 - all_samples.append(x_samples) - - if not opt.skip_grid: - # additionally, save as grid - grid = torch.stack(all_samples, 0) - grid = rearrange(grid, 'n b c h w -> (n b) c h w') - grid = make_grid(grid, nrow=n_rows) - - # to image - grid = 255. * rearrange(grid, 'c h w -> h w c').cpu().numpy() - Image.fromarray(grid.astype(np.uint8)).save(os.path.join(outpath, f'grid-{grid_count:04}.png')) - grid_count += 1 - - toc = time.time() - - print(f"Your samples are ready and waiting for you here: \n{outpath} \n" - f" \nEnjoy.") - - -if __name__ == "__main__": - main() diff --git a/spaces/trnt/twitter_emotions/README.md b/spaces/trnt/twitter_emotions/README.md deleted file mode 100644 index 9f2b676da6f7d60d6b605a1ce56560d72f44a1fa..0000000000000000000000000000000000000000 --- a/spaces/trnt/twitter_emotions/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Twitter_emotions -emoji: 🚀 -colorFrom: blue -colorTo: green -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/umoubuton/atri-bert-vits2/transforms.py b/spaces/umoubuton/atri-bert-vits2/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/umoubuton/atri-bert-vits2/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = 
torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = 
(right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - 
input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/45cmx-k Audio Driver !!TOP!!.md b/spaces/usbethFlerru/sovits-modelsV2/example/45cmx-k Audio Driver !!TOP!!.md deleted file mode 100644 index a91136e3e28242b430d0b5a60a208e6a503a0e32..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/45cmx-k Audio Driver !!TOP!!.md +++ /dev/null @@ -1,15 +0,0 @@ - -

          If you could not find the exact driver for your hardware device, or you aren't sure which driver is the right one, we have a program that will detect your hardware specifications and identify the correct driver for your needs. Please click here to download.

          -

          45cmx-k Audio Driver


          Download File ---> https://urlcod.com/2uyY1X



          -

          Foxconn 45CMX-K drivers will help eliminate failures and correct errors in your device's operation. Download Foxconn 45CMX-K drivers for different Windows versions (32- and 64-bit). After you have downloaded the archive with the Foxconn 45CMX-K driver, unpack it into any folder and run it.

          -

          Are you looking for a driver or manual for a Foxconn 45CMX-K motherboard? Do you have the latest drivers for your Foxconn 45CMX-K motherboard? You can find device drivers for Foxconn motherboards below on this page. Please select the correct driver version and operating system for the Foxconn 45CMX-K device driver, then click the «view details» link below to view more detailed driver file info.

          -

          Motherboard drivers are a kind of software, and therefore they are subject to the same problems that affect the operation of other kinds of programs. Keep in mind that motherboard drivers may also become damaged for various reasons, such as being virus-infected, or grow obsolete as a result of system upgrades or software changes.

          -

          Remember that it is very important to have exactly the driver that is needed for your specific motherboard model. Therefore, it is recommended that you search using the motherboard manufacturer's name and the model number of each motherboard.

          -

          If the driver listed is not the right version or operating system, search our driver archive for the correct version. Enter Foxconn BIOS 45CMX into the search box above and then submit. In the results, choose the best match for your PC and operating system.

          -

          Once you have downloaded your new driver, you'll need to install it. In Windows, use a built-in utility called Device Manager, which allows you to see all of the devices recognized by your system, and the drivers associated with them.
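          Besides Device Manager, Windows also ships a command-line tool, pnputil, that lists the third-party driver packages already installed. The following is only an illustrative sketch (the wrapper function name is ours, and it assumes a Windows machine — on other systems it simply returns nothing):

```python
import platform
import subprocess


def list_driver_packages() -> str:
    """Return pnputil's listing of installed third-party driver
    packages on Windows; return an empty string on other systems."""
    if platform.system() != "Windows":
        return ""
    # pnputil ships with Windows 10/11; /enum-drivers lists OEM driver packages
    result = subprocess.run(
        ["pnputil", "/enum-drivers"], capture_output=True, text=True
    )
    return result.stdout


if __name__ == "__main__":
    listing = list_driver_packages()
    print(listing if listing else "Not running on Windows; nothing to list.")
```

          Scanning this listing before and after installing a new driver is a quick way to confirm that the package was actually registered with the system.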

          -

          -

          Here is my hardware information:
          $ sudo inxi -Fxz
          System:
          Host: antix1 Kernel: 4.9.193-antix.1-486-smp i686 bits: 32 compiler: gcc
          v: 8.3.0 Desktop: IceWM 1.5.5+git20190610
          Distro: antiX-19_386-full Marielle Franco 16 October 2019
          base: Debian GNU/Linux 10 (buster)
          Machine:
          Type: Desktop Mobo: Foxconn model: 45CMX/45GMX/45CMX-K serial: N/A
          BIOS: Phoenix v: 6.00 PG date: 10/10/2007
          CPU:
          Topology: Dual Core model: Intel Pentium Dual E2160 bits: 64 type: MCP
          arch: Core Merom rev: D L2 cache: 1024 KiB
          flags: lm nx pae sse sse2 sse3 ssse3 bogomips: 7199
          Speed: 1200 MHz min/max: 1200/1800 MHz Core speeds (MHz): 1: 1200 2: 1200
          Graphics:
          Device-1: Intel 82945G/GZ Integrated Graphics vendor: Foxconn driver: i915
          v: kernel bus ID: 00:02.0
          Display: server: X.Org 1.20.4 driver: intel
          unloaded: fbdev,modesetting,vesa resolution: 1280×1024~60Hz
          OpenGL: renderer: Mesa DRI Intel 945G x86/MMX/SSE2 v: 1.4 Mesa 18.3.6
          direct render: Yes
          Audio:
          Device-1: Intel NM10/ICH7 Family High Definition Audio vendor: Foxconn
          driver: snd_hda_intel v: kernel bus ID: 00:1b.0
          Sound Server: ALSA v: k4.9.193-antix.1-486-smp
          Network:
          Device-1: Realtek RTL8101/2/6E PCI Express Fast/Gigabit Ethernet
          vendor: Foxconn driver: r8169 v: 2.3LK-NAPI port: de00 bus ID: 02:00.0
          IF: eth0 state: down mac:
          IF-ID-1: wlan0 state: up mac:
          Drives:
          Local Storage: total: 1.08 TiB used: 273.76 GiB (24.7%)
          ID-1: /dev/sda vendor: Samsung model: HD161HJ size: 149.05 GiB
          ID-2: /dev/sdb type: USB vendor: Seagate model: Expansion size: 931.51 GiB
          ID-3: /dev/sdc type: USB vendor: Kingston model: DataTraveler 3.0
          size: 28.90 GiB
          Partition:
          ID-1: / size: 1.54 GiB used: 902.6 MiB (57.1%) fs: overlay source: ERR-102
          Sensors:
          System Temperatures: cpu: 40.0 C mobo: N/A
          Fan Speeds (RPM): N/A
          Info:
          Processes: 266 Uptime: 5d 14h 22m Memory: 1.96 GiB used: 874.9 MiB (43.6%)
          Init: SysVinit runlevel: 5 Compilers: gcc: 8.3.0 Shell: bash v: 5.0.3
          inxi: 3.0.36

          -

          Foxconn 45CMX-K drivers will help to fix failures and errors in the device's operation. Download the Foxconn 45CMX-K drivers for the different versions of Windows operating systems (32- and 64-bit). After downloading the file with the Foxconn 45CMX-K driver, extract it to a folder of your choice and run it.

          -


          Motherboard : Foxconn 45CMX BIOS - Version: (765F1P07)

          Foxconn 45CMX BIOS 765F1P07

          Operating system Support: Windows
          File name: 765F1P07.zip

          Note: Keep your hardware drivers up to date, and remember to set a system restore point before installing any device driver.

          -
          -
          \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Burning Spear-Hail H.I.M. Full Album Zip.md b/spaces/usbethFlerru/sovits-modelsV2/example/Burning Spear-Hail H.I.M. Full Album Zip.md deleted file mode 100644 index 2f5b4eeba04b934cec09d02247200a21f25eac39..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Burning Spear-Hail H.I.M. Full Album Zip.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Burning Spear-Hail H.I.M. full album zip


          Download >>>>> https://urlcod.com/2uyXx0



          -
          -
          -

          diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Cp Pthc Avi Added By Request ((FREE)).md b/spaces/usbethFlerru/sovits-modelsV2/example/Cp Pthc Avi Added By Request ((FREE)).md deleted file mode 100644 index 9ac9e94a1e8829ad80404afee4eeae8e3db16804..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Cp Pthc Avi Added By Request ((FREE)).md +++ /dev/null @@ -1,9 +0,0 @@ -
          -

          Post CP is an acronym in which CP can have a variety of meanings. Initially used as an acronym for 4chan posters to indirectly mention or request a child pornography thread, in later years the acronym became commonly associated with the posting of other subjects with the same first letters (C & P, most notably "Cheese Pizza") to parody prohibited request threads or to troll those who hope to find child pornography. The users can also ask for illegal pornographic materials by requesting photos of delicious cake.

          -

          cp pthc avi | added by request


          Download Zip ☆☆☆☆☆ https://urlcod.com/2uyWgc



          -

          4chan's anonymous posting style has long been a way for users to request and post socially unacceptable content and taboos without facing many risks. Although 4chan houses content that other areas of the internet wouldn't post in public, the site still maintains a set of rules which also forbid the posting of "anything that violates local or United States law".[1] Regardless of this, users commonly try to post such content with the hope the site moderators won't spot it, such as in "mods are asleep" threads.

          -

          smart Web browser for Windows is a lightweight browser based upon Chromium with all major web functionality without intrusive visual errors.
          smart Web browser does not come with a lot of bloatware and developers may update the application from within the utility.
          The end user controls every aspect of smart Web browser and thus it is much faster than a traditional application. New features may be added in the future if clever developers bring their request in.
          The major problem with smart Web browser is that it requires

          -


          -

          aaccfb2cb3
          -
          -
          \ No newline at end of file diff --git a/spaces/user238921933/stable-diffusion-webui/modules/generation_parameters_copypaste.py b/spaces/user238921933/stable-diffusion-webui/modules/generation_parameters_copypaste.py deleted file mode 100644 index 3bab793d67baf1fdc598775722f1e44083c94d95..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/modules/generation_parameters_copypaste.py +++ /dev/null @@ -1,402 +0,0 @@ -import base64 -import html -import io -import math -import os -import re -from pathlib import Path - -import gradio as gr -from modules.paths import data_path -from modules import shared, ui_tempdir, script_callbacks -import tempfile -from PIL import Image - -re_param_code = r'\s*([\w ]+):\s*("(?:\\"[^,]|\\"|\\|[^\"])+"|[^,]*)(?:,|$)' -re_param = re.compile(re_param_code) -re_imagesize = re.compile(r"^(\d+)x(\d+)$") -re_hypernet_hash = re.compile("\(([0-9a-f]+)\)$") -type_of_gr_update = type(gr.update()) - -paste_fields = {} -registered_param_bindings = [] - - -class ParamBinding: - def __init__(self, paste_button, tabname, source_text_component=None, source_image_component=None, source_tabname=None, override_settings_component=None): - self.paste_button = paste_button - self.tabname = tabname - self.source_text_component = source_text_component - self.source_image_component = source_image_component - self.source_tabname = source_tabname - self.override_settings_component = override_settings_component - - -def reset(): - paste_fields.clear() - - -def quote(text): - if ',' not in str(text): - return text - - text = str(text) - text = text.replace('\\', '\\\\') - text = text.replace('"', '\\"') - return f'"{text}"' - - -def image_from_url_text(filedata): - if filedata is None: - return None - - if type(filedata) == list and len(filedata) > 0 and type(filedata[0]) == dict and filedata[0].get("is_file", False): - filedata = filedata[0] - - if type(filedata) == dict and filedata.get("is_file", False): - filename 
= filedata["name"] - is_in_right_dir = ui_tempdir.check_tmp_file(shared.demo, filename) - assert is_in_right_dir, 'trying to open image file outside of allowed directories' - - return Image.open(filename) - - if type(filedata) == list: - if len(filedata) == 0: - return None - - filedata = filedata[0] - - if filedata.startswith("data:image/png;base64,"): - filedata = filedata[len("data:image/png;base64,"):] - - filedata = base64.decodebytes(filedata.encode('utf-8')) - image = Image.open(io.BytesIO(filedata)) - return image - - -def add_paste_fields(tabname, init_img, fields, override_settings_component=None): - paste_fields[tabname] = {"init_img": init_img, "fields": fields, "override_settings_component": override_settings_component} - - # backwards compatibility for existing extensions - import modules.ui - if tabname == 'txt2img': - modules.ui.txt2img_paste_fields = fields - elif tabname == 'img2img': - modules.ui.img2img_paste_fields = fields - - -def create_buttons(tabs_list): - buttons = {} - for tab in tabs_list: - buttons[tab] = gr.Button(f"Send to {tab}", elem_id=f"{tab}_tab") - return buttons - - -def bind_buttons(buttons, send_image, send_generate_info): - """old function for backwards compatibility; do not use this, use register_paste_params_button""" - for tabname, button in buttons.items(): - source_text_component = send_generate_info if isinstance(send_generate_info, gr.components.Component) else None - source_tabname = send_generate_info if isinstance(send_generate_info, str) else None - - register_paste_params_button(ParamBinding(paste_button=button, tabname=tabname, source_text_component=source_text_component, source_image_component=send_image, source_tabname=source_tabname)) - - -def register_paste_params_button(binding: ParamBinding): - registered_param_bindings.append(binding) - - -def connect_paste_params_buttons(): - binding: ParamBinding - for binding in registered_param_bindings: - destination_image_component = 
paste_fields[binding.tabname]["init_img"] - fields = paste_fields[binding.tabname]["fields"] - override_settings_component = binding.override_settings_component or paste_fields[binding.tabname]["override_settings_component"] - - destination_width_component = next(iter([field for field, name in fields if name == "Size-1"] if fields else []), None) - destination_height_component = next(iter([field for field, name in fields if name == "Size-2"] if fields else []), None) - - if binding.source_image_component and destination_image_component: - if isinstance(binding.source_image_component, gr.Gallery): - func = send_image_and_dimensions if destination_width_component else image_from_url_text - jsfunc = "extract_image_from_gallery" - else: - func = send_image_and_dimensions if destination_width_component else lambda x: x - jsfunc = None - - binding.paste_button.click( - fn=func, - _js=jsfunc, - inputs=[binding.source_image_component], - outputs=[destination_image_component, destination_width_component, destination_height_component] if destination_width_component else [destination_image_component], - ) - - if binding.source_text_component is not None and fields is not None: - connect_paste(binding.paste_button, fields, binding.source_text_component, override_settings_component, binding.tabname) - - if binding.source_tabname is not None and fields is not None: - paste_field_names = ['Prompt', 'Negative prompt', 'Steps', 'Face restoration'] + (["Seed"] if shared.opts.send_seed else []) - binding.paste_button.click( - fn=lambda *x: x, - inputs=[field for field, name in paste_fields[binding.source_tabname]["fields"] if name in paste_field_names], - outputs=[field for field, name in fields if name in paste_field_names], - ) - - binding.paste_button.click( - fn=None, - _js=f"switch_to_{binding.tabname}", - inputs=None, - outputs=None, - ) - - -def send_image_and_dimensions(x): - if isinstance(x, Image.Image): - img = x - else: - img = image_from_url_text(x) - - if 
shared.opts.send_size and isinstance(img, Image.Image): - w = img.width - h = img.height - else: - w = gr.update() - h = gr.update() - - return img, w, h - - - -def find_hypernetwork_key(hypernet_name, hypernet_hash=None): - """Determines the config parameter name to use for the hypernet based on the parameters in the infotext. - - Example: an infotext provides "Hypernet: ke-ta" and "Hypernet hash: 1234abcd". For the "Hypernet" config - parameter this means there should be an entry that looks like "ke-ta-10000(1234abcd)" to set it to. - - If the infotext has no hash, then a hypernet with the same name will be selected instead. - """ - hypernet_name = hypernet_name.lower() - if hypernet_hash is not None: - # Try to match the hash in the name - for hypernet_key in shared.hypernetworks.keys(): - result = re_hypernet_hash.search(hypernet_key) - if result is not None and result[1] == hypernet_hash: - return hypernet_key - else: - # Fall back to a hypernet with the same name - for hypernet_key in shared.hypernetworks.keys(): - if hypernet_key.lower().startswith(hypernet_name): - return hypernet_key - - return None - - -def restore_old_hires_fix_params(res): - """for infotexts that specify old First pass size parameter, convert it into - width, height, and hr scale""" - - firstpass_width = res.get('First pass size-1', None) - firstpass_height = res.get('First pass size-2', None) - - if shared.opts.use_old_hires_fix_width_height: - hires_width = int(res.get("Hires resize-1", 0)) - hires_height = int(res.get("Hires resize-2", 0)) - - if hires_width and hires_height: - res['Size-1'] = hires_width - res['Size-2'] = hires_height - return - - if firstpass_width is None or firstpass_height is None: - return - - firstpass_width, firstpass_height = int(firstpass_width), int(firstpass_height) - width = int(res.get("Size-1", 512)) - height = int(res.get("Size-2", 512)) - - if firstpass_width == 0 or firstpass_height == 0: - from modules import processing - firstpass_width, 
firstpass_height = processing.old_hires_fix_first_pass_dimensions(width, height) - - res['Size-1'] = firstpass_width - res['Size-2'] = firstpass_height - res['Hires resize-1'] = width - res['Hires resize-2'] = height - - -def parse_generation_parameters(x: str): - """parses generation parameters string, the one you see in text field under the picture in UI: -``` -girl with an artist's beret, determined, blue eyes, desert scene, computer monitors, heavy makeup, by Alphonse Mucha and Charlie Bowater, ((eyeshadow)), (coquettish), detailed, intricate -Negative prompt: ugly, fat, obese, chubby, (((deformed))), [blurry], bad anatomy, disfigured, poorly drawn face, mutation, mutated, (extra_limb), (ugly), (poorly drawn hands), messy drawing -Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model hash: 45dee52b -``` - - returns a dict with field values - """ - - res = {} - - prompt = "" - negative_prompt = "" - - done_with_prompt = False - - *lines, lastline = x.strip().split("\n") - if len(re_param.findall(lastline)) < 3: - lines.append(lastline) - lastline = '' - - for i, line in enumerate(lines): - line = line.strip() - if line.startswith("Negative prompt:"): - done_with_prompt = True - line = line[16:].strip() - - if done_with_prompt: - negative_prompt += ("" if negative_prompt == "" else "\n") + line - else: - prompt += ("" if prompt == "" else "\n") + line - - res["Prompt"] = prompt - res["Negative prompt"] = negative_prompt - - for k, v in re_param.findall(lastline): - v = v[1:-1] if v[0] == '"' and v[-1] == '"' else v - m = re_imagesize.match(v) - if m is not None: - res[k+"-1"] = m.group(1) - res[k+"-2"] = m.group(2) - else: - res[k] = v - - # Missing CLIP skip means it was set to 1 (the default) - if "Clip skip" not in res: - res["Clip skip"] = "1" - - hypernet = res.get("Hypernet", None) - if hypernet is not None: - res["Prompt"] += f"""<hypernet:{hypernet}:{res.get("Hypernet strength", "1.0")}>""" - - if "Hires resize-1" not in res: - res["Hires resize-1"] = 0 - res["Hires resize-2"] = 0 - - 
restore_old_hires_fix_params(res) - - return res - - -settings_map = {} - -infotext_to_setting_name_mapping = [ - ('Clip skip', 'CLIP_stop_at_last_layers', ), - ('Conditional mask weight', 'inpainting_mask_weight'), - ('Model hash', 'sd_model_checkpoint'), - ('ENSD', 'eta_noise_seed_delta'), - ('Noise multiplier', 'initial_noise_multiplier'), - ('Eta', 'eta_ancestral'), - ('Eta DDIM', 'eta_ddim'), - ('Discard penultimate sigma', 'always_discard_next_to_last_sigma') -] - - -def create_override_settings_dict(text_pairs): - """creates processing's override_settings parameters from gradio's multiselect - - Example input: - ['Clip skip: 2', 'Model hash: e6e99610c4', 'ENSD: 31337'] - - Example output: - {'CLIP_stop_at_last_layers': 2, 'sd_model_checkpoint': 'e6e99610c4', 'eta_noise_seed_delta': 31337} - """ - - res = {} - - params = {} - for pair in text_pairs: - k, v = pair.split(":", maxsplit=1) - - params[k] = v.strip() - - for param_name, setting_name in infotext_to_setting_name_mapping: - value = params.get(param_name, None) - - if value is None: - continue - - res[setting_name] = shared.opts.cast_value(setting_name, value) - - return res - - -def connect_paste(button, paste_fields, input_comp, override_settings_component, tabname): - def paste_func(prompt): - if not prompt and not shared.cmd_opts.hide_ui_dir_config: - filename = os.path.join(data_path, "params.txt") - if os.path.exists(filename): - with open(filename, "r", encoding="utf8") as file: - prompt = file.read() - - params = parse_generation_parameters(prompt) - script_callbacks.infotext_pasted_callback(prompt, params) - res = [] - - for output, key in paste_fields: - if callable(key): - v = key(params) - else: - v = params.get(key, None) - - if v is None: - res.append(gr.update()) - elif isinstance(v, type_of_gr_update): - res.append(v) - else: - try: - valtype = type(output.value) - - if valtype == bool and v == "False": - val = False - else: - val = valtype(v) - - res.append(gr.update(value=val)) - 
except Exception: - res.append(gr.update()) - - return res - - if override_settings_component is not None: - def paste_settings(params): - vals = {} - - for param_name, setting_name in infotext_to_setting_name_mapping: - v = params.get(param_name, None) - if v is None: - continue - - if setting_name == "sd_model_checkpoint" and shared.opts.disable_weights_auto_swap: - continue - - v = shared.opts.cast_value(setting_name, v) - current_value = getattr(shared.opts, setting_name, None) - - if v == current_value: - continue - - vals[param_name] = v - - vals_pairs = [f"{k}: {v}" for k, v in vals.items()] - - return gr.Dropdown.update(value=vals_pairs, choices=vals_pairs, visible=len(vals_pairs) > 0) - - paste_fields = paste_fields + [(override_settings_component, paste_settings)] - - button.click( - fn=paste_func, - _js=f"recalculate_prompts_{tabname}", - inputs=[input_comp], - outputs=[x[0] for x in paste_fields], - ) - - diff --git a/spaces/vinid/webplip/visualization.py b/spaces/vinid/webplip/visualization.py deleted file mode 100644 index 563f8868747d48b7af05417ffca00c433006c7d4..0000000000000000000000000000000000000000 --- a/spaces/vinid/webplip/visualization.py +++ /dev/null @@ -1,44 +0,0 @@ -from pathlib import Path -import streamlit as st -import streamlit.components.v1 as components -import plotly.figure_factory as ff -import numpy as np -import pandas as pd -#from streamlit_plotly_events import plotly_events - - - -def app(): - st.markdown('#### Visualization') - - img_2d_embed = pd.read_csv('data/img_2d_embedding.csv', index_col=0) - img_2d_embed = img_2d_embed.sample(frac=0.1, random_state=0) - - txt_2d_embed = pd.read_csv('data/txt_2d_embedding.csv', index_col=0) - txt_2d_embed = txt_2d_embed.sample(frac=0.1, random_state=0) - - col1, col2 = st.columns(2) - with col1: - fig1 = ff.create_2d_density( - x=img_2d_embed['UMAP_1'], - y=img_2d_embed['UMAP_2'], - #colors=img_2d_embed['tag'], - colorscale='Blues', # set the color map - height=500, # set height of the 
figure - width=500, # set width of the figure - title='Image embedding visualized in 2D UMAP' - ) - #selected_points = plotly_events(fig1, click_event=True, hover_event=True) - st.plotly_chart(fig1, use_container_width=True) - - with col2: - fig2 = ff.create_2d_density( - x=txt_2d_embed['UMAP_1'], - y=txt_2d_embed['UMAP_2'], - #colors=img_2d_embed['tag'], - colorscale='Blues', # set the color map - height=500, # set height of the figure - width=500, # set width of the figure - title='Text embedding visualized in 2D UMAP' - ) - st.plotly_chart(fig2, use_container_width=True) \ No newline at end of file diff --git a/spaces/vishal2023/Pneumonia-detection/app.py b/spaces/vishal2023/Pneumonia-detection/app.py deleted file mode 100644 index 3bdfe23d651397238e6ee3067013b9fa2cec0001..0000000000000000000000000000000000000000 --- a/spaces/vishal2023/Pneumonia-detection/app.py +++ /dev/null @@ -1,17 +0,0 @@ -from tensorflow.keras.models import load_model -import numpy as np -import cv2 - -model = load_model('little_imporved_model.h5') - -def predict_from_img(img): - img = cv2.cvtColor(img,cv2.COLOR_RGB2GRAY) - img = img/255.0 - img = np.expand_dims(img,axis = 0) - output = model.predict(img)[0][0] - return {'NORMAL':float(output),'PNEUMONIA':float(1-output)} - -import gradio as gr -image = gr.inputs.Image(shape=(150,150)) -label = gr.outputs.Label(num_top_classes=2) -gr.Interface(fn=predict_from_img, inputs=image, outputs=label,title = 'PNEUMONIA-DETECTION').launch() \ No newline at end of file diff --git a/spaces/vslasor/VLS1-ASRLiveSpeechRecognition-GR/README.md b/spaces/vslasor/VLS1-ASRLiveSpeechRecognition-GR/README.md deleted file mode 100644 index 2954d82daab65da1dd536bda4d02761e10b097ba..0000000000000000000000000000000000000000 --- a/spaces/vslasor/VLS1-ASRLiveSpeechRecognition-GR/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: VLS1 ASRLiveSpeechRecognition GR -emoji: 🌖 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.8.2 -app_file: app.py -pinned: 
false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/webshop/amazon_shop/templates/review_page.html b/spaces/webshop/amazon_shop/templates/review_page.html deleted file mode 100644 index 805842909c2edb1ffc8db54ed1258bc62415c981..0000000000000000000000000000000000000000 --- a/spaces/webshop/amazon_shop/templates/review_page.html +++ /dev/null @@ -1,59 +0,0 @@ - - - - - - - - - - - -
          -
          -
          -
          -

          Instruction:
          {{ instruction_text }}

          -
          -
          -
          -
          -
          - -
          -
          -
          -
          - -
          -
          -
          -
          -
          -
          -
          - {% for review in product_info.Reviews %} -
          -
          -

          "{{review.title}}"

          -

          - {{review.score}} - {% for i in range(review.score | int) %} - - {% endfor %} - {% for i in range(5 - review.score | int) %} - - {% endfor %} -

          -

          {{review.body}}

          -
          -
          - {% endfor %} -
          -
          -
          -
          -
          -
          - - \ No newline at end of file diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/environment.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/environment.py deleted file mode 100644 index 24e6ada2f904dafe9bf2fd87d21b993723ada964..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/environment.py +++ /dev/null @@ -1,79 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 22:12 -@Author : alexanderwu -@File : environment.py -""" -import asyncio -from typing import Iterable - -from pydantic import BaseModel, Field - -from metagpt.memory import Memory -from metagpt.roles import Role -from metagpt.schema import Message - - -class Environment(BaseModel): - """环境,承载一批角色,角色可以向环境发布消息,可以被其他角色观察到 - Environment, hosting a batch of roles, roles can publish messages to the environment, and can be observed by other roles - - """ - - roles: dict[str, Role] = Field(default_factory=dict) - memory: Memory = Field(default_factory=Memory) - history: str = Field(default='') - - class Config: - arbitrary_types_allowed = True - - def add_role(self, role: Role): - """增加一个在当前环境的角色 - Add a role in the current environment - """ - role.set_env(self) - self.roles[role.profile] = role - - def add_roles(self, roles: Iterable[Role]): - """增加一批在当前环境的角色 - Add a batch of characters in the current environment - """ - for role in roles: - self.add_role(role) - - def publish_message(self, message: Message): - """向当前环境发布信息 - Post information to the current environment - """ - # self.message_queue.put(message) - self.memory.add(message) - self.history += f"\n{message}" - - async def run(self, k=1): - """处理一次所有信息的运行 - Process all Role runs at once - """ - # while not self.message_queue.empty(): - # message = self.message_queue.get() - # rsp = await self.manager.handle(message, self) - # self.message_queue.put(rsp) - for _ in range(k): - futures = [] - for role in self.roles.values(): - future = role.run() - futures.append(future) - - await 
asyncio.gather(*futures) - - def get_roles(self) -> dict[str, Role]: - """获得环境内的所有角色 - Get all roles in the environment - """ - return self.roles - - def get_role(self, name: str) -> Role: - """获得环境内的指定角色 - Get the specified role in the environment - """ - return self.roles.get(name, None) diff --git a/spaces/williamzhou2023/GPT2/custom.css b/spaces/williamzhou2023/GPT2/custom.css deleted file mode 100644 index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000 --- a/spaces/williamzhou2023/GPT2/custom.css +++ /dev/null @@ -1,162 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* Light color scheme */ -#chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: #FFFFFF !important; -} -[data-testid = "user"] { - background-color: #95EC69 !important; -} -/* Chat bubbles */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* Tables */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid 
var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* Inline code */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* Code blocks */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* Code highlighting styles */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ 
-.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* 
Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/xdecoder/Demo/xdecoder/backbone/build.py b/spaces/xdecoder/Demo/xdecoder/backbone/build.py deleted file mode 100644 index a559fa6a010d3379ff5fcbeb43c510122988735f..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Demo/xdecoder/backbone/build.py +++ /dev/null @@ -1,11 +0,0 @@ -from .registry import model_entrypoints -from .registry import is_model - -from .backbone import * - -def build_backbone(config, **kwargs): - model_name = config['MODEL']['BACKBONE']['NAME'] - if not is_model(model_name): - raise ValueError(f'Unknown model: {model_name}') - - return model_entrypoints(model_name)(config, **kwargs) \ No newline at end of file diff --git a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/modules/__init__.py b/spaces/xdecoder/Instruct-X-Decoder/xdecoder/modules/__init__.py deleted file mode 100644 index 6bbbff85221d3e15d34b52f69706896896c47ef3..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/modules/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .position_encoding import * -from .attention import * -from .postprocessing import * \ No newline at end of file diff --git a/spaces/xfys/yolov5_tracking/yolov5/models/tf.py b/spaces/xfys/yolov5_tracking/yolov5/models/tf.py deleted file mode 100644 index 
bc0a465d7edd094231bd7d2230eb9fe9270ed27e..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/yolov5/models/tf.py +++ /dev/null @@ -1,608 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license -""" -TensorFlow, Keras and TFLite versions of YOLOv5 -Authored by https://github.com/zldrobit in PR https://github.com/ultralytics/yolov5/pull/1127 - -Usage: - $ python models/tf.py --weights yolov5s.pt - -Export: - $ python export.py --weights yolov5s.pt --include saved_model pb tflite tfjs -""" - -import argparse -import sys -from copy import deepcopy -from pathlib import Path - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -# ROOT = ROOT.relative_to(Path.cwd()) # relative - -import numpy as np -import tensorflow as tf -import torch -import torch.nn as nn -from tensorflow import keras - -from models.common import (C3, SPP, SPPF, Bottleneck, BottleneckCSP, C3x, Concat, Conv, CrossConv, DWConv, - DWConvTranspose2d, Focus, autopad) -from models.experimental import MixConv2d, attempt_load -from models.yolo import Detect, Segment -from utils.activations import SiLU -from utils.general import LOGGER, make_divisible, print_args - - -class TFBN(keras.layers.Layer): - # TensorFlow BatchNormalization wrapper - def __init__(self, w=None): - super().__init__() - self.bn = keras.layers.BatchNormalization( - beta_initializer=keras.initializers.Constant(w.bias.numpy()), - gamma_initializer=keras.initializers.Constant(w.weight.numpy()), - moving_mean_initializer=keras.initializers.Constant(w.running_mean.numpy()), - moving_variance_initializer=keras.initializers.Constant(w.running_var.numpy()), - epsilon=w.eps) - - def call(self, inputs): - return self.bn(inputs) - - -class TFPad(keras.layers.Layer): - # Pad inputs in spatial dimensions 1 and 2 - def __init__(self, pad): - super().__init__() - if isinstance(pad, int): - self.pad = tf.constant([[0, 0], 
[pad, pad], [pad, pad], [0, 0]]) - else: # tuple/list - self.pad = tf.constant([[0, 0], [pad[0], pad[0]], [pad[1], pad[1]], [0, 0]]) - - def call(self, inputs): - return tf.pad(inputs, self.pad, mode='constant', constant_values=0) - - -class TFConv(keras.layers.Layer): - # Standard convolution - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None): - # ch_in, ch_out, weights, kernel, stride, padding, groups - super().__init__() - assert g == 1, "TF v2.2 Conv2D does not support 'groups' argument" - # TensorFlow convolution padding is inconsistent with PyTorch (e.g. k=3 s=2 'SAME' padding) - # see https://stackoverflow.com/questions/52975843/comparing-conv2d-with-padding-between-tensorflow-and-pytorch - conv = keras.layers.Conv2D( - filters=c2, - kernel_size=k, - strides=s, - padding='SAME' if s == 1 else 'VALID', - use_bias=not hasattr(w, 'bn'), - kernel_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()), - bias_initializer='zeros' if hasattr(w, 'bn') else keras.initializers.Constant(w.conv.bias.numpy())) - self.conv = conv if s == 1 else keras.Sequential([TFPad(autopad(k, p)), conv]) - self.bn = TFBN(w.bn) if hasattr(w, 'bn') else tf.identity - self.act = activations(w.act) if act else tf.identity - - def call(self, inputs): - return self.act(self.bn(self.conv(inputs))) - - -class TFDWConv(keras.layers.Layer): - # Depthwise convolution - def __init__(self, c1, c2, k=1, s=1, p=None, act=True, w=None): - # ch_in, ch_out, weights, kernel, stride, padding, groups - super().__init__() - assert c2 % c1 == 0, f'TFDWConv() output={c2} must be a multiple of input={c1} channels' - conv = keras.layers.DepthwiseConv2D( - kernel_size=k, - depth_multiplier=c2 // c1, - strides=s, - padding='SAME' if s == 1 else 'VALID', - use_bias=not hasattr(w, 'bn'), - depthwise_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()), - bias_initializer='zeros' if hasattr(w, 'bn') else 
keras.initializers.Constant(w.conv.bias.numpy())) - self.conv = conv if s == 1 else keras.Sequential([TFPad(autopad(k, p)), conv]) - self.bn = TFBN(w.bn) if hasattr(w, 'bn') else tf.identity - self.act = activations(w.act) if act else tf.identity - - def call(self, inputs): - return self.act(self.bn(self.conv(inputs))) - - -class TFDWConvTranspose2d(keras.layers.Layer): - # Depthwise ConvTranspose2d - def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0, w=None): - # ch_in, ch_out, weights, kernel, stride, padding, groups - super().__init__() - assert c1 == c2, f'TFDWConv() output={c2} must be equal to input={c1} channels' - assert k == 4 and p1 == 1, 'TFDWConv() only valid for k=4 and p1=1' - weight, bias = w.weight.permute(2, 3, 1, 0).numpy(), w.bias.numpy() - self.c1 = c1 - self.conv = [ - keras.layers.Conv2DTranspose(filters=1, - kernel_size=k, - strides=s, - padding='VALID', - output_padding=p2, - use_bias=True, - kernel_initializer=keras.initializers.Constant(weight[..., i:i + 1]), - bias_initializer=keras.initializers.Constant(bias[i])) for i in range(c1)] - - def call(self, inputs): - return tf.concat([m(x) for m, x in zip(self.conv, tf.split(inputs, self.c1, 3))], 3)[:, 1:-1, 1:-1] - - -class TFFocus(keras.layers.Layer): - # Focus wh information into c-space - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None): - # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - self.conv = TFConv(c1 * 4, c2, k, s, p, g, act, w.conv) - - def call(self, inputs): # x(b,w,h,c) -> y(b,w/2,h/2,4c) - # inputs = inputs / 255 # normalize 0-255 to 0-1 - inputs = [inputs[:, ::2, ::2, :], inputs[:, 1::2, ::2, :], inputs[:, ::2, 1::2, :], inputs[:, 1::2, 1::2, :]] - return self.conv(tf.concat(inputs, 3)) - - -class TFBottleneck(keras.layers.Layer): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5, w=None): # ch_in, ch_out, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = 
TFConv(c1, c_, 1, 1, w=w.cv1) - self.cv2 = TFConv(c_, c2, 3, 1, g=g, w=w.cv2) - self.add = shortcut and c1 == c2 - - def call(self, inputs): - return inputs + self.cv2(self.cv1(inputs)) if self.add else self.cv2(self.cv1(inputs)) - - -class TFCrossConv(keras.layers.Layer): - # Cross Convolution - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False, w=None): - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = TFConv(c1, c_, (1, k), (1, s), w=w.cv1) - self.cv2 = TFConv(c_, c2, (k, 1), (s, 1), g=g, w=w.cv2) - self.add = shortcut and c1 == c2 - - def call(self, inputs): - return inputs + self.cv2(self.cv1(inputs)) if self.add else self.cv2(self.cv1(inputs)) - - -class TFConv2d(keras.layers.Layer): - # Substitution for PyTorch nn.Conv2D - def __init__(self, c1, c2, k, s=1, g=1, bias=True, w=None): - super().__init__() - assert g == 1, "TF v2.2 Conv2D does not support 'groups' argument" - self.conv = keras.layers.Conv2D(filters=c2, - kernel_size=k, - strides=s, - padding='VALID', - use_bias=bias, - kernel_initializer=keras.initializers.Constant( - w.weight.permute(2, 3, 1, 0).numpy()), - bias_initializer=keras.initializers.Constant(w.bias.numpy()) if bias else None) - - def call(self, inputs): - return self.conv(inputs) - - -class TFBottleneckCSP(keras.layers.Layer): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None): - # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1) - self.cv2 = TFConv2d(c1, c_, 1, 1, bias=False, w=w.cv2) - self.cv3 = TFConv2d(c_, c_, 1, 1, bias=False, w=w.cv3) - self.cv4 = TFConv(2 * c_, c2, 1, 1, w=w.cv4) - self.bn = TFBN(w.bn) - self.act = lambda x: keras.activations.swish(x) - self.m = keras.Sequential([TFBottleneck(c_, c_, shortcut, g, e=1.0, w=w.m[j]) for j in range(n)]) - - def call(self, inputs): - y1 = 
self.cv3(self.m(self.cv1(inputs))) - y2 = self.cv2(inputs) - return self.cv4(self.act(self.bn(tf.concat((y1, y2), axis=3)))) - - -class TFC3(keras.layers.Layer): - # CSP Bottleneck with 3 convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None): - # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1) - self.cv2 = TFConv(c1, c_, 1, 1, w=w.cv2) - self.cv3 = TFConv(2 * c_, c2, 1, 1, w=w.cv3) - self.m = keras.Sequential([TFBottleneck(c_, c_, shortcut, g, e=1.0, w=w.m[j]) for j in range(n)]) - - def call(self, inputs): - return self.cv3(tf.concat((self.m(self.cv1(inputs)), self.cv2(inputs)), axis=3)) - - -class TFC3x(keras.layers.Layer): - # 3 module with cross-convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None): - # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1) - self.cv2 = TFConv(c1, c_, 1, 1, w=w.cv2) - self.cv3 = TFConv(2 * c_, c2, 1, 1, w=w.cv3) - self.m = keras.Sequential([ - TFCrossConv(c_, c_, k=3, s=1, g=g, e=1.0, shortcut=shortcut, w=w.m[j]) for j in range(n)]) - - def call(self, inputs): - return self.cv3(tf.concat((self.m(self.cv1(inputs)), self.cv2(inputs)), axis=3)) - - -class TFSPP(keras.layers.Layer): - # Spatial pyramid pooling layer used in YOLOv3-SPP - def __init__(self, c1, c2, k=(5, 9, 13), w=None): - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1) - self.cv2 = TFConv(c_ * (len(k) + 1), c2, 1, 1, w=w.cv2) - self.m = [keras.layers.MaxPool2D(pool_size=x, strides=1, padding='SAME') for x in k] - - def call(self, inputs): - x = self.cv1(inputs) - return self.cv2(tf.concat([x] + [m(x) for m in self.m], 3)) - - -class TFSPPF(keras.layers.Layer): - # Spatial pyramid pooling-Fast layer - def __init__(self, c1, c2, k=5, w=None): - super().__init__() - c_ = 
c1 // 2 # hidden channels - self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1) - self.cv2 = TFConv(c_ * 4, c2, 1, 1, w=w.cv2) - self.m = keras.layers.MaxPool2D(pool_size=k, strides=1, padding='SAME') - - def call(self, inputs): - x = self.cv1(inputs) - y1 = self.m(x) - y2 = self.m(y1) - return self.cv2(tf.concat([x, y1, y2, self.m(y2)], 3)) - - -class TFDetect(keras.layers.Layer): - # TF YOLOv5 Detect layer - def __init__(self, nc=80, anchors=(), ch=(), imgsz=(640, 640), w=None): # detection layer - super().__init__() - self.stride = tf.convert_to_tensor(w.stride.numpy(), dtype=tf.float32) - self.nc = nc # number of classes - self.no = nc + 5 # number of outputs per anchor - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [tf.zeros(1)] * self.nl # init grid - self.anchors = tf.convert_to_tensor(w.anchors.numpy(), dtype=tf.float32) - self.anchor_grid = tf.reshape(self.anchors * tf.reshape(self.stride, [self.nl, 1, 1]), [self.nl, 1, -1, 1, 2]) - self.m = [TFConv2d(x, self.no * self.na, 1, w=w.m[i]) for i, x in enumerate(ch)] - self.training = False # set to False after building model - self.imgsz = imgsz - for i in range(self.nl): - ny, nx = self.imgsz[0] // self.stride[i], self.imgsz[1] // self.stride[i] - self.grid[i] = self._make_grid(nx, ny) - - def call(self, inputs): - z = [] # inference output - x = [] - for i in range(self.nl): - x.append(self.m[i](inputs[i])) - # x(bs,20,20,255) to x(bs,3,20,20,85) - ny, nx = self.imgsz[0] // self.stride[i], self.imgsz[1] // self.stride[i] - x[i] = tf.reshape(x[i], [-1, ny * nx, self.na, self.no]) - - if not self.training: # inference - y = x[i] - grid = tf.transpose(self.grid[i], [0, 2, 1, 3]) - 0.5 - anchor_grid = tf.transpose(self.anchor_grid[i], [0, 2, 1, 3]) * 4 - xy = (tf.sigmoid(y[..., 0:2]) * 2 + grid) * self.stride[i] # xy - wh = tf.sigmoid(y[..., 2:4]) ** 2 * anchor_grid - # Normalize xywh to 0-1 to reduce calibration error - xy /= 
tf.constant([[self.imgsz[1], self.imgsz[0]]], dtype=tf.float32) - wh /= tf.constant([[self.imgsz[1], self.imgsz[0]]], dtype=tf.float32) - y = tf.concat([xy, wh, tf.sigmoid(y[..., 4:5 + self.nc]), y[..., 5 + self.nc:]], -1) - z.append(tf.reshape(y, [-1, self.na * ny * nx, self.no])) - - return tf.transpose(x, [0, 2, 1, 3]) if self.training else (tf.concat(z, 1),) - - @staticmethod - def _make_grid(nx=20, ny=20): - # yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) - # return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() - xv, yv = tf.meshgrid(tf.range(nx), tf.range(ny)) - return tf.cast(tf.reshape(tf.stack([xv, yv], 2), [1, 1, ny * nx, 2]), dtype=tf.float32) - - -class TFSegment(TFDetect): - # YOLOv5 Segment head for segmentation models - def __init__(self, nc=80, anchors=(), nm=32, npr=256, ch=(), imgsz=(640, 640), w=None): - super().__init__(nc, anchors, ch, imgsz, w) - self.nm = nm # number of masks - self.npr = npr # number of protos - self.no = 5 + nc + self.nm # number of outputs per anchor - self.m = [TFConv2d(x, self.no * self.na, 1, w=w.m[i]) for i, x in enumerate(ch)] # output conv - self.proto = TFProto(ch[0], self.npr, self.nm, w=w.proto) # protos - self.detect = TFDetect.call - - def call(self, x): - p = self.proto(x[0]) - # p = TFUpsample(None, scale_factor=4, mode='nearest')(self.proto(x[0])) # (optional) full-size protos - p = tf.transpose(p, [0, 3, 1, 2]) # from shape(1,160,160,32) to shape(1,32,160,160) - x = self.detect(self, x) - return (x, p) if self.training else (x[0], p) - - -class TFProto(keras.layers.Layer): - - def __init__(self, c1, c_=256, c2=32, w=None): - super().__init__() - self.cv1 = TFConv(c1, c_, k=3, w=w.cv1) - self.upsample = TFUpsample(None, scale_factor=2, mode='nearest') - self.cv2 = TFConv(c_, c_, k=3, w=w.cv2) - self.cv3 = TFConv(c_, c2, w=w.cv3) - - def call(self, inputs): - return self.cv3(self.cv2(self.upsample(self.cv1(inputs)))) - - -class TFUpsample(keras.layers.Layer): - # TF version of 
torch.nn.Upsample() - def __init__(self, size, scale_factor, mode, w=None): # warning: all arguments needed including 'w' - super().__init__() - assert scale_factor % 2 == 0, 'scale_factor must be multiple of 2' - self.upsample = lambda x: tf.image.resize(x, (x.shape[1] * scale_factor, x.shape[2] * scale_factor), mode) - # self.upsample = keras.layers.UpSampling2D(size=scale_factor, interpolation=mode) - # with default arguments: align_corners=False, half_pixel_centers=False - # self.upsample = lambda x: tf.raw_ops.ResizeNearestNeighbor(images=x, - # size=(x.shape[1] * 2, x.shape[2] * 2)) - - def call(self, inputs): - return self.upsample(inputs) - - -class TFConcat(keras.layers.Layer): - # TF version of torch.concat() - def __init__(self, dimension=1, w=None): - super().__init__() - assert dimension == 1, 'convert only NCHW to NHWC concat' - self.d = 3 - - def call(self, inputs): - return tf.concat(inputs, self.d) - - -def parse_model(d, ch, model, imgsz): # model_dict, input_channels(3) - LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}") - anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors - no = na * (nc + 5) # number of outputs = anchors * (classes + 5) - - layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out - for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args - m_str = m - m = eval(m) if isinstance(m, str) else m # eval strings - for j, a in enumerate(args): - try: - args[j] = eval(a) if isinstance(a, str) else a # eval strings - except NameError: - pass - - n = max(round(n * gd), 1) if n > 1 else n # depth gain - if m in [ - nn.Conv2d, Conv, DWConv, DWConvTranspose2d, Bottleneck, SPP, SPPF, MixConv2d, Focus, CrossConv, - BottleneckCSP, C3, C3x]: - c1, c2 = ch[f], args[0] - c2 = make_divisible(c2 * gw, 8) if c2 != no else c2 - - args = [c1, c2, 
*args[1:]] - if m in [BottleneckCSP, C3, C3x]: - args.insert(2, n) - n = 1 - elif m is nn.BatchNorm2d: - args = [ch[f]] - elif m is Concat: - c2 = sum(ch[-1 if x == -1 else x + 1] for x in f) - elif m in [Detect, Segment]: - args.append([ch[x + 1] for x in f]) - if isinstance(args[1], int): # number of anchors - args[1] = [list(range(args[1] * 2))] * len(f) - if m is Segment: - args[3] = make_divisible(args[3] * gw, 8) - args.append(imgsz) - else: - c2 = ch[f] - - tf_m = eval('TF' + m_str.replace('nn.', '')) - m_ = keras.Sequential([tf_m(*args, w=model.model[i][j]) for j in range(n)]) if n > 1 \ - else tf_m(*args, w=model.model[i]) # module - - torch_m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module - t = str(m)[8:-2].replace('__main__.', '') # module type - np = sum(x.numel() for x in torch_m_.parameters()) # number params - m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - LOGGER.info(f'{i:>3}{str(f):>18}{str(n):>3}{np:>10} {t:<40}{str(args):<30}') # print - save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist - layers.append(m_) - ch.append(c2) - return keras.Sequential(layers), sorted(save) - - -class TFModel: - # TF YOLOv5 model - def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, model=None, imgsz=(640, 640)): # model, channels, classes - super().__init__() - if isinstance(cfg, dict): - self.yaml = cfg # model dict - else: # is *.yaml - import yaml # for torch hub - self.yaml_file = Path(cfg).name - with open(cfg) as f: - self.yaml = yaml.load(f, Loader=yaml.FullLoader) # model dict - - # Define model - if nc and nc != self.yaml['nc']: - LOGGER.info(f"Overriding {cfg} nc={self.yaml['nc']} with nc={nc}") - self.yaml['nc'] = nc # override yaml value - self.model, self.savelist = parse_model(deepcopy(self.yaml), ch=[ch], model=model, imgsz=imgsz) - - def predict(self, - inputs, - tf_nms=False, - agnostic_nms=False, - topk_per_class=100, - 
topk_all=100, - iou_thres=0.45, - conf_thres=0.25): - y = [] # outputs - x = inputs - for m in self.model.layers: - if m.f != -1: # if not from previous layer - x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - - x = m(x) # run - y.append(x if m.i in self.savelist else None) # save output - - # Add TensorFlow NMS - if tf_nms: - boxes = self._xywh2xyxy(x[0][..., :4]) - probs = x[0][:, :, 4:5] - classes = x[0][:, :, 5:] - scores = probs * classes - if agnostic_nms: - nms = AgnosticNMS()((boxes, classes, scores), topk_all, iou_thres, conf_thres) - else: - boxes = tf.expand_dims(boxes, 2) - nms = tf.image.combined_non_max_suppression(boxes, - scores, - topk_per_class, - topk_all, - iou_thres, - conf_thres, - clip_boxes=False) - return (nms,) - return x # output [1,6300,85] = [xywh, conf, class0, class1, ...] - # x = x[0] # [x(1,6300,85), ...] to x(6300,85) - # xywh = x[..., :4] # x(6300,4) boxes - # conf = x[..., 4:5] # x(6300,1) confidences - # cls = tf.reshape(tf.cast(tf.argmax(x[..., 5:], axis=1), tf.float32), (-1, 1)) # x(6300,1) classes - # return tf.concat([conf, cls, xywh], 1) - - @staticmethod - def _xywh2xyxy(xywh): - # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right - x, y, w, h = tf.split(xywh, num_or_size_splits=4, axis=-1) - return tf.concat([x - w / 2, y - h / 2, x + w / 2, y + h / 2], axis=-1) - - -class AgnosticNMS(keras.layers.Layer): - # TF Agnostic NMS - def call(self, input, topk_all, iou_thres, conf_thres): - # wrap map_fn to avoid TypeSpec related error https://stackoverflow.com/a/65809989/3036450 - return tf.map_fn(lambda x: self._nms(x, topk_all, iou_thres, conf_thres), - input, - fn_output_signature=(tf.float32, tf.float32, tf.float32, tf.int32), - name='agnostic_nms') - - @staticmethod - def _nms(x, topk_all=100, iou_thres=0.45, conf_thres=0.25): # agnostic NMS - boxes, classes, scores = x - class_inds = tf.cast(tf.argmax(classes, axis=-1), 
tf.float32) - scores_inp = tf.reduce_max(scores, -1) - selected_inds = tf.image.non_max_suppression(boxes, - scores_inp, - max_output_size=topk_all, - iou_threshold=iou_thres, - score_threshold=conf_thres) - selected_boxes = tf.gather(boxes, selected_inds) - padded_boxes = tf.pad(selected_boxes, - paddings=[[0, topk_all - tf.shape(selected_boxes)[0]], [0, 0]], - mode='CONSTANT', - constant_values=0.0) - selected_scores = tf.gather(scores_inp, selected_inds) - padded_scores = tf.pad(selected_scores, - paddings=[[0, topk_all - tf.shape(selected_boxes)[0]]], - mode='CONSTANT', - constant_values=-1.0) - selected_classes = tf.gather(class_inds, selected_inds) - padded_classes = tf.pad(selected_classes, - paddings=[[0, topk_all - tf.shape(selected_boxes)[0]]], - mode='CONSTANT', - constant_values=-1.0) - valid_detections = tf.shape(selected_inds)[0] - return padded_boxes, padded_scores, padded_classes, valid_detections - - -def activations(act=nn.SiLU): - # Returns TF activation from input PyTorch activation - if isinstance(act, nn.LeakyReLU): - return lambda x: keras.activations.relu(x, alpha=0.1) - elif isinstance(act, nn.Hardswish): - return lambda x: x * tf.nn.relu6(x + 3) * 0.166666667 - elif isinstance(act, (nn.SiLU, SiLU)): - return lambda x: keras.activations.swish(x) - else: - raise Exception(f'no matching TensorFlow activation found for PyTorch activation {act}') - - -def representative_dataset_gen(dataset, ncalib=100): - # Representative dataset generator for use with converter.representative_dataset, returns a generator of np arrays - for n, (path, img, im0s, vid_cap, string) in enumerate(dataset): - im = np.transpose(img, [1, 2, 0]) - im = np.expand_dims(im, axis=0).astype(np.float32) - im /= 255 - yield [im] - if n >= ncalib: - break - - -def run( - weights=ROOT / 'yolov5s.pt', # weights path - imgsz=(640, 640), # inference size h,w - batch_size=1, # batch size - dynamic=False, # dynamic batch size -): - # PyTorch model - im = torch.zeros((batch_size, 3, 
*imgsz)) # BCHW image - model = attempt_load(weights, device=torch.device('cpu'), inplace=True, fuse=False) - _ = model(im) # inference - model.info() - - # TensorFlow model - im = tf.zeros((batch_size, *imgsz, 3)) # BHWC image - tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz) - _ = tf_model.predict(im) # inference - - # Keras model - im = keras.Input(shape=(*imgsz, 3), batch_size=None if dynamic else batch_size) - keras_model = keras.Model(inputs=im, outputs=tf_model.predict(im)) - keras_model.summary() - - LOGGER.info('PyTorch, TensorFlow and Keras models successfully verified.\nUse export.py for TF model export.') - - -def parse_opt(): - parser = argparse.ArgumentParser() - parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='weights path') - parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w') - parser.add_argument('--batch-size', type=int, default=1, help='batch size') - parser.add_argument('--dynamic', action='store_true', help='dynamic batch size') - opt = parser.parse_args() - opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand - print_args(vars(opt)) - return opt - - -def main(opt): - run(**vars(opt)) - - -if __name__ == '__main__': - opt = parse_opt() - main(opt) diff --git a/spaces/xp3857/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/batchnorm.py b/spaces/xp3857/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/batchnorm.py deleted file mode 100644 index bf8d7a7325b474771a11a137053971fd40426079..0000000000000000000000000000000000000000 --- a/spaces/xp3857/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/batchnorm.py +++ /dev/null @@ -1,412 +0,0 @@ -# -*- coding: utf-8 -*- -# File : batchnorm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. 
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import collections -import contextlib - -import torch -import torch.nn.functional as F - -from torch.nn.modules.batchnorm import _BatchNorm - -try: - from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast -except ImportError: - ReduceAddCoalesced = Broadcast = None - -try: - from jactorch.parallel.comm import SyncMaster - from jactorch.parallel.data_parallel import JacDataParallel as DataParallelWithCallback -except ImportError: - from .comm import SyncMaster - from .replicate import DataParallelWithCallback - -__all__ = [ - 'set_sbn_eps_mode', - 'SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d', - 'patch_sync_batchnorm', 'convert_model' -] - - -SBN_EPS_MODE = 'clamp' - - -def set_sbn_eps_mode(mode): - global SBN_EPS_MODE - assert mode in ('clamp', 'plus') - SBN_EPS_MODE = mode - - -def _sum_ft(tensor): - """sum over the first and last dimention""" - return tensor.sum(dim=0).sum(dim=-1) - - -def _unsqueeze_ft(tensor): - """add new dimensions at the front and the tail""" - return tensor.unsqueeze(0).unsqueeze(-1) - - -_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size']) -_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std']) - - -class _SynchronizedBatchNorm(_BatchNorm): - def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True, track_running_stats=True): - assert ReduceAddCoalesced is not None, 'Can not use Synchronized Batch Normalization without CUDA support.' 
- - super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine, - track_running_stats=track_running_stats) - - if not self.track_running_stats: - import warnings - warnings.warn('track_running_stats=False is not supported by the SynchronizedBatchNorm.') - - self._sync_master = SyncMaster(self._data_parallel_master) - - self._is_parallel = False - self._parallel_id = None - self._slave_pipe = None - - def forward(self, input): - # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation. - if not (self._is_parallel and self.training): - return F.batch_norm( - input, self.running_mean, self.running_var, self.weight, self.bias, - self.training, self.momentum, self.eps) - - # Resize the input to (B, C, -1). - input_shape = input.size() - assert input.size(1) == self.num_features, 'Channel size mismatch: got {}, expect {}.'.format(input.size(1), self.num_features) - input = input.view(input.size(0), self.num_features, -1) - - # Compute the sum and square-sum. - sum_size = input.size(0) * input.size(2) - input_sum = _sum_ft(input) - input_ssum = _sum_ft(input ** 2) - - # Reduce-and-broadcast the statistics. - if self._parallel_id == 0: - mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size)) - else: - mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size)) - - # Compute the output. - if self.affine: - # MJY:: Fuse the multiplication for speed. - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias) - else: - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std) - - # Reshape it. - return output.view(input_shape) - - def __data_parallel_replicate__(self, ctx, copy_id): - self._is_parallel = True - self._parallel_id = copy_id - - # parallel_id == 0 means master device. 
- if self._parallel_id == 0: - ctx.sync_master = self._sync_master - else: - self._slave_pipe = ctx.sync_master.register_slave(copy_id) - - def _data_parallel_master(self, intermediates): - """Reduce the sum and square-sum, compute the statistics, and broadcast it.""" - - # Always using same "device order" makes the ReduceAdd operation faster. - # Thanks to:: Tete Xiao (http://tetexiao.com/) - intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device()) - - to_reduce = [i[1][:2] for i in intermediates] - to_reduce = [j for i in to_reduce for j in i] # flatten - target_gpus = [i[1].sum.get_device() for i in intermediates] - - sum_size = sum([i[1].sum_size for i in intermediates]) - sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce) - mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size) - - broadcasted = Broadcast.apply(target_gpus, mean, inv_std) - - outputs = [] - for i, rec in enumerate(intermediates): - outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2]))) - - return outputs - - def _compute_mean_std(self, sum_, ssum, size): - """Compute the mean and standard-deviation with sum and square-sum. This method - also maintains the moving average on the master device.""" - assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.' 
- mean = sum_ / size - sumvar = ssum - sum_ * mean - unbias_var = sumvar / (size - 1) - bias_var = sumvar / size - - if hasattr(torch, 'no_grad'): - with torch.no_grad(): - self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data - self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data - else: - self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data - self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data - - if SBN_EPS_MODE == 'clamp': - return mean, bias_var.clamp(self.eps) ** -0.5 - elif SBN_EPS_MODE == 'plus': - return mean, (bias_var + self.eps) ** -0.5 - else: - raise ValueError('Unknown EPS mode: {}.'.format(SBN_EPS_MODE)) - - -class SynchronizedBatchNorm1d(_SynchronizedBatchNorm): - r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a - mini-batch. - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm1d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. 
The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm - - Args: - num_features: num_features from an expected input of size - `batch_size x num_features [x width]` - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. Default: ``True`` - - Shape:: - - Input: :math:`(N, C)` or :math:`(N, C, L)` - - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 2 and input.dim() != 3: - raise ValueError('expected 2D or 3D input (got {}D input)' - .format(input.dim())) - - -class SynchronizedBatchNorm2d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch - of 3d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm2d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. 
- Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. Default: ``True`` - - Shape:: - - Input: :math:`(N, C, H, W)` - - Output: :math:`(N, C, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 4: - raise ValueError('expected 4D input (got {}D input)' - .format(input.dim())) - - -class SynchronizedBatchNorm3d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch - of 4d inputs - - .. 
math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm3d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm - or Spatio-temporal BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x depth x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. 
Default: ``True`` - - Shape:: - - Input: :math:`(N, C, D, H, W)` - - Output: :math:`(N, C, D, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 5: - raise ValueError('expected 5D input (got {}D input)' - .format(input.dim())) - - -@contextlib.contextmanager -def patch_sync_batchnorm(): - import torch.nn as nn - - backup = nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d - - nn.BatchNorm1d = SynchronizedBatchNorm1d - nn.BatchNorm2d = SynchronizedBatchNorm2d - nn.BatchNorm3d = SynchronizedBatchNorm3d - - yield - - nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d = backup - - -def convert_model(module): - """Traverse the input module and its child recursively - and replace all instance of torch.nn.modules.batchnorm.BatchNorm*N*d - to SynchronizedBatchNorm*N*d - - Args: - module: the input module needs to be convert to SyncBN model - - Examples: - >>> import torch.nn as nn - >>> import torchvision - >>> # m is a standard pytorch model - >>> m = torchvision.models.resnet18(True) - >>> m = nn.DataParallel(m) - >>> # after convert, m is using SyncBN - >>> m = convert_model(m) - """ - if isinstance(module, torch.nn.DataParallel): - mod = module.module - mod = convert_model(mod) - mod = DataParallelWithCallback(mod, device_ids=module.device_ids) - return mod - - mod = module - for pth_module, sync_module in zip([torch.nn.modules.batchnorm.BatchNorm1d, - torch.nn.modules.batchnorm.BatchNorm2d, - torch.nn.modules.batchnorm.BatchNorm3d], - [SynchronizedBatchNorm1d, - SynchronizedBatchNorm2d, - SynchronizedBatchNorm3d]): - if isinstance(module, pth_module): - mod = sync_module(module.num_features, module.eps, module.momentum, module.affine) - mod.running_mean = 
module.running_mean - mod.running_var = module.running_var - if module.affine: - mod.weight.data = module.weight.data.clone().detach() - mod.bias.data = module.bias.data.clone().detach() - - for name, child in module.named_children(): - mod.add_module(name, convert_model(child)) - - return mod diff --git a/spaces/xu1998hz/sescore_english_mt/description.md b/spaces/xu1998hz/sescore_english_mt/description.md deleted file mode 100644 index acc8b9f730fcdc4869da324c54f2db6c57f1983c..0000000000000000000000000000000000000000 --- a/spaces/xu1998hz/sescore_english_mt/description.md +++ /dev/null @@ -1,64 +0,0 @@ -## Installation and usage - -```bash -pip install -r requirements.txt -``` - -Minimal example (evaluating English text generation) -```python -import evaluate -sescore = evaluate.load("xu1998hz/sescore") -# for different versions of SEScore -# sescore = evaluate.load("xu1998hz/sescore_english_mt") -> for English at Machine Translation -# sescore = evaluate.load("xu1998hz/sescore_german_mt") -> for German at Machine Translation -# sescore = evaluate.load("xu1998hz/sescore_english_webnlg") -> for webnlg data-to-text -# sescore = evaluate.load("xu1998hz/sescore_english_coco") -> for image caption -score = sescore.compute( - references=['sescore is a simple but effective next-generation text evaluation metric'], - predictions=['sescore is simple effective text evaluation metric for next generation'] -) -``` - -*SEScore* compares a list of references (gold translation/generated output examples) with a same-length list of candidate generated samples. Currently, the output range is learned and scores are most useful in relative ranking scenarios rather than absolute comparisons. We are producing a series of rescaling options to make absolute SEScore-based scaling more effective. 
- - -### Available pre-trained models - -Currently, the following language/model pairs are available: - -| Language | pretrained data | pretrained model link | -|----------|-----------------|-----------------------| -| English | MT | [xu1998hz/sescore_english_mt](https://huggingface.co/xu1998hz/sescore_english_mt) | -| German | MT | [xu1998hz/sescore_german_mt](https://huggingface.co/xu1998hz/sescore_german_mt) | -| English | webNLG17 | [xu1998hz/sescore_english_webnlg17](https://huggingface.co/xu1998hz/sescore_english_webnlg17) | -| English | CoCo captions | [xu1998hz/sescore_english_coco](https://huggingface.co/xu1998hz/sescore_english_coco) | - - -Please contact repo maintainer Wenda Xu to add your models! - -## Limitations - -*SEScore* is trained on synthetic data in-domain. -Although this data is generated to simulate user-relevant errors like deletion and spurious insertion, it may be limited in its ability to simulate humanlike errors. -Model applicability is domain-specific (e.g., CoCo caption-trained model will be better for captioning than MT-trained). - -We are in the process of producing and benchmarking general language-level *SEScore* variants. - -## Citation - -If you find our work useful, please cite the following: - -```bibtex -@inproceedings{xu-etal-2022-not, - title={Not All Errors are Equal: Learning Text Generation Metrics using Stratified Error Synthesis}, - author={Xu, Wenda and Tuan, Yi-lin and Lu, Yujie and Saxon, Michael and Li, Lei and Wang, William Yang}, - booktitle ={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing}, - month={dec}, - year={2022}, - url={https://arxiv.org/abs/2210.05035} -} -``` - -## Acknowledgements - -The work of the [COMET](https://github.com/Unbabel/COMET) maintainers at [Unbabel](https://duckduckgo.com/?t=ffab&q=unbabel&ia=web) has been instrumental in producing SEScore. 
\ No newline at end of file diff --git a/spaces/xxbb/VITS-Umamusume-voice-synthesizer/monotonic_align/core.c b/spaces/xxbb/VITS-Umamusume-voice-synthesizer/monotonic_align/core.c deleted file mode 100644 index 5631d20a9a00db29e143a6e8e4e5c378d6bb850a..0000000000000000000000000000000000000000 --- a/spaces/xxbb/VITS-Umamusume-voice-synthesizer/monotonic_align/core.c +++ /dev/null @@ -1,21299 +0,0 @@ -/* Generated by Cython 0.29.21 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "name": "monotonic_align.core", - "sources": [ - "core.pyx" - ] - }, - "module_name": "monotonic_align.core" -} -END: Cython Metadata */ - -#define PY_SSIZE_T_CLEAN -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.6+ or Python 3.3+. -#else -#define CYTHON_ABI "0_29_21" -#define CYTHON_HEX_VERSION 0x001D15F0 -#define CYTHON_FUTURE_DIVISION 0 -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #if PY_VERSION_HEX >= 0x02070000 - #define HAVE_LONG_LONG - #endif -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#ifdef PYPY_VERSION - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if 
PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#elif defined(PYSTON_VERSION) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS 
- #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #elif !defined(CYTHON_USE_PYTYPE_LOOKUP) - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #elif !defined(CYTHON_USE_PYLONG_INTERNALS) - #define CYTHON_USE_PYLONG_INTERNALS 1 - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #ifndef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 
1 - #endif - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000) - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1) - #endif - #ifndef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1) - #endif - #ifndef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3) - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if CYTHON_USE_PYLONG_INTERNALS - #include "longintrepr.h" - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR -# if defined(__cplusplus) - template<typename T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED
CYTHON_UNUSED -# endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int32 uint32_t; - #endif - #endif -#else - #include <stdint.h> -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) && __cplusplus >= 201103L - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #elif __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__ ) && defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) - #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) - #define Py_OptimizeFlag 0 -#endif -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) - #define __Pyx_DefaultClassType PyClass_Type -#else - 
#define __Pyx_BUILTIN_MODULE_NAME "builtins" -#if PY_VERSION_HEX >= 0x030800A4 && PY_VERSION_HEX < 0x030800B2 - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif - #define __Pyx_DefaultClassType PyType_Type -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_FAST_PYCCALL -#define __Pyx_PyFastCFunction_Check(func)\ - ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))))) -#else -#define __Pyx_PyFastCFunction_Check(func) 0 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1 - #define PyMem_RawMalloc(n) PyMem_Malloc(n) - #define 
PyMem_RawRealloc(p, n) PyMem_Realloc(p, n) - #define PyMem_RawFree(p) PyMem_Free(p) -#endif -#if CYTHON_COMPILING_IN_PYSTON - #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} -static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? 
PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -#else -#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name) -#endif -#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) - #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) - #if defined(PyUnicode_IS_READY) && defined(PyUnicode_GET_SIZE) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? 
PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111) - #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? 
PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) -#else - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode 
PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define PyNumber_Int PyNumber_Long -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBoolObject PyLongObject -#endif -#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY - #ifndef PyUnicode_InternFromString - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) - #endif -#endif -#if PY_VERSION_HEX < 0x030200A4 - typedef long Py_hash_t; - #define __Pyx_PyInt_FromHash_t PyInt_FromLong - #define __Pyx_PyInt_AsHash_t PyInt_AsLong -#else - #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t - #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyMethod_New(func, self, klass) ((self) ? 
((void)(klass), PyMethod_New(func, self)) : __Pyx_NewRef(func)) -#else - #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass) -#endif -#if CYTHON_USE_ASYNC_SLOTS - #if PY_VERSION_HEX >= 0x030500B1 - #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods - #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) - #else - #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) - #endif -#else - #define __Pyx_PyType_AsAsync(obj) NULL -#endif -#ifndef __Pyx_PyAsyncMethodsStruct - typedef struct { - unaryfunc am_await; - unaryfunc am_aiter; - unaryfunc am_anext; - } __Pyx_PyAsyncMethodsStruct; -#endif - -#if defined(WIN32) || defined(MS_WINDOWS) - #define _USE_MATH_DEFINES -#endif -#include <math.h> -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { - float value; - memset(&value, 0xFF, sizeof(value)); - return value; -} -#endif -#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#define __PYX_MARK_ERR_POS(f_index, lineno) \ - { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; } -#define __PYX_ERR(f_index, lineno, Ln_error) \ - { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifndef __PYX_EXTERN_C - #ifdef __cplusplus - #define __PYX_EXTERN_C extern "C" - #else - #define __PYX_EXTERN_C extern - #endif -#endif - -#define __PYX_HAVE__monotonic_align__core -#define __PYX_HAVE_API__monotonic_align__core -/* Early includes */ -#include "pythread.h" -#include -#include -#include -#include "pystate.h" -#ifdef _OPENMP -#include <omp.h> -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; - const char is_unicode; const char is_str; 
const char intern; } __Pyx_StringTabEntry; - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8) -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ - (sizeof(type) < sizeof(Py_ssize_t)) ||\ - (sizeof(type) > sizeof(Py_ssize_t) &&\ - likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX) &&\ - (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ - v == (type)PY_SSIZE_T_MIN))) ||\ - (sizeof(type) == sizeof(Py_ssize_t) &&\ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX))) ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L - #include <cstdlib> - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) - #define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ? 
-value : value) -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define 
__Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) { - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? 
__Pyx_NewRef(x) : PyNumber_Float(x)) -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* 
__PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} -#endif -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ -static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } - -static PyObject *__pyx_m = NULL; -static PyObject *__pyx_d; -static PyObject *__pyx_b; -static PyObject *__pyx_cython_runtime = NULL; -static PyObject *__pyx_empty_tuple; -static PyObject *__pyx_empty_bytes; -static PyObject *__pyx_empty_unicode; -static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * __pyx_cfilenm= __FILE__; -static const char *__pyx_filename; - - -static const char *__pyx_f[] = { - "core.pyx", - "stringsource", -}; -/* NoFastGil.proto */ -#define __Pyx_PyGILState_Ensure PyGILState_Ensure -#define __Pyx_PyGILState_Release PyGILState_Release -#define __Pyx_FastGIL_Remember() -#define __Pyx_FastGIL_Forget() -#define __Pyx_FastGilFuncInit() - -/* MemviewSliceStruct.proto */ -struct __pyx_memoryview_obj; -typedef struct { - struct __pyx_memoryview_obj *memview; - 
char *data; - Py_ssize_t shape[8]; - Py_ssize_t strides[8]; - Py_ssize_t suboffsets[8]; -} __Pyx_memviewslice; -#define __Pyx_MemoryView_Len(m) (m.shape[0]) - -/* Atomics.proto */ -#include <pythread.h> -#ifndef CYTHON_ATOMICS - #define CYTHON_ATOMICS 1 -#endif -#define __pyx_atomic_int_type int -#if CYTHON_ATOMICS && __GNUC__ >= 4 && (__GNUC_MINOR__ > 1 ||\ - (__GNUC_MINOR__ == 1 && __GNUC_PATCHLEVEL >= 2)) &&\ - !defined(__i386__) - #define __pyx_atomic_incr_aligned(value, lock) __sync_fetch_and_add(value, 1) - #define __pyx_atomic_decr_aligned(value, lock) __sync_fetch_and_sub(value, 1) - #ifdef __PYX_DEBUG_ATOMICS - #warning "Using GNU atomics" - #endif -#elif CYTHON_ATOMICS && defined(_MSC_VER) && 0 - #include <Windows.h> - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type LONG - #define __pyx_atomic_incr_aligned(value, lock) InterlockedIncrement(value) - #define __pyx_atomic_decr_aligned(value, lock) InterlockedDecrement(value) - #ifdef __PYX_DEBUG_ATOMICS - #pragma message ("Using MSVC atomics") - #endif -#elif CYTHON_ATOMICS && (defined(__ICC) || defined(__INTEL_COMPILER)) && 0 - #define __pyx_atomic_incr_aligned(value, lock) _InterlockedIncrement(value) - #define __pyx_atomic_decr_aligned(value, lock) _InterlockedDecrement(value) - #ifdef __PYX_DEBUG_ATOMICS - #warning "Using Intel atomics" - #endif -#else - #undef CYTHON_ATOMICS - #define CYTHON_ATOMICS 0 - #ifdef __PYX_DEBUG_ATOMICS - #warning "Not using atomics" - #endif -#endif -typedef volatile __pyx_atomic_int_type __pyx_atomic_int; -#if CYTHON_ATOMICS - #define __pyx_add_acquisition_count(memview)\ - __pyx_atomic_incr_aligned(__pyx_get_slice_count_pointer(memview), memview->lock) - #define __pyx_sub_acquisition_count(memview)\ - __pyx_atomic_decr_aligned(__pyx_get_slice_count_pointer(memview), memview->lock) -#else - #define __pyx_add_acquisition_count(memview)\ - __pyx_add_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) - #define __pyx_sub_acquisition_count(memview)\ - 
__pyx_sub_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) -#endif - -/* ForceInitThreads.proto */ -#ifndef __PYX_FORCE_INIT_THREADS - #define __PYX_FORCE_INIT_THREADS 0 -#endif - -/* BufferFormatStructs.proto */ -#define IS_UNSIGNED(type) (((type) -1) > 0) -struct __Pyx_StructField_; -#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0) -typedef struct { - const char* name; - struct __Pyx_StructField_* fields; - size_t size; - size_t arraysize[8]; - int ndim; - char typegroup; - char is_unsigned; - int flags; -} __Pyx_TypeInfo; -typedef struct __Pyx_StructField_ { - __Pyx_TypeInfo* type; - const char* name; - size_t offset; -} __Pyx_StructField; -typedef struct { - __Pyx_StructField* field; - size_t parent_offset; -} __Pyx_BufFmt_StackElem; -typedef struct { - __Pyx_StructField root; - __Pyx_BufFmt_StackElem* head; - size_t fmt_offset; - size_t new_count, enc_count; - size_t struct_alignment; - int is_complex; - char enc_type; - char new_packmode; - char enc_packmode; - char is_valid_array; -} __Pyx_BufFmt_Context; - - -/*--- Type declarations ---*/ -struct __pyx_array_obj; -struct __pyx_MemviewEnum_obj; -struct __pyx_memoryview_obj; -struct __pyx_memoryviewslice_obj; -struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each; - -/* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ -struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each { - int __pyx_n; - float max_neg_val; -}; - -/* "View.MemoryView":105 - * - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ -struct __pyx_array_obj { - PyObject_HEAD - struct __pyx_vtabstruct_array *__pyx_vtab; - char *data; - Py_ssize_t len; - char *format; - int ndim; - Py_ssize_t *_shape; - Py_ssize_t *_strides; - Py_ssize_t itemsize; - PyObject 
*mode; - PyObject *_format; - void (*callback_free_data)(void *); - int free_data; - int dtype_is_object; -}; - - -/* "View.MemoryView":279 - * - * @cname('__pyx_MemviewEnum') - * cdef class Enum(object): # <<<<<<<<<<<<<< - * cdef object name - * def __init__(self, name): - */ -struct __pyx_MemviewEnum_obj { - PyObject_HEAD - PyObject *name; -}; - - -/* "View.MemoryView":330 - * - * @cname('__pyx_memoryview') - * cdef class memoryview(object): # <<<<<<<<<<<<<< - * - * cdef object obj - */ -struct __pyx_memoryview_obj { - PyObject_HEAD - struct __pyx_vtabstruct_memoryview *__pyx_vtab; - PyObject *obj; - PyObject *_size; - PyObject *_array_interface; - PyThread_type_lock lock; - __pyx_atomic_int acquisition_count[2]; - __pyx_atomic_int *acquisition_count_aligned_p; - Py_buffer view; - int flags; - int dtype_is_object; - __Pyx_TypeInfo *typeinfo; -}; - - -/* "View.MemoryView":965 - * - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ -struct __pyx_memoryviewslice_obj { - struct __pyx_memoryview_obj __pyx_base; - __Pyx_memviewslice from_slice; - PyObject *from_object; - PyObject *(*to_object_func)(char *); - int (*to_dtype_func)(char *, PyObject *); -}; - - - -/* "View.MemoryView":105 - * - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ - -struct __pyx_vtabstruct_array { - PyObject *(*get_memview)(struct __pyx_array_obj *); -}; -static struct __pyx_vtabstruct_array *__pyx_vtabptr_array; - - -/* "View.MemoryView":330 - * - * @cname('__pyx_memoryview') - * cdef class memoryview(object): # <<<<<<<<<<<<<< - * - * cdef object obj - */ - -struct __pyx_vtabstruct_memoryview { - char *(*get_item_pointer)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*is_slice)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_slice_assignment)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject 
*(*setitem_slice_assign_scalar)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_indexed)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject *(*convert_item_to_object)(struct __pyx_memoryview_obj *, char *); - PyObject *(*assign_item_from_object)(struct __pyx_memoryview_obj *, char *, PyObject *); -}; -static struct __pyx_vtabstruct_memoryview *__pyx_vtabptr_memoryview; - - -/* "View.MemoryView":965 - * - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ - -struct __pyx_vtabstruct__memoryviewslice { - struct __pyx_vtabstruct_memoryview __pyx_base; -}; -static struct __pyx_vtabstruct__memoryviewslice *__pyx_vtabptr__memoryviewslice; - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, int); - void (*DECREF)(void*, PyObject*, int); - void (*GOTREF)(void*, PyObject*, int); - void (*GIVEREF)(void*, PyObject*, int); - void* (*SetupContext)(const char*, int, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) -#endif 
- #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* MemviewSliceInit.proto */ 
-#define __Pyx_BUF_MAX_NDIMS %(BUF_MAX_NDIMS)d -#define __Pyx_MEMVIEW_DIRECT 1 -#define __Pyx_MEMVIEW_PTR 2 -#define __Pyx_MEMVIEW_FULL 4 -#define __Pyx_MEMVIEW_CONTIG 8 -#define __Pyx_MEMVIEW_STRIDED 16 -#define __Pyx_MEMVIEW_FOLLOW 32 -#define __Pyx_IS_C_CONTIG 1 -#define __Pyx_IS_F_CONTIG 2 -static int __Pyx_init_memviewslice( - struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference); -static CYTHON_INLINE int __pyx_add_acquisition_count_locked( - __pyx_atomic_int *acquisition_count, PyThread_type_lock lock); -static CYTHON_INLINE int __pyx_sub_acquisition_count_locked( - __pyx_atomic_int *acquisition_count, PyThread_type_lock lock); -#define __pyx_get_slice_count_pointer(memview) (memview->acquisition_count_aligned_p) -#define __pyx_get_slice_count(memview) (*__pyx_get_slice_count_pointer(memview)) -#define __PYX_INC_MEMVIEW(slice, have_gil) __Pyx_INC_MEMVIEW(slice, have_gil, __LINE__) -#define __PYX_XDEC_MEMVIEW(slice, have_gil) __Pyx_XDEC_MEMVIEW(slice, have_gil, __LINE__) -static CYTHON_INLINE void __Pyx_INC_MEMVIEW(__Pyx_memviewslice *, int, int); -static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *, int, int); - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ - const char* function_name); - -/* None.proto */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname); - -/* ArgTypeTest.proto */ -#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\ - ((likely((Py_TYPE(obj) == type) | (none_allowed && (obj == Py_None)))) ? 
1 :\ - __Pyx__ArgTypeTest(obj, type, name, exact)) -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact); - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, 
value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* PyCFunctionFastCall.proto */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs); -#else -#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL) -#endif - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#else -#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs) -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" - #define __Pxy_PyFrame_Initialize_Offsets()\ - ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif - -/* PyObjectCall2Args.proto */ -static 
CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2);
-
-/* PyObjectCallMethO.proto */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg);
-#endif
-
-/* PyObjectCallOneArg.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg);
-
-/* IncludeStringH.proto */
-#include <string.h>
-
-/* BytesEquals.proto */
-static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals);
-
-/* UnicodeEquals.proto */
-static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals);
-
-/* StrEquals.proto */
-#if PY_MAJOR_VERSION >= 3
-#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals
-#else
-#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals
-#endif
-
-/* None.proto */
-static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t);
-
-/* UnaryNegOverflows.proto */
-#define UNARY_NEG_WOULD_OVERFLOW(x)\
- (((x) < 0) & ((unsigned long)(x) == 0-(unsigned long)(x)))
-
-static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
-static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *); /*proto*/
-/* GetAttr.proto */
-static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *);
-
-/* GetItemInt.proto */
-#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
- (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
- __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\
- (is_list ? 
(PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ - __Pyx_GetItemInt_Generic(o, to_py_func(i)))) -#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound, int boundscheck); - -/* ObjectGetItem.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key); -#else -#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key) -#endif - -/* decode_c_string_utf16.proto */ -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = 0; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = -1; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = 1; - 
return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} - -/* decode_c_string.proto */ -static CYTHON_INLINE PyObject* __Pyx_decode_c_string( - const char* cstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)); - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* GetAttr3.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *, PyObject *, PyObject *); - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, 
DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -#define __Pyx_GetModuleGlobalNameUncached(var, name) {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* RaiseNoneIterError.proto */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void); - -/* ExtTypeTest.proto */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, 
PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) -#endif - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* SwapException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) 
|| PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -/* ListCompAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len)) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x) -#endif - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? 
PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2)) -#endif - -/* ListExtend.proto */ -static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) { -#if CYTHON_COMPILING_IN_CPYTHON - PyObject* none = _PyList_Extend((PyListObject*)L, v); - if (unlikely(!none)) - return -1; - Py_DECREF(none); - return 0; -#else - return PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v); -#endif -} - -/* ListAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_PyList_Append(L,x) PyList_Append(L,x) -#endif - -/* None.proto */ -static CYTHON_INLINE long __Pyx_div_long(long, long); - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* HasAttr.proto */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *, PyObject *); - -/* PyObject_GenericGetAttrNoDict.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr -#endif - -/* PyObject_GenericGetAttr.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr -#endif - -/* SetVTable.proto */ -static int __Pyx_SetVtable(PyObject *dict, void *vtable); - -/* PyObjectGetAttrStrNoError.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* 
attr_name); - -/* SetupReduce.proto */ -static int __Pyx_setup_reduce(PyObject* type_obj); - -/* CLineInTraceback.proto */ -#ifdef CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -#if PY_MAJOR_VERSION < 3 - static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags); - static void __Pyx_ReleaseBuffer(Py_buffer *view); -#else - #define __Pyx_GetBuffer PyObject_GetBuffer - #define __Pyx_ReleaseBuffer PyBuffer_Release -#endif - - -/* BufferStructDeclare.proto */ -typedef struct { - Py_ssize_t shape, strides, suboffsets; -} __Pyx_Buf_DimInfo; -typedef struct { - size_t refcount; - Py_buffer pybuffer; -} __Pyx_Buffer; -typedef struct { - __Pyx_Buffer *rcbuffer; - char *data; - __Pyx_Buf_DimInfo diminfo[8]; -} __Pyx_LocalBuf_ND; - -/* MemviewSliceIsContig.proto */ -static int __pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim); - -/* OverlappingSlices.proto */ -static int __pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize); - -/* Capsule.proto */ -static CYTHON_INLINE PyObject *__pyx_capsule_create(void *p, const char *sig); - -/* IsLittleEndian.proto */ 
-static CYTHON_INLINE int __Pyx_Is_Little_Endian(void); - -/* BufferFormatCheck.proto */ -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts); -static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type); - -/* TypeInfoCompare.proto */ -static int __pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b); - -/* MemviewSliceValidateAndInit.proto */ -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *, int writable_flag); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* MemviewSliceCopyTemplate.proto */ -static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object); - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *); - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(void); - -/* InitStrings.proto */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); - -static PyObject 
*__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self); /* proto*/ -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto*/ -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ - -/* Module declarations from 'cython.view' */ - -/* Module declarations from 'cython' */ - -/* Module declarations from 'monotonic_align.core' */ -static PyTypeObject *__pyx_array_type = 0; -static PyTypeObject *__pyx_MemviewEnum_type = 0; -static PyTypeObject *__pyx_memoryview_type = 0; -static PyTypeObject *__pyx_memoryviewslice_type = 0; -static PyObject *generic = 0; -static PyObject *strided = 0; -static PyObject *indirect = 0; -static PyObject *contiguous = 0; -static PyObject *indirect_contiguous = 0; -static int __pyx_memoryview_thread_locks_used; 
-static PyThread_type_lock __pyx_memoryview_thread_locks[8]; -static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice, __Pyx_memviewslice, int, int, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args); /*proto*/ -static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, int __pyx_skip_dispatch); /*proto*/ -static struct __pyx_array_obj *__pyx_array_new(PyObject *, Py_ssize_t, char *, char *, char *); /*proto*/ -static void *__pyx_align_pointer(void *, size_t); /*proto*/ -static PyObject *__pyx_memoryview_new(PyObject *, int, int, __Pyx_TypeInfo *); /*proto*/ -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *); /*proto*/ -static PyObject *_unellipsify(PyObject *, int); /*proto*/ -static PyObject *assert_direct_dimensions(Py_ssize_t *, int); /*proto*/ -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *, PyObject *); /*proto*/ -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int, int); /*proto*/ -static char *__pyx_pybuffer_index(Py_buffer *, char *, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memslice_transpose(__Pyx_memviewslice *); /*proto*/ -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice, int, PyObject *(*)(char *), int (*)(char *, PyObject *), int); /*proto*/ -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static Py_ssize_t 
abs_py_ssize_t(Py_ssize_t); /*proto*/ -static char __pyx_get_best_slice_order(__Pyx_memviewslice *, int); /*proto*/ -static void _copy_strided_to_strided(char *, Py_ssize_t *, char *, Py_ssize_t *, Py_ssize_t *, Py_ssize_t *, int, size_t); /*proto*/ -static void copy_strided_to_strided(__Pyx_memviewslice *, __Pyx_memviewslice *, int, size_t); /*proto*/ -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *, int); /*proto*/ -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *, Py_ssize_t *, Py_ssize_t, int, char); /*proto*/ -static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *, __Pyx_memviewslice *, char, int); /*proto*/ -static int __pyx_memoryview_err_extents(int, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memoryview_err_dim(PyObject *, char *, int); /*proto*/ -static int __pyx_memoryview_err(PyObject *, char *); /*proto*/ -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice, __Pyx_memviewslice, int, int, int); /*proto*/ -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *, int, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *, int, size_t, void *, int); /*proto*/ -static void __pyx_memoryview__slice_assign_scalar(char *, Py_ssize_t *, Py_ssize_t *, int, size_t, void *); /*proto*/ -static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *, PyObject *); /*proto*/ -static __Pyx_TypeInfo __Pyx_TypeInfo_int = { "int", NULL, sizeof(int), { 0 }, 0, IS_UNSIGNED(int) ? 
'U' : 'I', IS_UNSIGNED(int), 0 }; -static __Pyx_TypeInfo __Pyx_TypeInfo_float = { "float", NULL, sizeof(float), { 0 }, 0, 'R', 0, 0 }; -#define __Pyx_MODULE_NAME "monotonic_align.core" -extern int __pyx_module_is_main_monotonic_align__core; -int __pyx_module_is_main_monotonic_align__core = 0; - -/* Implementation of 'monotonic_align.core' */ -static PyObject *__pyx_builtin_range; -static PyObject *__pyx_builtin_ValueError; -static PyObject *__pyx_builtin_MemoryError; -static PyObject *__pyx_builtin_enumerate; -static PyObject *__pyx_builtin_TypeError; -static PyObject *__pyx_builtin_Ellipsis; -static PyObject *__pyx_builtin_id; -static PyObject *__pyx_builtin_IndexError; -static const char __pyx_k_O[] = "O"; -static const char __pyx_k_c[] = "c"; -static const char __pyx_k_id[] = "id"; -static const char __pyx_k_new[] = "__new__"; -static const char __pyx_k_obj[] = "obj"; -static const char __pyx_k_base[] = "base"; -static const char __pyx_k_dict[] = "__dict__"; -static const char __pyx_k_main[] = "__main__"; -static const char __pyx_k_mode[] = "mode"; -static const char __pyx_k_name[] = "name"; -static const char __pyx_k_ndim[] = "ndim"; -static const char __pyx_k_pack[] = "pack"; -static const char __pyx_k_size[] = "size"; -static const char __pyx_k_step[] = "step"; -static const char __pyx_k_stop[] = "stop"; -static const char __pyx_k_t_xs[] = "t_xs"; -static const char __pyx_k_t_ys[] = "t_ys"; -static const char __pyx_k_test[] = "__test__"; -static const char __pyx_k_ASCII[] = "ASCII"; -static const char __pyx_k_class[] = "__class__"; -static const char __pyx_k_error[] = "error"; -static const char __pyx_k_flags[] = "flags"; -static const char __pyx_k_paths[] = "paths"; -static const char __pyx_k_range[] = "range"; -static const char __pyx_k_shape[] = "shape"; -static const char __pyx_k_start[] = "start"; -static const char __pyx_k_encode[] = "encode"; -static const char __pyx_k_format[] = "format"; -static const char __pyx_k_import[] = "__import__"; -static 
const char __pyx_k_name_2[] = "__name__"; -static const char __pyx_k_pickle[] = "pickle"; -static const char __pyx_k_reduce[] = "__reduce__"; -static const char __pyx_k_struct[] = "struct"; -static const char __pyx_k_unpack[] = "unpack"; -static const char __pyx_k_update[] = "update"; -static const char __pyx_k_values[] = "values"; -static const char __pyx_k_fortran[] = "fortran"; -static const char __pyx_k_memview[] = "memview"; -static const char __pyx_k_Ellipsis[] = "Ellipsis"; -static const char __pyx_k_getstate[] = "__getstate__"; -static const char __pyx_k_itemsize[] = "itemsize"; -static const char __pyx_k_pyx_type[] = "__pyx_type"; -static const char __pyx_k_setstate[] = "__setstate__"; -static const char __pyx_k_TypeError[] = "TypeError"; -static const char __pyx_k_enumerate[] = "enumerate"; -static const char __pyx_k_pyx_state[] = "__pyx_state"; -static const char __pyx_k_reduce_ex[] = "__reduce_ex__"; -static const char __pyx_k_IndexError[] = "IndexError"; -static const char __pyx_k_ValueError[] = "ValueError"; -static const char __pyx_k_pyx_result[] = "__pyx_result"; -static const char __pyx_k_pyx_vtable[] = "__pyx_vtable__"; -static const char __pyx_k_MemoryError[] = "MemoryError"; -static const char __pyx_k_PickleError[] = "PickleError"; -static const char __pyx_k_pyx_checksum[] = "__pyx_checksum"; -static const char __pyx_k_stringsource[] = "stringsource"; -static const char __pyx_k_pyx_getbuffer[] = "__pyx_getbuffer"; -static const char __pyx_k_reduce_cython[] = "__reduce_cython__"; -static const char __pyx_k_View_MemoryView[] = "View.MemoryView"; -static const char __pyx_k_allocate_buffer[] = "allocate_buffer"; -static const char __pyx_k_dtype_is_object[] = "dtype_is_object"; -static const char __pyx_k_pyx_PickleError[] = "__pyx_PickleError"; -static const char __pyx_k_setstate_cython[] = "__setstate_cython__"; -static const char __pyx_k_pyx_unpickle_Enum[] = "__pyx_unpickle_Enum"; -static const char __pyx_k_cline_in_traceback[] = 
"cline_in_traceback"; -static const char __pyx_k_strided_and_direct[] = "<strided and direct>"; -static const char __pyx_k_strided_and_indirect[] = "<strided and indirect>"; -static const char __pyx_k_contiguous_and_direct[] = "<contiguous and direct>"; -static const char __pyx_k_MemoryView_of_r_object[] = "<MemoryView of %r object>"; -static const char __pyx_k_MemoryView_of_r_at_0x_x[] = "<MemoryView of %r at 0x%x>"; -static const char __pyx_k_contiguous_and_indirect[] = "<contiguous and indirect>"; -static const char __pyx_k_Cannot_index_with_type_s[] = "Cannot index with type '%s'"; -static const char __pyx_k_Invalid_shape_in_axis_d_d[] = "Invalid shape in axis %d: %d."; -static const char __pyx_k_itemsize_0_for_cython_array[] = "itemsize <= 0 for cython.array"; -static const char __pyx_k_unable_to_allocate_array_data[] = "unable to allocate array data."; -static const char __pyx_k_strided_and_direct_or_indirect[] = "<strided and direct or indirect>"; -static const char __pyx_k_Buffer_view_does_not_expose_stri[] = "Buffer view does not expose strides"; -static const char __pyx_k_Can_only_create_a_buffer_that_is[] = "Can only create a buffer that is contiguous in memory."; -static const char __pyx_k_Cannot_assign_to_read_only_memor[] = "Cannot assign to read-only memoryview"; -static const char __pyx_k_Cannot_create_writable_memory_vi[] = "Cannot create writable memory view from read-only memoryview"; -static const char __pyx_k_Empty_shape_tuple_for_cython_arr[] = "Empty shape tuple for cython.array"; -static const char __pyx_k_Incompatible_checksums_s_vs_0xb0[] = "Incompatible checksums (%s vs 0xb068931 = (name))"; -static const char __pyx_k_Indirect_dimensions_not_supporte[] = "Indirect dimensions not supported"; -static const char __pyx_k_Invalid_mode_expected_c_or_fortr[] = "Invalid mode, expected 'c' or 'fortran', got %s"; -static const char __pyx_k_Out_of_bounds_on_buffer_access_a[] = "Out of bounds on buffer access (axis %d)"; -static const char __pyx_k_Unable_to_convert_item_to_object[] = "Unable to convert item to object"; -static const char __pyx_k_got_differing_extents_in_dimensi[] = "got differing extents in dimension %d 
(got %d and %d)"; -static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__"; -static const char __pyx_k_unable_to_allocate_shape_and_str[] = "unable to allocate shape and strides."; -static PyObject *__pyx_n_s_ASCII; -static PyObject *__pyx_kp_s_Buffer_view_does_not_expose_stri; -static PyObject *__pyx_kp_s_Can_only_create_a_buffer_that_is; -static PyObject *__pyx_kp_s_Cannot_assign_to_read_only_memor; -static PyObject *__pyx_kp_s_Cannot_create_writable_memory_vi; -static PyObject *__pyx_kp_s_Cannot_index_with_type_s; -static PyObject *__pyx_n_s_Ellipsis; -static PyObject *__pyx_kp_s_Empty_shape_tuple_for_cython_arr; -static PyObject *__pyx_kp_s_Incompatible_checksums_s_vs_0xb0; -static PyObject *__pyx_n_s_IndexError; -static PyObject *__pyx_kp_s_Indirect_dimensions_not_supporte; -static PyObject *__pyx_kp_s_Invalid_mode_expected_c_or_fortr; -static PyObject *__pyx_kp_s_Invalid_shape_in_axis_d_d; -static PyObject *__pyx_n_s_MemoryError; -static PyObject *__pyx_kp_s_MemoryView_of_r_at_0x_x; -static PyObject *__pyx_kp_s_MemoryView_of_r_object; -static PyObject *__pyx_n_b_O; -static PyObject *__pyx_kp_s_Out_of_bounds_on_buffer_access_a; -static PyObject *__pyx_n_s_PickleError; -static PyObject *__pyx_n_s_TypeError; -static PyObject *__pyx_kp_s_Unable_to_convert_item_to_object; -static PyObject *__pyx_n_s_ValueError; -static PyObject *__pyx_n_s_View_MemoryView; -static PyObject *__pyx_n_s_allocate_buffer; -static PyObject *__pyx_n_s_base; -static PyObject *__pyx_n_s_c; -static PyObject *__pyx_n_u_c; -static PyObject *__pyx_n_s_class; -static PyObject *__pyx_n_s_cline_in_traceback; -static PyObject *__pyx_kp_s_contiguous_and_direct; -static PyObject *__pyx_kp_s_contiguous_and_indirect; -static PyObject *__pyx_n_s_dict; -static PyObject *__pyx_n_s_dtype_is_object; -static PyObject *__pyx_n_s_encode; -static PyObject *__pyx_n_s_enumerate; -static PyObject *__pyx_n_s_error; -static PyObject *__pyx_n_s_flags; 
-static PyObject *__pyx_n_s_format; -static PyObject *__pyx_n_s_fortran; -static PyObject *__pyx_n_u_fortran; -static PyObject *__pyx_n_s_getstate; -static PyObject *__pyx_kp_s_got_differing_extents_in_dimensi; -static PyObject *__pyx_n_s_id; -static PyObject *__pyx_n_s_import; -static PyObject *__pyx_n_s_itemsize; -static PyObject *__pyx_kp_s_itemsize_0_for_cython_array; -static PyObject *__pyx_n_s_main; -static PyObject *__pyx_n_s_memview; -static PyObject *__pyx_n_s_mode; -static PyObject *__pyx_n_s_name; -static PyObject *__pyx_n_s_name_2; -static PyObject *__pyx_n_s_ndim; -static PyObject *__pyx_n_s_new; -static PyObject *__pyx_kp_s_no_default___reduce___due_to_non; -static PyObject *__pyx_n_s_obj; -static PyObject *__pyx_n_s_pack; -static PyObject *__pyx_n_s_paths; -static PyObject *__pyx_n_s_pickle; -static PyObject *__pyx_n_s_pyx_PickleError; -static PyObject *__pyx_n_s_pyx_checksum; -static PyObject *__pyx_n_s_pyx_getbuffer; -static PyObject *__pyx_n_s_pyx_result; -static PyObject *__pyx_n_s_pyx_state; -static PyObject *__pyx_n_s_pyx_type; -static PyObject *__pyx_n_s_pyx_unpickle_Enum; -static PyObject *__pyx_n_s_pyx_vtable; -static PyObject *__pyx_n_s_range; -static PyObject *__pyx_n_s_reduce; -static PyObject *__pyx_n_s_reduce_cython; -static PyObject *__pyx_n_s_reduce_ex; -static PyObject *__pyx_n_s_setstate; -static PyObject *__pyx_n_s_setstate_cython; -static PyObject *__pyx_n_s_shape; -static PyObject *__pyx_n_s_size; -static PyObject *__pyx_n_s_start; -static PyObject *__pyx_n_s_step; -static PyObject *__pyx_n_s_stop; -static PyObject *__pyx_kp_s_strided_and_direct; -static PyObject *__pyx_kp_s_strided_and_direct_or_indirect; -static PyObject *__pyx_kp_s_strided_and_indirect; -static PyObject *__pyx_kp_s_stringsource; -static PyObject *__pyx_n_s_struct; -static PyObject *__pyx_n_s_t_xs; -static PyObject *__pyx_n_s_t_ys; -static PyObject *__pyx_n_s_test; -static PyObject *__pyx_kp_s_unable_to_allocate_array_data; -static PyObject 
*__pyx_kp_s_unable_to_allocate_shape_and_str; -static PyObject *__pyx_n_s_unpack; -static PyObject *__pyx_n_s_update; -static PyObject *__pyx_n_s_values; -static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /* proto */ -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ 
-static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name); /* proto */ -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object); /* proto */ -static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* 
proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject 
*__pyx_v___pyx_state); /* proto */ -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_int_0; -static PyObject *__pyx_int_1; -static PyObject *__pyx_int_184977713; -static PyObject *__pyx_int_neg_1; -static float __pyx_k_; -static PyObject *__pyx_tuple__2; -static PyObject *__pyx_tuple__3; -static PyObject *__pyx_tuple__4; -static PyObject *__pyx_tuple__5; -static PyObject *__pyx_tuple__6; -static PyObject *__pyx_tuple__7; -static PyObject *__pyx_tuple__8; -static PyObject *__pyx_tuple__9; -static PyObject *__pyx_slice__16; -static PyObject *__pyx_tuple__10; -static PyObject *__pyx_tuple__11; -static PyObject *__pyx_tuple__12; -static PyObject *__pyx_tuple__13; -static PyObject *__pyx_tuple__14; -static PyObject *__pyx_tuple__15; -static PyObject *__pyx_tuple__17; -static PyObject 
*__pyx_tuple__18; -static PyObject *__pyx_tuple__19; -static PyObject *__pyx_tuple__20; -static PyObject *__pyx_tuple__21; -static PyObject *__pyx_tuple__22; -static PyObject *__pyx_tuple__23; -static PyObject *__pyx_tuple__24; -static PyObject *__pyx_tuple__25; -static PyObject *__pyx_codeobj__26; -/* Late includes */ - -/* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - -static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice __pyx_v_path, __Pyx_memviewslice __pyx_v_value, int __pyx_v_t_y, int __pyx_v_t_x, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args) { - float __pyx_v_max_neg_val = __pyx_k_; - int __pyx_v_x; - int __pyx_v_y; - float __pyx_v_v_prev; - float __pyx_v_v_cur; - int __pyx_v_index; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - long __pyx_t_4; - int __pyx_t_5; - long __pyx_t_6; - long __pyx_t_7; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - Py_ssize_t __pyx_t_10; - float __pyx_t_11; - float __pyx_t_12; - float __pyx_t_13; - int __pyx_t_14; - Py_ssize_t __pyx_t_15; - Py_ssize_t __pyx_t_16; - if (__pyx_optional_args) { - if (__pyx_optional_args->__pyx_n > 0) { - __pyx_v_max_neg_val = __pyx_optional_args->max_neg_val; - } - } - - /* "monotonic_align/core.pyx":13 - * cdef float v_cur - * cdef float tmp - * cdef int index = t_x - 1 # <<<<<<<<<<<<<< - * - * for y in range(t_y): - */ - __pyx_v_index = (__pyx_v_t_x - 1); - - /* "monotonic_align/core.pyx":15 - * cdef int index = t_x - 1 - * - * for y in range(t_y): # <<<<<<<<<<<<<< - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - */ - __pyx_t_1 = __pyx_v_t_y; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_y = __pyx_t_3; - - /* "monotonic_align/core.pyx":16 - 
* - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): # <<<<<<<<<<<<<< - * if x == y: - * v_cur = max_neg_val - */ - __pyx_t_4 = (__pyx_v_y + 1); - __pyx_t_5 = __pyx_v_t_x; - if (((__pyx_t_4 < __pyx_t_5) != 0)) { - __pyx_t_6 = __pyx_t_4; - } else { - __pyx_t_6 = __pyx_t_5; - } - __pyx_t_4 = __pyx_t_6; - __pyx_t_5 = ((__pyx_v_t_x + __pyx_v_y) - __pyx_v_t_y); - __pyx_t_6 = 0; - if (((__pyx_t_5 > __pyx_t_6) != 0)) { - __pyx_t_7 = __pyx_t_5; - } else { - __pyx_t_7 = __pyx_t_6; - } - __pyx_t_6 = __pyx_t_4; - for (__pyx_t_5 = __pyx_t_7; __pyx_t_5 < __pyx_t_6; __pyx_t_5+=1) { - __pyx_v_x = __pyx_t_5; - - /* "monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # <<<<<<<<<<<<<< - * v_cur = max_neg_val - * else: - */ - __pyx_t_8 = ((__pyx_v_x == __pyx_v_y) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":18 - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - * v_cur = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_cur = value[y-1, x] - */ - __pyx_v_v_cur = __pyx_v_max_neg_val; - - /* "monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # <<<<<<<<<<<<<< - * v_cur = max_neg_val - * else: - */ - goto __pyx_L7; - } - - /* "monotonic_align/core.pyx":20 - * v_cur = max_neg_val - * else: - * v_cur = value[y-1, x] # <<<<<<<<<<<<<< - * if x == 0: - * if y == 0: - */ - /*else*/ { - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_x; - __pyx_v_v_cur = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))); - } - __pyx_L7:; - - /* "monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. 
- */ - __pyx_t_8 = ((__pyx_v_x == 0) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - __pyx_t_8 = ((__pyx_v_y == 0) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":23 - * if x == 0: - * if y == 0: - * v_prev = 0. # <<<<<<<<<<<<<< - * else: - * v_prev = max_neg_val - */ - __pyx_v_v_prev = 0.; - - /* "monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - goto __pyx_L9; - } - - /* "monotonic_align/core.pyx":25 - * v_prev = 0. - * else: - * v_prev = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_prev = value[y-1, x-1] - */ - /*else*/ { - __pyx_v_v_prev = __pyx_v_max_neg_val; - } - __pyx_L9:; - - /* "monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. - */ - goto __pyx_L8; - } - - /* "monotonic_align/core.pyx":27 - * v_prev = max_neg_val - * else: - * v_prev = value[y-1, x-1] # <<<<<<<<<<<<<< - * value[y, x] += max(v_prev, v_cur) - * - */ - /*else*/ { - __pyx_t_10 = (__pyx_v_y - 1); - __pyx_t_9 = (__pyx_v_x - 1); - __pyx_v_v_prev = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_10 * __pyx_v_value.strides[0]) )) + __pyx_t_9)) ))); - } - __pyx_L8:; - - /* "monotonic_align/core.pyx":28 - * else: - * v_prev = value[y-1, x-1] - * value[y, x] += max(v_prev, v_cur) # <<<<<<<<<<<<<< - * - * for y in range(t_y - 1, -1, -1): - */ - __pyx_t_11 = __pyx_v_v_cur; - __pyx_t_12 = __pyx_v_v_prev; - if (((__pyx_t_11 > __pyx_t_12) != 0)) { - __pyx_t_13 = __pyx_t_11; - } else { - __pyx_t_13 = __pyx_t_12; - } - __pyx_t_9 = __pyx_v_y; - __pyx_t_10 = __pyx_v_x; - *((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) )) += __pyx_t_13; - } - } - - /* "monotonic_align/core.pyx":30 - * 
value[y, x] += max(v_prev, v_cur) - * - * for y in range(t_y - 1, -1, -1): # <<<<<<<<<<<<<< - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - */ - for (__pyx_t_1 = (__pyx_v_t_y - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_y = __pyx_t_1; - - /* "monotonic_align/core.pyx":31 - * - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 # <<<<<<<<<<<<<< - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 - */ - __pyx_t_10 = __pyx_v_y; - __pyx_t_9 = __pyx_v_index; - *((int *) ( /* dim=1 */ ((char *) (((int *) ( /* dim=0 */ (__pyx_v_path.data + __pyx_t_10 * __pyx_v_path.strides[0]) )) + __pyx_t_9)) )) = 1; - - /* "monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - __pyx_t_14 = ((__pyx_v_index != 0) != 0); - if (__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_14 = ((__pyx_v_index == __pyx_v_y) != 0); - if (!__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_index; - __pyx_t_15 = (__pyx_v_y - 1); - __pyx_t_16 = (__pyx_v_index - 1); - __pyx_t_14 = (((*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))) < (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_15 * __pyx_v_value.strides[0]) )) + __pyx_t_16)) )))) != 0); - __pyx_t_8 = __pyx_t_14; - __pyx_L13_bool_binop_done:; - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":33 - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_index = (__pyx_v_index - 1); - - /* 
"monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - } - } - - /* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - - /* function exit code */ -} - -/* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs, CYTHON_UNUSED int __pyx_skip_dispatch) { - CYTHON_UNUSED int __pyx_v_b; - int __pyx_v_i; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - __Pyx_memviewslice __pyx_t_4 = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_t_5 = { 0, 0, { 0 }, { 0 }, { 0 } }; - Py_ssize_t __pyx_t_6; - Py_ssize_t __pyx_t_7; - - /* "monotonic_align/core.pyx":39 - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: - * cdef int b = paths.shape[0] # <<<<<<<<<<<<<< - * cdef int i - * for i in prange(b, nogil=True): - */ - __pyx_v_b = (__pyx_v_paths.shape[0]); - - /* "monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - { - #ifdef 
WITH_THREAD - PyThreadState *_save; - Py_UNBLOCK_THREADS - __Pyx_FastGIL_Remember(); - #endif - /*try:*/ { - __pyx_t_1 = __pyx_v_b; - if ((1 == 0)) abort(); - { - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) (x) - #define unlikely(x) (x) - #endif - __pyx_t_3 = (__pyx_t_1 - 0 + 1 - 1/abs(1)) / 1; - if (__pyx_t_3 > 0) - { - #ifdef _OPENMP - #pragma omp parallel private(__pyx_t_6, __pyx_t_7) firstprivate(__pyx_t_4, __pyx_t_5) - #endif /* _OPENMP */ - { - #ifdef _OPENMP - #pragma omp for firstprivate(__pyx_v_i) lastprivate(__pyx_v_i) - #endif /* _OPENMP */ - for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_3; __pyx_t_2++){ - { - __pyx_v_i = (int)(0 + 1 * __pyx_t_2); - - /* "monotonic_align/core.pyx":42 - * cdef int i - * for i in prange(b, nogil=True): - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) # <<<<<<<<<<<<<< - */ - __pyx_t_4.data = __pyx_v_paths.data; - __pyx_t_4.memview = __pyx_v_paths.memview; - __PYX_INC_MEMVIEW(&__pyx_t_4, 0); - { - Py_ssize_t __pyx_tmp_idx = __pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_paths.strides[0]; - __pyx_t_4.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_4.shape[0] = __pyx_v_paths.shape[1]; -__pyx_t_4.strides[0] = __pyx_v_paths.strides[1]; - __pyx_t_4.suboffsets[0] = -1; - -__pyx_t_4.shape[1] = __pyx_v_paths.shape[2]; -__pyx_t_4.strides[1] = __pyx_v_paths.strides[2]; - __pyx_t_4.suboffsets[1] = -1; - -__pyx_t_5.data = __pyx_v_values.data; - __pyx_t_5.memview = __pyx_v_values.memview; - __PYX_INC_MEMVIEW(&__pyx_t_5, 0); - { - Py_ssize_t __pyx_tmp_idx = __pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_values.strides[0]; - __pyx_t_5.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_5.shape[0] = __pyx_v_values.shape[1]; -__pyx_t_5.strides[0] = __pyx_v_values.strides[1]; - __pyx_t_5.suboffsets[0] = -1; - -__pyx_t_5.shape[1] = __pyx_v_values.shape[2]; -__pyx_t_5.strides[1] 
= __pyx_v_values.strides[2]; - __pyx_t_5.suboffsets[1] = -1; - -__pyx_t_6 = __pyx_v_i; - __pyx_t_7 = __pyx_v_i; - __pyx_f_15monotonic_align_4core_maximum_path_each(__pyx_t_4, __pyx_t_5, (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_ys.data) + __pyx_t_6)) ))), (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_xs.data) + __pyx_t_7)) ))), NULL); - __PYX_XDEC_MEMVIEW(&__pyx_t_4, 0); - __pyx_t_4.memview = NULL; - __pyx_t_4.data = NULL; - __PYX_XDEC_MEMVIEW(&__pyx_t_5, 0); - __pyx_t_5.memview = NULL; - __pyx_t_5.data = NULL; - } - } - } - } - } - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) - #endif - } - - /* "monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - /*finally:*/ { - /*normal exit:*/{ - #ifdef WITH_THREAD - __Pyx_FastGIL_Forget(); - Py_BLOCK_THREADS - #endif - goto __pyx_L5; - } - __pyx_L5:; - } - } - - /* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - - /* function exit code */ -} - -/* Python wrapper */ -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - __Pyx_memviewslice __pyx_v_paths = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_values = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_ys = { 0, 0, { 0 }, { 0 }, 
{ 0 } }; - __Pyx_memviewslice __pyx_v_t_xs = { 0, 0, { 0 }, { 0 }, { 0 } }; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("maximum_path_c (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_paths,&__pyx_n_s_values,&__pyx_n_s_t_ys,&__pyx_n_s_t_xs,0}; - PyObject* values[4] = {0,0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_paths)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_values)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 1); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_ys)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 2); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_xs)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 3); __PYX_ERR(0, 38, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "maximum_path_c") < 0)) 
__PYX_ERR(0, 38, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 4) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - } - __pyx_v_paths = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(values[0], PyBUF_WRITABLE); if (unlikely(!__pyx_v_paths.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_values = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(values[1], PyBUF_WRITABLE); if (unlikely(!__pyx_v_values.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_ys = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[2], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_ys.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_xs = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[3], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_xs.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15monotonic_align_4core_maximum_path_c(__pyx_self, __pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - 
__Pyx_RefNannySetupContext("maximum_path_c", 0); - __Pyx_XDECREF(__pyx_r); - if (unlikely(!__pyx_v_paths.memview)) { __Pyx_RaiseUnboundLocalError("paths"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_values.memview)) { __Pyx_RaiseUnboundLocalError("values"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_ys.memview)) { __Pyx_RaiseUnboundLocalError("t_ys"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_xs.memview)) { __Pyx_RaiseUnboundLocalError("t_xs"); __PYX_ERR(0, 38, __pyx_L1_error) } - __pyx_t_1 = __Pyx_void_to_None(__pyx_f_15monotonic_align_4core_maximum_path_c(__pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs, 0)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __PYX_XDEC_MEMVIEW(&__pyx_v_paths, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_values, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_t_ys, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_t_xs, 1); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":122 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - -/* Python wrapper */ -static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_shape = 0; - Py_ssize_t __pyx_v_itemsize; - PyObject *__pyx_v_format = 0; - PyObject *__pyx_v_mode = 0; - int __pyx_v_allocate_buffer; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - 
__Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_shape,&__pyx_n_s_itemsize,&__pyx_n_s_format,&__pyx_n_s_mode,&__pyx_n_s_allocate_buffer,0}; - PyObject* values[5] = {0,0,0,0,0}; - values[3] = ((PyObject *)__pyx_n_s_c); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_shape)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_itemsize)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 1); __PYX_ERR(1, 122, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_format)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 2); __PYX_ERR(1, 122, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_mode); - if (value) { values[3] = value; kw_args--; } - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_allocate_buffer); - if (value) { values[4] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if 
(unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 122, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_shape = ((PyObject*)values[0]); - __pyx_v_itemsize = __Pyx_PyIndex_AsSsize_t(values[1]); if (unlikely((__pyx_v_itemsize == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 122, __pyx_L3_error) - __pyx_v_format = values[2]; - __pyx_v_mode = values[3]; - if (values[4]) { - __pyx_v_allocate_buffer = __Pyx_PyObject_IsTrue(values[4]); if (unlikely((__pyx_v_allocate_buffer == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 123, __pyx_L3_error) - } else { - - /* "View.MemoryView":123 - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, - * mode="c", bint allocate_buffer=True): # <<<<<<<<<<<<<< - * - * cdef int idx - */ - __pyx_v_allocate_buffer = ((int)1); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 122, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_shape), (&PyTuple_Type), 1, "shape", 1))) __PYX_ERR(1, 122, __pyx_L1_error) - if (unlikely(((PyObject *)__pyx_v_format) == Py_None)) { - PyErr_Format(PyExc_TypeError, "Argument '%.200s' must not be None", "format"); __PYX_ERR(1, 122, __pyx_L1_error) - } - __pyx_r = 
__pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(((struct __pyx_array_obj *)__pyx_v_self), __pyx_v_shape, __pyx_v_itemsize, __pyx_v_format, __pyx_v_mode, __pyx_v_allocate_buffer); - - /* "View.MemoryView":122 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - goto __pyx_L0; - __pyx_L1_error:; - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer) { - int __pyx_v_idx; - Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_dim; - PyObject **__pyx_v_p; - char __pyx_v_order; - int __pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - char *__pyx_t_7; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - Py_ssize_t __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - __Pyx_INCREF(__pyx_v_format); - - /* "View.MemoryView":129 - * cdef PyObject **p - * - * self.ndim = len(shape) # <<<<<<<<<<<<<< - * self.itemsize = itemsize - * - */ - if (unlikely(__pyx_v_shape == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 129, __pyx_L1_error) - } - __pyx_t_1 = PyTuple_GET_SIZE(__pyx_v_shape); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(1, 129, __pyx_L1_error) - __pyx_v_self->ndim = ((int)__pyx_t_1); - - /* "View.MemoryView":130 - * - * self.ndim = len(shape) - * self.itemsize = itemsize # <<<<<<<<<<<<<< - * - * if not self.ndim: - */ - 
__pyx_v_self->itemsize = __pyx_v_itemsize; - - /* "View.MemoryView":132 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError("Empty shape tuple for cython.array") - * - */ - __pyx_t_2 = ((!(__pyx_v_self->ndim != 0)) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":133 - * - * if not self.ndim: - * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<< - * - * if itemsize <= 0: - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 133, __pyx_L1_error) - - /* "View.MemoryView":132 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError("Empty shape tuple for cython.array") - * - */ - } - - /* "View.MemoryView":135 - * raise ValueError("Empty shape tuple for cython.array") - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError("itemsize <= 0 for cython.array") - * - */ - __pyx_t_2 = ((__pyx_v_itemsize <= 0) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":136 - * - * if itemsize <= 0: - * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<< - * - * if not isinstance(format, bytes): - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 136, __pyx_L1_error) - - /* "View.MemoryView":135 - * raise ValueError("Empty shape tuple for cython.array") - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError("itemsize <= 0 for cython.array") - * - */ - } - - /* "View.MemoryView":138 - * raise ValueError("itemsize <= 0 for cython.array") - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = 
format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - __pyx_t_2 = PyBytes_Check(__pyx_v_format); - __pyx_t_4 = ((!(__pyx_t_2 != 0)) != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":139 - * - * if not isinstance(format, bytes): - * format = format.encode('ASCII') # <<<<<<<<<<<<<< - * self._format = format # keep a reference to the byte string - * self.format = self._format - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_format, __pyx_n_s_encode); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 139, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - } - } - __pyx_t_3 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_6, __pyx_n_s_ASCII) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_n_s_ASCII); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 139, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF_SET(__pyx_v_format, __pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":138 - * raise ValueError("itemsize <= 0 for cython.array") - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - } - - /* "View.MemoryView":140 - * if not isinstance(format, bytes): - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string # <<<<<<<<<<<<<< - * self.format = self._format - * - */ - if (!(likely(PyBytes_CheckExact(__pyx_v_format))||((__pyx_v_format) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_v_format)->tp_name), 0))) __PYX_ERR(1, 140, __pyx_L1_error) - 
__pyx_t_3 = __pyx_v_format; - __Pyx_INCREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __Pyx_GOTREF(__pyx_v_self->_format); - __Pyx_DECREF(__pyx_v_self->_format); - __pyx_v_self->_format = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":141 - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - * self.format = self._format # <<<<<<<<<<<<<< - * - * - */ - if (unlikely(__pyx_v_self->_format == Py_None)) { - PyErr_SetString(PyExc_TypeError, "expected bytes, NoneType found"); - __PYX_ERR(1, 141, __pyx_L1_error) - } - __pyx_t_7 = __Pyx_PyBytes_AsWritableString(__pyx_v_self->_format); if (unlikely((!__pyx_t_7) && PyErr_Occurred())) __PYX_ERR(1, 141, __pyx_L1_error) - __pyx_v_self->format = __pyx_t_7; - - /* "View.MemoryView":144 - * - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) # <<<<<<<<<<<<<< - * self._strides = self._shape + self.ndim - * - */ - __pyx_v_self->_shape = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * __pyx_v_self->ndim) * 2))); - - /* "View.MemoryView":145 - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) - * self._strides = self._shape + self.ndim # <<<<<<<<<<<<<< - * - * if not self._shape: - */ - __pyx_v_self->_strides = (__pyx_v_self->_shape + __pyx_v_self->ndim); - - /* "View.MemoryView":147 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate shape and strides.") - * - */ - __pyx_t_4 = ((!(__pyx_v_self->_shape != 0)) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":148 - * - * if not self._shape: - * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 
148, __pyx_L1_error) - - /* "View.MemoryView":147 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate shape and strides.") - * - */ - } - - /* "View.MemoryView":151 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - */ - __pyx_t_8 = 0; - __pyx_t_3 = __pyx_v_shape; __Pyx_INCREF(__pyx_t_3); __pyx_t_1 = 0; - for (;;) { - if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(1, 151, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 151, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_5); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 151, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_9; - __pyx_v_idx = __pyx_t_8; - __pyx_t_8 = (__pyx_t_8 + 1); - - /* "View.MemoryView":152 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim - */ - __pyx_t_4 = ((__pyx_v_dim <= 0) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":153 - * for idx, dim in enumerate(shape): - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." 
% (idx, dim)) # <<<<<<<<<<<<<< - * self._shape[idx] = dim - * - */ - __pyx_t_5 = __Pyx_PyInt_From_int(__pyx_v_idx); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_10 = PyTuple_New(2); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_t_6); - __pyx_t_5 = 0; - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyString_Format(__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_t_10); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_6); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 153, __pyx_L1_error) - - /* "View.MemoryView":152 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim - */ - } - - /* "View.MemoryView":154 - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim # <<<<<<<<<<<<<< - * - * cdef char order - */ - (__pyx_v_self->_shape[__pyx_v_idx]) = __pyx_v_dim; - - /* "View.MemoryView":151 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." 
% (idx, dim)) - */ - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":157 - * - * cdef char order - * if mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_fortran, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 157, __pyx_L1_error) - if (__pyx_t_4) { - - /* "View.MemoryView":158 - * cdef char order - * if mode == 'fortran': - * order = b'F' # <<<<<<<<<<<<<< - * self.mode = u'fortran' - * elif mode == 'c': - */ - __pyx_v_order = 'F'; - - /* "View.MemoryView":159 - * if mode == 'fortran': - * order = b'F' - * self.mode = u'fortran' # <<<<<<<<<<<<<< - * elif mode == 'c': - * order = b'C' - */ - __Pyx_INCREF(__pyx_n_u_fortran); - __Pyx_GIVEREF(__pyx_n_u_fortran); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_fortran; - - /* "View.MemoryView":157 - * - * cdef char order - * if mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - goto __pyx_L10; - } - - /* "View.MemoryView":160 - * order = b'F' - * self.mode = u'fortran' - * elif mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_c, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 160, __pyx_L1_error) - if (likely(__pyx_t_4)) { - - /* "View.MemoryView":161 - * self.mode = u'fortran' - * elif mode == 'c': - * order = b'C' # <<<<<<<<<<<<<< - * self.mode = u'c' - * else: - */ - __pyx_v_order = 'C'; - - /* "View.MemoryView":162 - * elif mode == 'c': - * order = b'C' - * self.mode = u'c' # <<<<<<<<<<<<<< - * else: - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) - */ - __Pyx_INCREF(__pyx_n_u_c); - __Pyx_GIVEREF(__pyx_n_u_c); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_c; - - /* "View.MemoryView":160 - * order = b'F' - * self.mode = u'fortran' - 
* elif mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - goto __pyx_L10; - } - - /* "View.MemoryView":164 - * self.mode = u'c' - * else: - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) # <<<<<<<<<<<<<< - * - * self.len = fill_contig_strides_array(self._shape, self._strides, - */ - /*else*/ { - __pyx_t_3 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_v_mode); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 164, __pyx_L1_error) - } - __pyx_L10:; - - /* "View.MemoryView":166 - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) - * - * self.len = fill_contig_strides_array(self._shape, self._strides, # <<<<<<<<<<<<<< - * itemsize, self.ndim, order) - * - */ - __pyx_v_self->len = __pyx_fill_contig_strides_array(__pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_itemsize, __pyx_v_self->ndim, __pyx_v_order); - - /* "View.MemoryView":169 - * itemsize, self.ndim, order) - * - * self.free_data = allocate_buffer # <<<<<<<<<<<<<< - * self.dtype_is_object = format == b'O' - * if allocate_buffer: - */ - __pyx_v_self->free_data = __pyx_v_allocate_buffer; - - /* "View.MemoryView":170 - * - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' # <<<<<<<<<<<<<< - * if allocate_buffer: - * - */ - __pyx_t_10 = PyObject_RichCompare(__pyx_v_format, __pyx_n_b_O, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 170, __pyx_L1_error) - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 170, __pyx_L1_error) - 
__Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_v_self->dtype_is_object = __pyx_t_4; - - /* "View.MemoryView":171 - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' - * if allocate_buffer: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_4 = (__pyx_v_allocate_buffer != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":174 - * - * - * self.data = malloc(self.len) # <<<<<<<<<<<<<< - * if not self.data: - * raise MemoryError("unable to allocate array data.") - */ - __pyx_v_self->data = ((char *)malloc(__pyx_v_self->len)); - - /* "View.MemoryView":175 - * - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate array data.") - * - */ - __pyx_t_4 = ((!(__pyx_v_self->data != 0)) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":176 - * self.data = malloc(self.len) - * if not self.data: - * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<< - * - * if self.dtype_is_object: - */ - __pyx_t_10 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 176, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 176, __pyx_L1_error) - - /* "View.MemoryView":175 - * - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate array data.") - * - */ - } - - /* "View.MemoryView":178 - * raise MemoryError("unable to allocate array data.") - * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len / itemsize): - */ - __pyx_t_4 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":179 - * - * if self.dtype_is_object: - * p = self.data # <<<<<<<<<<<<<< - * for i in range(self.len / itemsize): - * p[i] = Py_None - */ - __pyx_v_p = ((PyObject **)__pyx_v_self->data); - - /* "View.MemoryView":180 - * if self.dtype_is_object: - * p = 
self.data - * for i in range(self.len / itemsize): # <<<<<<<<<<<<<< - * p[i] = Py_None - * Py_INCREF(Py_None) - */ - if (unlikely(__pyx_v_itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 180, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_self->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 180, __pyx_L1_error) - } - __pyx_t_1 = __Pyx_div_Py_ssize_t(__pyx_v_self->len, __pyx_v_itemsize); - __pyx_t_9 = __pyx_t_1; - for (__pyx_t_11 = 0; __pyx_t_11 < __pyx_t_9; __pyx_t_11+=1) { - __pyx_v_i = __pyx_t_11; - - /* "View.MemoryView":181 - * p = self.data - * for i in range(self.len / itemsize): - * p[i] = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - (__pyx_v_p[__pyx_v_i]) = Py_None; - - /* "View.MemoryView":182 - * for i in range(self.len / itemsize): - * p[i] = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - Py_INCREF(Py_None); - } - - /* "View.MemoryView":178 - * raise MemoryError("unable to allocate array data.") - * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len / itemsize): - */ - } - - /* "View.MemoryView":171 - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' - * if allocate_buffer: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":122 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, 
__pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_format); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":185 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * cdef int bufmode = -1 - * if self.mode == u"c": - */ - -/* Python wrapper */ -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(((struct __pyx_array_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_v_bufmode; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - char *__pyx_t_4; - Py_ssize_t __pyx_t_5; - int __pyx_t_6; - Py_ssize_t *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (__pyx_v_info == NULL) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":186 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 # <<<<<<<<<<<<<< - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_v_bufmode = -1; - - /* "View.MemoryView":187 
- * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - __pyx_t_1 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_c, Py_EQ)); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 187, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":188 - * cdef int bufmode = -1 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_v_bufmode = (PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":187 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - goto __pyx_L3; - } - - /* "View.MemoryView":189 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - __pyx_t_2 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_fortran, Py_EQ)); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 189, __pyx_L1_error) - __pyx_t_1 = (__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":190 - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") - */ - __pyx_v_bufmode = (PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":189 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = 
PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - } - __pyx_L3:; - - /* "View.MemoryView":191 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - */ - __pyx_t_1 = ((!((__pyx_v_flags & __pyx_v_bufmode) != 0)) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":192 - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<< - * info.buf = self.data - * info.len = self.len - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 192, __pyx_L1_error) - - /* "View.MemoryView":191 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - */ - } - - /* "View.MemoryView":193 - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data # <<<<<<<<<<<<<< - * info.len = self.len - * info.ndim = self.ndim - */ - __pyx_t_4 = __pyx_v_self->data; - __pyx_v_info->buf = __pyx_t_4; - - /* "View.MemoryView":194 - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - * info.len = self.len # <<<<<<<<<<<<<< - * info.ndim = self.ndim - * info.shape = self._shape - */ - __pyx_t_5 = __pyx_v_self->len; - __pyx_v_info->len = __pyx_t_5; - - /* "View.MemoryView":195 - * info.buf = self.data - * info.len = self.len - * info.ndim = self.ndim # 
<<<<<<<<<<<<<< - * info.shape = self._shape - * info.strides = self._strides - */ - __pyx_t_6 = __pyx_v_self->ndim; - __pyx_v_info->ndim = __pyx_t_6; - - /* "View.MemoryView":196 - * info.len = self.len - * info.ndim = self.ndim - * info.shape = self._shape # <<<<<<<<<<<<<< - * info.strides = self._strides - * info.suboffsets = NULL - */ - __pyx_t_7 = __pyx_v_self->_shape; - __pyx_v_info->shape = __pyx_t_7; - - /* "View.MemoryView":197 - * info.ndim = self.ndim - * info.shape = self._shape - * info.strides = self._strides # <<<<<<<<<<<<<< - * info.suboffsets = NULL - * info.itemsize = self.itemsize - */ - __pyx_t_7 = __pyx_v_self->_strides; - __pyx_v_info->strides = __pyx_t_7; - - /* "View.MemoryView":198 - * info.shape = self._shape - * info.strides = self._strides - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * info.itemsize = self.itemsize - * info.readonly = 0 - */ - __pyx_v_info->suboffsets = NULL; - - /* "View.MemoryView":199 - * info.strides = self._strides - * info.suboffsets = NULL - * info.itemsize = self.itemsize # <<<<<<<<<<<<<< - * info.readonly = 0 - * - */ - __pyx_t_5 = __pyx_v_self->itemsize; - __pyx_v_info->itemsize = __pyx_t_5; - - /* "View.MemoryView":200 - * info.suboffsets = NULL - * info.itemsize = self.itemsize - * info.readonly = 0 # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - __pyx_v_info->readonly = 0; - - /* "View.MemoryView":202 - * info.readonly = 0 - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.format - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":203 - * - * if flags & PyBUF_FORMAT: - * info.format = self.format # <<<<<<<<<<<<<< - * else: - * info.format = NULL - */ - __pyx_t_4 = __pyx_v_self->format; - __pyx_v_info->format = __pyx_t_4; - - /* "View.MemoryView":202 - * info.readonly = 0 - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.format - * else: - */ - goto __pyx_L5; - } - - /* "View.MemoryView":205 - * 
info.format = self.format - * else: - * info.format = NULL # <<<<<<<<<<<<<< - * - * info.obj = self - */ - /*else*/ { - __pyx_v_info->format = NULL; - } - __pyx_L5:; - - /* "View.MemoryView":207 - * info.format = NULL - * - * info.obj = self # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":185 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * cdef int bufmode = -1 - * if self.mode == u"c": - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.array.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":211 - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - -/* Python wrapper */ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - 
__Pyx_RefNannyFinishContext(); -} - -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":212 - * - * def __dealloc__(array self): - * if self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data: - */ - __pyx_t_1 = ((__pyx_v_self->callback_free_data != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":213 - * def __dealloc__(array self): - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) # <<<<<<<<<<<<<< - * elif self.free_data: - * if self.dtype_is_object: - */ - __pyx_v_self->callback_free_data(__pyx_v_self->data); - - /* "View.MemoryView":212 - * - * def __dealloc__(array self): - * if self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":214 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, - */ - __pyx_t_1 = (__pyx_v_self->free_data != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":215 - * self.callback_free_data(self.data) - * elif self.free_data: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - */ - __pyx_t_1 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":216 - * elif self.free_data: - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, # <<<<<<<<<<<<<< - * self._strides, self.ndim, False) - * free(self.data) - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_self->data, __pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_self->ndim, 0); - - /* "View.MemoryView":215 - 
* self.callback_free_data(self.data) - * elif self.free_data: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - */ - } - - /* "View.MemoryView":218 - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - * free(self.data) # <<<<<<<<<<<<<< - * PyObject_Free(self._shape) - * - */ - free(__pyx_v_self->data); - - /* "View.MemoryView":214 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, - */ - } - __pyx_L3:; - - /* "View.MemoryView":219 - * self._strides, self.ndim, False) - * free(self.data) - * PyObject_Free(self._shape) # <<<<<<<<<<<<<< - * - * @property - */ - PyObject_Free(__pyx_v_self->_shape); - - /* "View.MemoryView":211 - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":222 - * - * @property - * def memview(self): # <<<<<<<<<<<<<< - * return self.get_memview() - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_5array_7memview___get__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = 
NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":223 - * @property - * def memview(self): - * return self.get_memview() # <<<<<<<<<<<<<< - * - * @cname('get_memview') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = ((struct __pyx_vtabstruct_array *)__pyx_v_self->__pyx_vtab)->get_memview(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 223, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":222 - * - * @property - * def memview(self): # <<<<<<<<<<<<<< - * return self.get_memview() - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.memview.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":226 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self) { - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_memview", 0); - - /* "View.MemoryView":227 - * @cname('get_memview') - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE # <<<<<<<<<<<<<< - * return memoryview(self, flags, self.dtype_is_object) - * - */ - __pyx_v_flags = ((PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT) | PyBUF_WRITABLE); - - /* "View.MemoryView":228 - * cdef 
get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":226 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.array.get_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":230 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - -/* Python wrapper */ -static 
Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":231 - * - * def __len__(self): - * return self._shape[0] # <<<<<<<<<<<<<< - * - * def __getattr__(self, attr): - */ - __pyx_r = (__pyx_v_self->_shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":230 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":233 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr); /*proto*/ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getattr__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_attr)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr) { - PyObject *__pyx_r = NULL; - 
__Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getattr__", 0); - - /* "View.MemoryView":234 - * - * def __getattr__(self, attr): - * return getattr(self.memview, attr) # <<<<<<<<<<<<<< - * - * def __getitem__(self, item): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 234, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetAttr(__pyx_t_1, __pyx_v_attr); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 234, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":233 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getattr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":236 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item); /*proto*/ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item)); - - /* function exit code */ - 
__Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":237 - * - * def __getitem__(self, item): - * return self.memview[item] # <<<<<<<<<<<<<< - * - * def __setitem__(self, item, value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 237, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_item); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 237, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":236 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":239 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - -/* Python wrapper */ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - int __pyx_r; - 
__Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - - /* "View.MemoryView":240 - * - * def __setitem__(self, item, value): - * self.memview[item] = value # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (unlikely(PyObject_SetItem(__pyx_t_1, __pyx_v_item, __pyx_v_value) < 0)) __PYX_ERR(1, 240, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":239 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static 
PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_array___reduce_cython__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise 
TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_array_2__setstate_cython__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - 
__Pyx_AddTraceback("View.MemoryView.array.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":244 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<< - * char *mode, char *buf): - * cdef array result - */ - -static struct __pyx_array_obj *__pyx_array_new(PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, char *__pyx_v_format, char *__pyx_v_mode, char *__pyx_v_buf) { - struct __pyx_array_obj *__pyx_v_result = 0; - struct __pyx_array_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("array_cwrapper", 0); - - /* "View.MemoryView":248 - * cdef array result - * - * if buf == NULL: # <<<<<<<<<<<<<< - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - */ - __pyx_t_1 = ((__pyx_v_buf == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":249 - * - * if buf == NULL: - * result = array(shape, itemsize, format, mode.decode('ASCII')) # <<<<<<<<<<<<<< - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), - */ - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 249, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4); - __pyx_t_2 = 0; - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":248 - * cdef array result - * - * if buf == NULL: # <<<<<<<<<<<<<< - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":251 - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<< - * allocate_buffer=False) - * result.data = buf - */ - /*else*/ { - __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyTuple_New(4); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - 
PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_t_3); - __pyx_t_4 = 0; - __pyx_t_5 = 0; - __pyx_t_3 = 0; - - /* "View.MemoryView":252 - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), - * allocate_buffer=False) # <<<<<<<<<<<<<< - * result.data = buf - * - */ - __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_allocate_buffer, Py_False) < 0) __PYX_ERR(1, 252, __pyx_L1_error) - - /* "View.MemoryView":251 - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<< - * allocate_buffer=False) - * result.data = buf - */ - __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_5); - __pyx_t_5 = 0; - - /* "View.MemoryView":253 - * result = array(shape, itemsize, format, mode.decode('ASCII'), - * allocate_buffer=False) - * result.data = buf # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->data = __pyx_v_buf; - } - __pyx_L3:; - - /* "View.MemoryView":255 - * result.data = buf - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(((PyObject *)__pyx_r)); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":244 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<< - * char *mode, char *buf): - * cdef array result - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - 
__Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.array_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":281 - * cdef class Enum(object): - * cdef object name - * def __init__(self, name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - -/* Python wrapper */ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_name = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_name,0}; - PyObject* values[1] = {0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_name)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(1, 281, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 1) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - } - __pyx_v_name = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__init__", 1, 1, 1, 
PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 281, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.Enum.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), __pyx_v_name); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__", 0); - - /* "View.MemoryView":282 - * cdef object name - * def __init__(self, name): - * self.name = name # <<<<<<<<<<<<<< - * def __repr__(self): - * return self.name - */ - __Pyx_INCREF(__pyx_v_name); - __Pyx_GIVEREF(__pyx_v_name); - __Pyx_GOTREF(__pyx_v_self->name); - __Pyx_DECREF(__pyx_v_self->name); - __pyx_v_self->name = __pyx_v_name; - - /* "View.MemoryView":281 - * cdef class Enum(object): - * cdef object name - * def __init__(self, name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - - /* function exit code */ - __pyx_r = 0; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":283 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - -/* Python wrapper */ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":284 - * self.name = name - * def __repr__(self): - * return self.name # <<<<<<<<<<<<<< - * - * cdef generic = Enum("") - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->name); - __pyx_r = __pyx_v_self->name; - goto __pyx_L0; - - /* "View.MemoryView":283 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_MemviewEnum___reduce_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_v_state = 0; - PyObject *__pyx_v__dict = 0; - int __pyx_v_use_setstate; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree 
fragment)":5 - * cdef object _dict - * cdef bint use_setstate - * state = (self.name,) # <<<<<<<<<<<<<< - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_self->name); - __Pyx_GIVEREF(__pyx_v_self->name); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_self->name); - __pyx_v_state = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "(tree fragment)":6 - * cdef bint use_setstate - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<< - * if _dict is not None: - * state += (_dict,) - */ - __pyx_t_1 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v__dict = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - __pyx_t_2 = (__pyx_v__dict != Py_None); - __pyx_t_3 = (__pyx_t_2 != 0); - if (__pyx_t_3) { - - /* "(tree fragment)":8 - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - * state += (_dict,) # <<<<<<<<<<<<<< - * use_setstate = True - * else: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v__dict); - __Pyx_GIVEREF(__pyx_v__dict); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v__dict); - __pyx_t_4 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_4)); - __pyx_t_4 = 0; - - /* "(tree fragment)":9 - * if _dict is not None: - * state += (_dict,) - * use_setstate = True # <<<<<<<<<<<<<< - * else: - * use_setstate = self.name is not 
None - */ - __pyx_v_use_setstate = 1; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - goto __pyx_L3; - } - - /* "(tree fragment)":11 - * use_setstate = True - * else: - * use_setstate = self.name is not None # <<<<<<<<<<<<<< - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - */ - /*else*/ { - __pyx_t_3 = (__pyx_v_self->name != Py_None); - __pyx_v_use_setstate = __pyx_t_3; - } - __pyx_L3:; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - */ - __pyx_t_3 = (__pyx_v_use_setstate != 0); - if (__pyx_t_3) { - - /* "(tree fragment)":13 - * use_setstate = self.name is not None - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state # <<<<<<<<<<<<<< - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_184977713); - __Pyx_GIVEREF(__pyx_int_184977713); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_1, 2, Py_None); - __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - 
PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_1); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_v_state); - __pyx_t_4 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - */ - } - - /* "(tree fragment)":15 - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_184977713); - __Pyx_GIVEREF(__pyx_int_184977713); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1); - __pyx_t_5 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - } - - /* "(tree fragment)":1 - * def 
__reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.Enum.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_state); - __Pyx_XDECREF(__pyx_v__dict); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_MemviewEnum_2__setstate_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":17 - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) # <<<<<<<<<<<<<< - */ - if 
(!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 17, __pyx_L1_error) - __pyx_t_1 = __pyx_unpickle_Enum__set_state(__pyx_v_self, ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.Enum.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":298 - * - * @cname('__pyx_align_pointer') - * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<< - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory - */ - -static void *__pyx_align_pointer(void *__pyx_v_memory, size_t __pyx_v_alignment) { - Py_intptr_t __pyx_v_aligned_p; - size_t __pyx_v_offset; - void *__pyx_r; - int __pyx_t_1; - - /* "View.MemoryView":300 - * cdef void *align_pointer(void *memory, size_t alignment) nogil: - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory # <<<<<<<<<<<<<< - * cdef size_t offset - * - */ - __pyx_v_aligned_p = ((Py_intptr_t)__pyx_v_memory); - - /* "View.MemoryView":304 - * - * with cython.cdivision(True): - * offset = aligned_p % alignment # <<<<<<<<<<<<<< - * - * if offset > 0: - */ - __pyx_v_offset = (__pyx_v_aligned_p % __pyx_v_alignment); - - /* "View.MemoryView":306 - * offset = 
aligned_p % alignment - * - * if offset > 0: # <<<<<<<<<<<<<< - * aligned_p += alignment - offset - * - */ - __pyx_t_1 = ((__pyx_v_offset > 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":307 - * - * if offset > 0: - * aligned_p += alignment - offset # <<<<<<<<<<<<<< - * - * return aligned_p - */ - __pyx_v_aligned_p = (__pyx_v_aligned_p + (__pyx_v_alignment - __pyx_v_offset)); - - /* "View.MemoryView":306 - * offset = aligned_p % alignment - * - * if offset > 0: # <<<<<<<<<<<<<< - * aligned_p += alignment - offset - * - */ - } - - /* "View.MemoryView":309 - * aligned_p += alignment - offset - * - * return aligned_p # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = ((void *)__pyx_v_aligned_p); - goto __pyx_L0; - - /* "View.MemoryView":298 - * - * @cname('__pyx_align_pointer') - * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<< - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":345 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - -/* Python wrapper */ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_obj = 0; - int __pyx_v_flags; - int __pyx_v_dtype_is_object; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_obj,&__pyx_n_s_flags,&__pyx_n_s_dtype_is_object,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = 
PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_obj)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_flags)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, 1); __PYX_ERR(1, 345, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dtype_is_object); - if (value) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 345, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_obj = values[0]; - __pyx_v_flags = __Pyx_PyInt_As_int(values[1]); if (unlikely((__pyx_v_flags == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 345, __pyx_L3_error) - if (values[2]) { - __pyx_v_dtype_is_object = __Pyx_PyObject_IsTrue(values[2]); if (unlikely((__pyx_v_dtype_is_object == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 345, __pyx_L3_error) - } else { - __pyx_v_dtype_is_object = ((int)0); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, 
PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 345, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_obj, __pyx_v_flags, __pyx_v_dtype_is_object); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - - /* "View.MemoryView":346 - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj # <<<<<<<<<<<<<< - * self.flags = flags - * if type(self) is memoryview or obj is not None: - */ - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - __Pyx_GOTREF(__pyx_v_self->obj); - __Pyx_DECREF(__pyx_v_self->obj); - __pyx_v_self->obj = __pyx_v_obj; - - /* "View.MemoryView":347 - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj - * self.flags = flags # <<<<<<<<<<<<<< - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - */ - __pyx_v_self->flags = __pyx_v_flags; - - /* "View.MemoryView":348 - * self.obj = obj - * self.flags = flags - * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - __pyx_t_2 = (((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))) == ((PyObject 
*)__pyx_memoryview_type)); - __pyx_t_3 = (__pyx_t_2 != 0); - if (!__pyx_t_3) { - } else { - __pyx_t_1 = __pyx_t_3; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_3 = (__pyx_v_obj != Py_None); - __pyx_t_2 = (__pyx_t_3 != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (__pyx_t_1) { - - /* "View.MemoryView":349 - * self.flags = flags - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) # <<<<<<<<<<<<<< - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None - */ - __pyx_t_4 = __Pyx_GetBuffer(__pyx_v_obj, (&__pyx_v_self->view), __pyx_v_flags); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 349, __pyx_L1_error) - - /* "View.MemoryView":350 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_t_1 = ((((PyObject *)__pyx_v_self->view.obj) == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":351 - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = Py_None; - - /* "View.MemoryView":352 - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * global __pyx_memoryview_thread_locks_used - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":350 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - } - - /* "View.MemoryView":348 - * self.obj = obj - * self.flags = flags - * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - } - - /* 
"View.MemoryView":355 - * - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - __pyx_t_1 = ((__pyx_memoryview_thread_locks_used < 8) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":356 - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - */ - __pyx_v_self->lock = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - - /* "View.MemoryView":357 - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 # <<<<<<<<<<<<<< - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used + 1); - - /* "View.MemoryView":355 - * - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - } - - /* "View.MemoryView":358 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":359 - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() # <<<<<<<<<<<<<< - * if self.lock is 
NULL: - * raise MemoryError - */ - __pyx_v_self->lock = PyThread_allocate_lock(); - - /* "View.MemoryView":360 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":361 - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - PyErr_NoMemory(); __PYX_ERR(1, 361, __pyx_L1_error) - - /* "View.MemoryView":360 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - } - - /* "View.MemoryView":358 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - } - - /* "View.MemoryView":363 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":364 - * - * if flags & PyBUF_FORMAT: - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') # <<<<<<<<<<<<<< - * else: - * self.dtype_is_object = dtype_is_object - */ - __pyx_t_2 = (((__pyx_v_self->view.format[0]) == 'O') != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_2 = (((__pyx_v_self->view.format[1]) == '\x00') != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L11_bool_binop_done:; - __pyx_v_self->dtype_is_object = __pyx_t_1; - - /* "View.MemoryView":363 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and 
self.view.format[1] == b'\0') - * else: - */ - goto __pyx_L10; - } - - /* "View.MemoryView":366 - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - * self.dtype_is_object = dtype_is_object # <<<<<<<<<<<<<< - * - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( - */ - /*else*/ { - __pyx_v_self->dtype_is_object = __pyx_v_dtype_is_object; - } - __pyx_L10:; - - /* "View.MemoryView":368 - * self.dtype_is_object = dtype_is_object - * - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( # <<<<<<<<<<<<<< - * &self.acquisition_count[0], sizeof(__pyx_atomic_int)) - * self.typeinfo = NULL - */ - __pyx_v_self->acquisition_count_aligned_p = ((__pyx_atomic_int *)__pyx_align_pointer(((void *)(&(__pyx_v_self->acquisition_count[0]))), (sizeof(__pyx_atomic_int)))); - - /* "View.MemoryView":370 - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( - * &self.acquisition_count[0], sizeof(__pyx_atomic_int)) - * self.typeinfo = NULL # <<<<<<<<<<<<<< - * - * def __dealloc__(memoryview self): - */ - __pyx_v_self->typeinfo = NULL; - - /* "View.MemoryView":345 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":372 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - -/* Python wrapper */ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self) { - 
__Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self) { - int __pyx_v_i; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - PyThread_type_lock __pyx_t_6; - PyThread_type_lock __pyx_t_7; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":373 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - __pyx_t_1 = (__pyx_v_self->obj != Py_None); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":374 - * def __dealloc__(memoryview self): - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) # <<<<<<<<<<<<<< - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - */ - __Pyx_ReleaseBuffer((&__pyx_v_self->view)); - - /* "View.MemoryView":373 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":375 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - __pyx_t_2 = ((((Py_buffer *)(&__pyx_v_self->view))->obj == Py_None) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":377 - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - * (<__pyx_buffer *> &self.view).obj = NULL # <<<<<<<<<<<<<< - * Py_DECREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = 
NULL; - - /* "View.MemoryView":378 - * - * (<__pyx_buffer *> &self.view).obj = NULL - * Py_DECREF(Py_None) # <<<<<<<<<<<<<< - * - * cdef int i - */ - Py_DECREF(Py_None); - - /* "View.MemoryView":375 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - } - __pyx_L3:; - - /* "View.MemoryView":382 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - __pyx_t_2 = ((__pyx_v_self->lock != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":383 - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): # <<<<<<<<<<<<<< - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - */ - __pyx_t_3 = __pyx_memoryview_thread_locks_used; - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { - __pyx_v_i = __pyx_t_5; - - /* "View.MemoryView":384 - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - __pyx_t_2 = (((__pyx_memoryview_thread_locks[__pyx_v_i]) == __pyx_v_self->lock) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":385 - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 # <<<<<<<<<<<<<< - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used - 1); - - /* 
"View.MemoryView":386 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - __pyx_t_2 = ((__pyx_v_i != __pyx_memoryview_thread_locks_used) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":388 - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) # <<<<<<<<<<<<<< - * break - * else: - */ - __pyx_t_6 = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - __pyx_t_7 = (__pyx_memoryview_thread_locks[__pyx_v_i]); - - /* "View.MemoryView":387 - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break - */ - (__pyx_memoryview_thread_locks[__pyx_v_i]) = __pyx_t_6; - (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]) = __pyx_t_7; - - /* "View.MemoryView":386 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - } - - /* "View.MemoryView":389 - * __pyx_memoryview_thread_locks[i], 
__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break # <<<<<<<<<<<<<< - * else: - * PyThread_free_lock(self.lock) - */ - goto __pyx_L6_break; - - /* "View.MemoryView":384 - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - } - } - /*else*/ { - - /* "View.MemoryView":391 - * break - * else: - * PyThread_free_lock(self.lock) # <<<<<<<<<<<<<< - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - */ - PyThread_free_lock(__pyx_v_self->lock); - } - __pyx_L6_break:; - - /* "View.MemoryView":382 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - } - - /* "View.MemoryView":372 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":393 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - Py_ssize_t __pyx_v_dim; - char *__pyx_v_itemp; - PyObject *__pyx_v_idx = NULL; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - Py_ssize_t __pyx_t_6; - char *__pyx_t_7; - int 
__pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_item_pointer", 0); - - /* "View.MemoryView":395 - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf # <<<<<<<<<<<<<< - * - * for dim, idx in enumerate(index): - */ - __pyx_v_itemp = ((char *)__pyx_v_self->view.buf); - - /* "View.MemoryView":397 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - __pyx_t_1 = 0; - if (likely(PyList_CheckExact(__pyx_v_index)) || PyTuple_CheckExact(__pyx_v_index)) { - __pyx_t_2 = __pyx_v_index; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 397, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 397, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 397, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 397, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 397, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 397, __pyx_L1_error) 
- __Pyx_GOTREF(__pyx_t_5); - #endif - } - } else { - __pyx_t_5 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_5)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 397, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_5); - } - __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_5); - __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_1; - __pyx_t_1 = (__pyx_t_1 + 1); - - /* "View.MemoryView":398 - * - * for dim, idx in enumerate(index): - * itemp = pybuffer_index(&self.view, itemp, idx, dim) # <<<<<<<<<<<<<< - * - * return itemp - */ - __pyx_t_6 = __Pyx_PyIndex_AsSsize_t(__pyx_v_idx); if (unlikely((__pyx_t_6 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 398, __pyx_L1_error) - __pyx_t_7 = __pyx_pybuffer_index((&__pyx_v_self->view), __pyx_v_itemp, __pyx_t_6, __pyx_v_dim); if (unlikely(__pyx_t_7 == ((char *)NULL))) __PYX_ERR(1, 398, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_7; - - /* "View.MemoryView":397 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":400 - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - * return itemp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_itemp; - goto __pyx_L0; - - /* "View.MemoryView":393 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.get_item_pointer", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_idx); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* 
"View.MemoryView":403 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index); /*proto*/ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_indices = NULL; - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - char *__pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":404 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - __pyx_t_1 = (__pyx_v_index == __pyx_builtin_Ellipsis); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":405 - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: - * return self # <<<<<<<<<<<<<< - * - * have_slices, indices = _unellipsify(index, self.view.ndim) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __pyx_r = ((PyObject *)__pyx_v_self); - goto __pyx_L0; - - /* "View.MemoryView":404 - * - * def __getitem__(memoryview self, 
object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - } - - /* "View.MemoryView":407 - * return self - * - * have_slices, indices = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * cdef char *itemp - */ - __pyx_t_3 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 407, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (likely(__pyx_t_3 != Py_None)) { - PyObject* sequence = __pyx_t_3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 407, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - #else - __pyx_t_4 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 407, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 407, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } else { - __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 407, __pyx_L1_error) - } - __pyx_v_have_slices = __pyx_t_4; - __pyx_t_4 = 0; - __pyx_v_indices = __pyx_t_5; - __pyx_t_5 = 0; - - /* "View.MemoryView":410 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 410, __pyx_L1_error) - if (__pyx_t_2) { - - /* "View.MemoryView":411 - * cdef char *itemp - * if have_slices: - * return memview_slice(self, indices) # <<<<<<<<<<<<<< - * else: - * itemp = self.get_item_pointer(indices) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = ((PyObject *)__pyx_memview_slice(__pyx_v_self, __pyx_v_indices)); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(1, 411, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":410 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - } - - /* "View.MemoryView":413 - * return memview_slice(self, indices) - * else: - * itemp = self.get_item_pointer(indices) # <<<<<<<<<<<<<< - * return self.convert_item_to_object(itemp) - * - */ - /*else*/ { - __pyx_t_6 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_indices); if (unlikely(__pyx_t_6 == ((char *)NULL))) __PYX_ERR(1, 413, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_6; - - /* "View.MemoryView":414 - * else: - * itemp = self.get_item_pointer(indices) - * return self.convert_item_to_object(itemp) # <<<<<<<<<<<<<< - * - * def __setitem__(memoryview self, object index, object value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->convert_item_to_object(__pyx_v_self, __pyx_v_itemp); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 414, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":403 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_indices); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":416 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< 
- * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") - */ - -/* Python wrapper */ -static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_obj = NULL; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - __Pyx_INCREF(__pyx_v_index); - - /* "View.MemoryView":417 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError("Cannot assign to read-only memoryview") - * - */ - __pyx_t_1 = (__pyx_v_self->view.readonly != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":418 - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<< - * - * have_slices, index = _unellipsify(index, self.view.ndim) - */ - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 418, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 418, __pyx_L1_error) - - /* "View.MemoryView":417 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError("Cannot assign to read-only memoryview") - * - */ - } - - /* "View.MemoryView":420 - * raise TypeError("Cannot assign to read-only memoryview") - * - * have_slices, index = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * if have_slices: - */ - __pyx_t_2 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (likely(__pyx_t_2 != Py_None)) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 420, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 420, __pyx_L1_error) - } - __pyx_v_have_slices = __pyx_t_3; - __pyx_t_3 = 0; - __Pyx_DECREF_SET(__pyx_v_index, __pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":422 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); 
if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 422, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":423 - * - * if have_slices: - * obj = self.is_slice(value) # <<<<<<<<<<<<<< - * if obj: - * self.setitem_slice_assignment(self[index], obj) - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->is_slice(__pyx_v_self, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 423, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_obj = __pyx_t_2; - __pyx_t_2 = 0; - - /* "View.MemoryView":424 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_obj); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 424, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":425 - * obj = self.is_slice(value) - * if obj: - * self.setitem_slice_assignment(self[index], obj) # <<<<<<<<<<<<<< - * else: - * self.setitem_slice_assign_scalar(self[index], value) - */ - __pyx_t_2 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 425, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assignment(__pyx_v_self, __pyx_t_2, __pyx_v_obj); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 425, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "View.MemoryView":424 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - goto __pyx_L5; - } - - /* "View.MemoryView":427 - * self.setitem_slice_assignment(self[index], obj) - * else: - * self.setitem_slice_assign_scalar(self[index], value) # <<<<<<<<<<<<<< - * else: - * self.setitem_indexed(index, value) - */ - /*else*/ { - __pyx_t_4 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), 
__pyx_v_index); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_memoryview_type))))) __PYX_ERR(1, 427, __pyx_L1_error) - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assign_scalar(__pyx_v_self, ((struct __pyx_memoryview_obj *)__pyx_t_4), __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_L5:; - - /* "View.MemoryView":422 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":429 - * self.setitem_slice_assign_scalar(self[index], value) - * else: - * self.setitem_indexed(index, value) # <<<<<<<<<<<<<< - * - * cdef is_slice(self, obj): - */ - /*else*/ { - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_indexed(__pyx_v_self, __pyx_v_index, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_L4:; - - /* "View.MemoryView":416 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.memoryview.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_obj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_RefNannyFinishContext(); - return 
__pyx_r; -} - -/* "View.MemoryView":431 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_slice", 0); - __Pyx_INCREF(__pyx_v_obj); - - /* "View.MemoryView":432 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_obj, __pyx_memoryview_type); - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":433 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_4, &__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_5); - /*try:*/ { - - /* "View.MemoryView":434 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_6 = __Pyx_PyInt_From_int(((__pyx_v_self->flags & (~PyBUF_WRITABLE)) | PyBUF_ANY_CONTIGUOUS)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 434, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_6); - - /* "View.MemoryView":435 - * try: - 
* obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) # <<<<<<<<<<<<<< - * except TypeError: - * return None - */ - __pyx_t_7 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 435, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - - /* "View.MemoryView":434 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_8 = PyTuple_New(3); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 434, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_obj); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_t_6); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_t_7); - __pyx_t_6 = 0; - __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_8, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 434, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF_SET(__pyx_v_obj, __pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":433 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - goto __pyx_L9_try_end; - __pyx_L4_error:; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "View.MemoryView":436 - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - * except TypeError: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_9 = 
__Pyx_PyErr_ExceptionMatches(__pyx_builtin_TypeError); - if (__pyx_t_9) { - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_6) < 0) __PYX_ERR(1, 436, __pyx_L6_except_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GOTREF(__pyx_t_8); - __Pyx_GOTREF(__pyx_t_6); - - /* "View.MemoryView":437 - * self.dtype_is_object) - * except TypeError: - * return None # <<<<<<<<<<<<<< - * - * return obj - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L7_except_return; - } - goto __pyx_L6_except_error; - __pyx_L6_except_error:; - - /* "View.MemoryView":433 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L1_error; - __pyx_L7_except_return:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L0; - __pyx_L9_try_end:; - } - - /* "View.MemoryView":432 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - } - - /* "View.MemoryView":439 - * return None - * - * return obj # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assignment(self, dst, src): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_obj); - __pyx_r = __pyx_v_obj; - goto __pyx_L0; - - /* "View.MemoryView":431 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not 
isinstance(obj, memoryview): - * try: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_obj); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":441 - * return obj - * - * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src) { - __Pyx_memviewslice __pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_src_slice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - __Pyx_memviewslice *__pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assignment", 0); - - /* "View.MemoryView":445 - * cdef __Pyx_memviewslice src_slice - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<< - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) - */ - if (!(likely(((__pyx_v_src) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_src, __pyx_memoryview_type))))) __PYX_ERR(1, 445, __pyx_L1_error) - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_src), (&__pyx_v_src_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 445, __pyx_L1_error) - - /* "View.MemoryView":446 - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], - * get_slice_from_memview(dst, &dst_slice)[0], # <<<<<<<<<<<<<< - * 
src.ndim, dst.ndim, self.dtype_is_object) - * - */ - if (!(likely(((__pyx_v_dst) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_dst, __pyx_memoryview_type))))) __PYX_ERR(1, 446, __pyx_L1_error) - __pyx_t_2 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_dst), (&__pyx_v_dst_slice)); if (unlikely(__pyx_t_2 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 446, __pyx_L1_error) - - /* "View.MemoryView":447 - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_src, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_dst, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":445 - * cdef __Pyx_memviewslice src_slice - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<< - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) - */ - __pyx_t_6 = __pyx_memoryview_copy_contents((__pyx_t_1[0]), (__pyx_t_2[0]), __pyx_t_4, __pyx_t_5, __pyx_v_self->dtype_is_object); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 445, __pyx_L1_error) - - /* "View.MemoryView":441 - * return obj - * - * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef 
__Pyx_memviewslice src_slice - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assignment", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":449 - * src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value) { - int __pyx_v_array[0x80]; - void *__pyx_v_tmp; - void *__pyx_v_item; - __Pyx_memviewslice *__pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_tmp_slice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - char const *__pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assign_scalar", 0); - - /* "View.MemoryView":451 - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - * cdef int array[128] - * cdef void *tmp = NULL # <<<<<<<<<<<<<< - * cdef void *item - * - */ - __pyx_v_tmp = NULL; - - /* "View.MemoryView":456 - * cdef __Pyx_memviewslice *dst_slice - * cdef __Pyx_memviewslice tmp_slice - * dst_slice = get_slice_from_memview(dst, &tmp_slice) # <<<<<<<<<<<<<< - * - * if self.view.itemsize > sizeof(array): - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_dst, 
(&__pyx_v_tmp_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 456, __pyx_L1_error) - __pyx_v_dst_slice = __pyx_t_1; - - /* "View.MemoryView":458 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - __pyx_t_2 = ((((size_t)__pyx_v_self->view.itemsize) > (sizeof(__pyx_v_array))) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":459 - * - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) # <<<<<<<<<<<<<< - * if tmp == NULL: - * raise MemoryError - */ - __pyx_v_tmp = PyMem_Malloc(__pyx_v_self->view.itemsize); - - /* "View.MemoryView":460 - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - __pyx_t_2 = ((__pyx_v_tmp == NULL) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":461 - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * item = tmp - * else: - */ - PyErr_NoMemory(); __PYX_ERR(1, 461, __pyx_L1_error) - - /* "View.MemoryView":460 - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - } - - /* "View.MemoryView":462 - * if tmp == NULL: - * raise MemoryError - * item = tmp # <<<<<<<<<<<<<< - * else: - * item = array - */ - __pyx_v_item = __pyx_v_tmp; - - /* "View.MemoryView":458 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":464 - * item = tmp - * else: - * item = array # <<<<<<<<<<<<<< - * - * try: - */ - /*else*/ { - __pyx_v_item = ((void *)__pyx_v_array); - } - __pyx_L3:; - - /* 
"View.MemoryView":466 - * item = array - * - * try: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * (<PyObject **> item)[0] = value - */ - /*try:*/ { - - /* "View.MemoryView":467 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * (<PyObject **> item)[0] = value - * else: - */ - __pyx_t_2 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":468 - * try: - * if self.dtype_is_object: - * (<PyObject **> item)[0] = value # <<<<<<<<<<<<<< - * else: - * self.assign_item_from_object(<char *> item, value) - */ - (((PyObject **)__pyx_v_item)[0]) = ((PyObject *)__pyx_v_value); - - /* "View.MemoryView":467 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * (<PyObject **> item)[0] = value - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":470 - * (<PyObject **> item)[0] = value - * else: - * self.assign_item_from_object(<char *> item, value) # <<<<<<<<<<<<<< - * - * - */ - /*else*/ { - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, ((char *)__pyx_v_item), __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 470, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L8:; - - /* "View.MemoryView":474 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - __pyx_t_2 = ((__pyx_v_self->view.suboffsets != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":475 - * - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) # <<<<<<<<<<<<<< - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - * <char *> item, self.dtype_is_object) - */ - __pyx_t_3 = assert_direct_dimensions(__pyx_v_self->view.suboffsets, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 475, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* 
"View.MemoryView":474 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - } - - /* "View.MemoryView":476 - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, # <<<<<<<<<<<<<< - * item, self.dtype_is_object) - * finally: - */ - __pyx_memoryview_slice_assign_scalar(__pyx_v_dst_slice, __pyx_v_dst->view.ndim, __pyx_v_self->view.itemsize, __pyx_v_item, __pyx_v_self->dtype_is_object); - } - - /* "View.MemoryView":479 - * item, self.dtype_is_object) - * finally: - * PyMem_Free(tmp) # <<<<<<<<<<<<<< - * - * cdef setitem_indexed(self, index, value): - */ - /*finally:*/ { - /*normal exit:*/{ - PyMem_Free(__pyx_v_tmp); - goto __pyx_L7; - } - __pyx_L6_error:; - /*exception exit:*/{ - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12); - if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0)) __Pyx_ErrFetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_7); - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_10); - __Pyx_XGOTREF(__pyx_t_11); - __Pyx_XGOTREF(__pyx_t_12); - __pyx_t_4 = __pyx_lineno; __pyx_t_5 = __pyx_clineno; __pyx_t_6 = __pyx_filename; - { - PyMem_Free(__pyx_v_tmp); - } - if (PY_MAJOR_VERSION >= 3) { - __Pyx_XGIVEREF(__pyx_t_10); - __Pyx_XGIVEREF(__pyx_t_11); - __Pyx_XGIVEREF(__pyx_t_12); - __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12); - } - __Pyx_XGIVEREF(__pyx_t_7); - __Pyx_XGIVEREF(__pyx_t_8); - __Pyx_XGIVEREF(__pyx_t_9); - __Pyx_ErrRestore(__pyx_t_7, __pyx_t_8, 
__pyx_t_9); - __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __pyx_lineno = __pyx_t_4; __pyx_clineno = __pyx_t_5; __pyx_filename = __pyx_t_6; - goto __pyx_L1_error; - } - __pyx_L7:; - } - - /* "View.MemoryView":449 - * src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assign_scalar", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":481 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) - */ - -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - char *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_indexed", 0); - - /* "View.MemoryView":482 - * - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) # <<<<<<<<<<<<<< - * self.assign_item_from_object(itemp, value) - * - */ - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_index); if (unlikely(__pyx_t_1 == ((char *)NULL))) __PYX_ERR(1, 482, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_1; - - /* "View.MemoryView":483 - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = 
self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 483, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":481 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_indexed", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":485 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_v_struct = NULL; - PyObject *__pyx_v_bytesitem = 0; - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - PyObject *__pyx_t_9 = NULL; - size_t __pyx_t_10; - int __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - 
__Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":488 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef bytes bytesitem - * - */ - __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 488, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":491 - * cdef bytes bytesitem - * - * bytesitem = itemp[:self.view.itemsize] # <<<<<<<<<<<<<< - * try: - * result = struct.unpack(self.view.format, bytesitem) - */ - __pyx_t_1 = __Pyx_PyBytes_FromStringAndSize(__pyx_v_itemp + 0, __pyx_v_self->view.itemsize - 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 491, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_bytesitem = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":492 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_2, &__pyx_t_3, &__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - /*try:*/ { - - /* "View.MemoryView":493 - * bytesitem = itemp[:self.view.itemsize] - * try: - * result = struct.unpack(self.view.format, bytesitem) # <<<<<<<<<<<<<< - * except struct.error: - * raise ValueError("Unable to convert item to object") - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_unpack); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_7 = 
PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_8 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_5)) { - PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem}; - __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) { - PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem}; - __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else - #endif - { - __pyx_t_9 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_9); - if (__pyx_t_7) { - __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_7); __pyx_t_7 = NULL; - } - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_9, 0+__pyx_t_8, __pyx_t_6); - __Pyx_INCREF(__pyx_v_bytesitem); - __Pyx_GIVEREF(__pyx_v_bytesitem); - PyTuple_SET_ITEM(__pyx_t_9, 1+__pyx_t_8, __pyx_v_bytesitem); - __pyx_t_6 = 0; - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_9, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_result = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":492 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * 
except struct.error: - */ - } - - /* "View.MemoryView":497 - * raise ValueError("Unable to convert item to object") - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - /*else:*/ { - __pyx_t_10 = strlen(__pyx_v_self->view.format); - __pyx_t_11 = ((__pyx_t_10 == 1) != 0); - if (__pyx_t_11) { - - /* "View.MemoryView":498 - * else: - * if len(self.view.format) == 1: - * return result[0] # <<<<<<<<<<<<<< - * return result - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_result, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 498, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L6_except_return; - - /* "View.MemoryView":497 - * raise ValueError("Unable to convert item to object") - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - } - - /* "View.MemoryView":499 - * if len(self.view.format) == 1: - * return result[0] - * return result # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_result); - __pyx_r = __pyx_v_result; - goto __pyx_L6_except_return; - } - __pyx_L3_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "View.MemoryView":494 - * try: - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: # <<<<<<<<<<<<<< - * raise ValueError("Unable to convert item to object") - * else: - */ - __Pyx_ErrFetch(&__pyx_t_1, &__pyx_t_5, &__pyx_t_9); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_error); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 494, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = 
__Pyx_PyErr_GivenExceptionMatches(__pyx_t_1, __pyx_t_6); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_ErrRestore(__pyx_t_1, __pyx_t_5, __pyx_t_9); - __pyx_t_1 = 0; __pyx_t_5 = 0; __pyx_t_9 = 0; - if (__pyx_t_8) { - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_9, &__pyx_t_5, &__pyx_t_1) < 0) __PYX_ERR(1, 494, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GOTREF(__pyx_t_1); - - /* "View.MemoryView":495 - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - * raise ValueError("Unable to convert item to object") # <<<<<<<<<<<<<< - * else: - * if len(self.view.format) == 1: - */ - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 495, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_Raise(__pyx_t_6, 0, 0, 0); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __PYX_ERR(1, 495, __pyx_L5_except_error) - } - goto __pyx_L5_except_error; - __pyx_L5_except_error:; - - /* "View.MemoryView":492 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L1_error; - __pyx_L6_except_return:; - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L0; - } - - /* "View.MemoryView":485 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_L1_error:; - 
__Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesitem); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":501 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_v_struct = NULL; - char __pyx_v_c; - PyObject *__pyx_v_bytesvalue = 0; - Py_ssize_t __pyx_v_i; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - Py_ssize_t __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - char *__pyx_t_11; - char *__pyx_t_12; - char *__pyx_t_13; - char *__pyx_t_14; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":504 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef char c - * cdef bytes bytesvalue - */ - __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 504, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":509 - * cdef Py_ssize_t i - * - * if 
isinstance(value, tuple): # <<<<<<<<<<<<<< - * bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - __pyx_t_2 = PyTuple_Check(__pyx_v_value); - __pyx_t_3 = (__pyx_t_2 != 0); - if (__pyx_t_3) { - - /* "View.MemoryView":510 - * - * if isinstance(value, tuple): - * bytesvalue = struct.pack(self.view.format, *value) # <<<<<<<<<<<<<< - * else: - * bytesvalue = struct.pack(self.view.format, value) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PySequence_Tuple(__pyx_v_value); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = PyNumber_Add(__pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_6, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 510, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":509 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - goto __pyx_L3; 
- } - - /* "View.MemoryView":512 - * bytesvalue = struct.pack(self.view.format, *value) - * else: - * bytesvalue = struct.pack(self.view.format, value) # <<<<<<<<<<<<<< - * - * for i, c in enumerate(bytesvalue): - */ - /*else*/ { - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_7 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_6)) { - PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value}; - __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_6)) { - PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value}; - __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else - #endif - { - __pyx_t_8 = PyTuple_New(2+__pyx_t_7); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - if (__pyx_t_5) { - __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_5); __pyx_t_5 = NULL; - } - __Pyx_GIVEREF(__pyx_t_1); - 
PyTuple_SET_ITEM(__pyx_t_8, 0+__pyx_t_7, __pyx_t_1); - __Pyx_INCREF(__pyx_v_value); - __Pyx_GIVEREF(__pyx_v_value); - PyTuple_SET_ITEM(__pyx_t_8, 1+__pyx_t_7, __pyx_v_value); - __pyx_t_1 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_8, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 512, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":514 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_9 = 0; - if (unlikely(__pyx_v_bytesvalue == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' is not iterable"); - __PYX_ERR(1, 514, __pyx_L1_error) - } - __Pyx_INCREF(__pyx_v_bytesvalue); - __pyx_t_10 = __pyx_v_bytesvalue; - __pyx_t_12 = PyBytes_AS_STRING(__pyx_t_10); - __pyx_t_13 = (__pyx_t_12 + PyBytes_GET_SIZE(__pyx_t_10)); - for (__pyx_t_14 = __pyx_t_12; __pyx_t_14 < __pyx_t_13; __pyx_t_14++) { - __pyx_t_11 = __pyx_t_14; - __pyx_v_c = (__pyx_t_11[0]); - - /* "View.MemoryView":515 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - __pyx_v_i = __pyx_t_9; - - /* "View.MemoryView":514 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_9 = (__pyx_t_9 + 1); - - /* "View.MemoryView":515 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - (__pyx_v_itemp[__pyx_v_i]) = __pyx_v_c; - } - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "View.MemoryView":501 - * 
return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("View.MemoryView.memoryview.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesvalue); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":518 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") - */ - -/* Python wrapper */ -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject 
*__pyx_t_3 = NULL; - Py_ssize_t *__pyx_t_4; - char *__pyx_t_5; - void *__pyx_t_6; - int __pyx_t_7; - Py_ssize_t __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (__pyx_v_info == NULL) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":519 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - */ - __pyx_t_2 = ((__pyx_v_flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_2 = (__pyx_v_self->view.readonly != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":520 - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<< - * - * if flags & PyBUF_ND: - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__11, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 520, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 520, __pyx_L1_error) - - /* "View.MemoryView":519 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - */ - } - - /* "View.MemoryView":522 - * raise ValueError("Cannot create writable memory view from read-only memoryview") - 
* - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = self.view.shape - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_ND) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":523 - * - * if flags & PyBUF_ND: - * info.shape = self.view.shape # <<<<<<<<<<<<<< - * else: - * info.shape = NULL - */ - __pyx_t_4 = __pyx_v_self->view.shape; - __pyx_v_info->shape = __pyx_t_4; - - /* "View.MemoryView":522 - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = self.view.shape - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":525 - * info.shape = self.view.shape - * else: - * info.shape = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_STRIDES: - */ - /*else*/ { - __pyx_v_info->shape = NULL; - } - __pyx_L6:; - - /* "View.MemoryView":527 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_STRIDES) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":528 - * - * if flags & PyBUF_STRIDES: - * info.strides = self.view.strides # <<<<<<<<<<<<<< - * else: - * info.strides = NULL - */ - __pyx_t_4 = __pyx_v_self->view.strides; - __pyx_v_info->strides = __pyx_t_4; - - /* "View.MemoryView":527 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - goto __pyx_L7; - } - - /* "View.MemoryView":530 - * info.strides = self.view.strides - * else: - * info.strides = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_INDIRECT: - */ - /*else*/ { - __pyx_v_info->strides = NULL; - } - __pyx_L7:; - - /* "View.MemoryView":532 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_INDIRECT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":533 - * - * if flags & PyBUF_INDIRECT: - * info.suboffsets = 
self.view.suboffsets # <<<<<<<<<<<<<< - * else: - * info.suboffsets = NULL - */ - __pyx_t_4 = __pyx_v_self->view.suboffsets; - __pyx_v_info->suboffsets = __pyx_t_4; - - /* "View.MemoryView":532 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":535 - * info.suboffsets = self.view.suboffsets - * else: - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - /*else*/ { - __pyx_v_info->suboffsets = NULL; - } - __pyx_L8:; - - /* "View.MemoryView":537 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":538 - * - * if flags & PyBUF_FORMAT: - * info.format = self.view.format # <<<<<<<<<<<<<< - * else: - * info.format = NULL - */ - __pyx_t_5 = __pyx_v_self->view.format; - __pyx_v_info->format = __pyx_t_5; - - /* "View.MemoryView":537 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":540 - * info.format = self.view.format - * else: - * info.format = NULL # <<<<<<<<<<<<<< - * - * info.buf = self.view.buf - */ - /*else*/ { - __pyx_v_info->format = NULL; - } - __pyx_L9:; - - /* "View.MemoryView":542 - * info.format = NULL - * - * info.buf = self.view.buf # <<<<<<<<<<<<<< - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - */ - __pyx_t_6 = __pyx_v_self->view.buf; - __pyx_v_info->buf = __pyx_t_6; - - /* "View.MemoryView":543 - * - * info.buf = self.view.buf - * info.ndim = self.view.ndim # <<<<<<<<<<<<<< - * info.itemsize = self.view.itemsize - * info.len = self.view.len - */ - __pyx_t_7 = __pyx_v_self->view.ndim; - __pyx_v_info->ndim = __pyx_t_7; - - /* "View.MemoryView":544 - * info.buf = self.view.buf - * 
info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize # <<<<<<<<<<<<<< - * info.len = self.view.len - * info.readonly = self.view.readonly - */ - __pyx_t_8 = __pyx_v_self->view.itemsize; - __pyx_v_info->itemsize = __pyx_t_8; - - /* "View.MemoryView":545 - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - * info.len = self.view.len # <<<<<<<<<<<<<< - * info.readonly = self.view.readonly - * info.obj = self - */ - __pyx_t_8 = __pyx_v_self->view.len; - __pyx_v_info->len = __pyx_t_8; - - /* "View.MemoryView":546 - * info.itemsize = self.view.itemsize - * info.len = self.view.len - * info.readonly = self.view.readonly # <<<<<<<<<<<<<< - * info.obj = self - * - */ - __pyx_t_1 = __pyx_v_self->view.readonly; - __pyx_v_info->readonly = __pyx_t_1; - - /* "View.MemoryView":547 - * info.len = self.view.len - * info.readonly = self.view.readonly - * info.obj = self # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":518 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - 
__Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":553 - * - * @property - * def T(self): # <<<<<<<<<<<<<< - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":554 - * @property - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) # <<<<<<<<<<<<<< - * transpose_memslice(&result.from_slice) - * return result - */ - __pyx_t_1 = __pyx_memoryview_copy_object(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 554, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_memoryviewslice_type))))) __PYX_ERR(1, 554, __pyx_L1_error) - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":555 - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) # <<<<<<<<<<<<<< - * 
return result - * - */ - __pyx_t_2 = __pyx_memslice_transpose((&__pyx_v_result->from_slice)); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 555, __pyx_L1_error) - - /* "View.MemoryView":556 - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - * return result # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":553 - * - * @property - * def T(self): # <<<<<<<<<<<<<< - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.T.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":559 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":560 - * @property - * def base(self): - * return self.obj # <<<<<<<<<<<<<< - * - 
* @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->obj); - __pyx_r = __pyx_v_self->obj; - goto __pyx_L0; - - /* "View.MemoryView":559 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":563 - * - * @property - * def shape(self): # <<<<<<<<<<<<<< - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_length; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":564 - * @property - * def shape(self): - * return tuple([length for length in self.view.shape[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_4 = __pyx_v_self->view.shape; __pyx_t_4 < 
__pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_v_length = (__pyx_t_2[0]); - __pyx_t_5 = PyInt_FromSsize_t(__pyx_v_length); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_t_5 = PyList_AsTuple(((PyObject*)__pyx_t_1)); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "View.MemoryView":563 - * - * @property - * def shape(self): # <<<<<<<<<<<<<< - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.shape.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":567 - * - * @property - * def strides(self): # <<<<<<<<<<<<<< - * if self.view.strides == NULL: - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_stride; - PyObject *__pyx_r = NULL; - 
__Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":568 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError("Buffer view does not expose strides") - */ - __pyx_t_1 = ((__pyx_v_self->view.strides == NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":570 - * if self.view.strides == NULL: - * - * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<< - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - */ - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 570, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 570, __pyx_L1_error) - - /* "View.MemoryView":568 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError("Buffer view does not expose strides") - */ - } - - /* "View.MemoryView":572 - * raise ValueError("Buffer view does not expose strides") - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = (__pyx_v_self->view.strides + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.strides; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_v_stride = (__pyx_t_3[0]); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_stride); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if 
(unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = PyList_AsTuple(((PyObject*)__pyx_t_2)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "View.MemoryView":567 - * - * @property - * def strides(self): # <<<<<<<<<<<<<< - * if self.view.strides == NULL: - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.strides.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":575 - * - * @property - * def suboffsets(self): # <<<<<<<<<<<<<< - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - Py_ssize_t *__pyx_t_6; - int __pyx_lineno = 0; - const char 
*__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":576 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - __pyx_t_1 = ((__pyx_v_self->view.suboffsets == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":577 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_tuple__13, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":576 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - } - - /* "View.MemoryView":579 - * return (-1,) * self.view.ndim - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = (__pyx_v_self->view.suboffsets + __pyx_v_self->view.ndim); - for (__pyx_t_6 = __pyx_v_self->view.suboffsets; __pyx_t_6 < __pyx_t_5; __pyx_t_6++) { - __pyx_t_4 = __pyx_t_6; - __pyx_v_suboffset = (__pyx_t_4[0]); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_suboffset); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_3, (PyObject*)__pyx_t_2))) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); 
__pyx_t_2 = 0; - } - __pyx_t_2 = PyList_AsTuple(((PyObject*)__pyx_t_3)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":575 - * - * @property - * def suboffsets(self): # <<<<<<<<<<<<<< - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.suboffsets.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":582 - * - * @property - * def ndim(self): # <<<<<<<<<<<<<< - * return self.view.ndim - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":583 - * @property - * def ndim(self): - * return self.view.ndim # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(1, 583, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":582 - * - * @property - * def ndim(self): # <<<<<<<<<<<<<< - * return self.view.ndim - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.ndim.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":586 - * - * @property - * def itemsize(self): # <<<<<<<<<<<<<< - * return self.view.itemsize - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":587 - * @property - * def itemsize(self): - * return self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 587, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":586 - * - * 
@property - * def itemsize(self): # <<<<<<<<<<<<<< - * return self.view.itemsize - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.itemsize.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":590 - * - * @property - * def nbytes(self): # <<<<<<<<<<<<<< - * return self.size * self.view.itemsize - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":591 - * @property - * def nbytes(self): - * return self.size * self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 591, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 591, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = 
PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 591, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":590 - * - * @property - * def nbytes(self): # <<<<<<<<<<<<<< - * return self.size * self.view.itemsize - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.nbytes.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":594 - * - * @property - * def size(self): # <<<<<<<<<<<<<< - * if self._size is None: - * result = 1 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":595 - * 
@property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - __pyx_t_1 = (__pyx_v_self->_size == Py_None); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":596 - * def size(self): - * if self._size is None: - * result = 1 # <<<<<<<<<<<<<< - * - * for length in self.view.shape[:self.view.ndim]: - */ - __Pyx_INCREF(__pyx_int_1); - __pyx_v_result = __pyx_int_1; - - /* "View.MemoryView":598 - * result = 1 - * - * for length in self.view.shape[:self.view.ndim]: # <<<<<<<<<<<<<< - * result *= length - * - */ - __pyx_t_4 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.shape; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_t_6 = PyInt_FromSsize_t((__pyx_t_3[0])); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 598, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_6); - __pyx_t_6 = 0; - - /* "View.MemoryView":599 - * - * for length in self.view.shape[:self.view.ndim]: - * result *= length # <<<<<<<<<<<<<< - * - * self._size = result - */ - __pyx_t_6 = PyNumber_InPlaceMultiply(__pyx_v_result, __pyx_v_length); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 599, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF_SET(__pyx_v_result, __pyx_t_6); - __pyx_t_6 = 0; - } - - /* "View.MemoryView":601 - * result *= length - * - * self._size = result # <<<<<<<<<<<<<< - * - * return self._size - */ - __Pyx_INCREF(__pyx_v_result); - __Pyx_GIVEREF(__pyx_v_result); - __Pyx_GOTREF(__pyx_v_self->_size); - __Pyx_DECREF(__pyx_v_self->_size); - __pyx_v_self->_size = __pyx_v_result; - - /* "View.MemoryView":595 - * @property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - } - - /* "View.MemoryView":603 - * self._size = result - * - * return self._size # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->_size); - __pyx_r = __pyx_v_self->_size; - 
goto __pyx_L0; - - /* "View.MemoryView":594 - * - * @property - * def size(self): # <<<<<<<<<<<<<< - * if self._size is None: - * result = 1 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.size.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":605 - * return self._size - * - * def __len__(self): # <<<<<<<<<<<<<< - * if self.view.ndim >= 1: - * return self.view.shape[0] - */ - -/* Python wrapper */ -static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":606 - * - * def __len__(self): - * if self.view.ndim >= 1: # <<<<<<<<<<<<<< - * return self.view.shape[0] - * - */ - __pyx_t_1 = ((__pyx_v_self->view.ndim >= 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":607 - * def __len__(self): - * if self.view.ndim >= 1: - * return self.view.shape[0] # <<<<<<<<<<<<<< - * - * return 0 - */ - __pyx_r = (__pyx_v_self->view.shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":606 - * - * def __len__(self): - * if self.view.ndim >= 1: # <<<<<<<<<<<<<< - * return self.view.shape[0] - * - */ - } - - /* 
"View.MemoryView":609 - * return self.view.shape[0] - * - * return 0 # <<<<<<<<<<<<<< - * - * def __repr__(self): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":605 - * return self._size - * - * def __len__(self): # <<<<<<<<<<<<<< - * if self.view.ndim >= 1: - * return self.view.shape[0] - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":611 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__, - * id(self)) - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":612 - * - * def __repr__(self): - * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); 
__pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":613 - * def __repr__(self): - * return "" % (self.base.__class__.__name__, - * id(self)) # <<<<<<<<<<<<<< - * - * def __str__(self): - */ - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_id, ((PyObject *)__pyx_v_self)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 613, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "View.MemoryView":612 - * - * def __repr__(self): - * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":611 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__, - * id(self)) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__repr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":615 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__,) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self); 
/*proto*/ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__str__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__str__", 0); - - /* "View.MemoryView":616 - * - * def __str__(self): - * return "" % (self.base.__class__.__name__,) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_object, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* 
"View.MemoryView":615 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__,) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.__str__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":619 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_c_contig (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice *__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_c_contig", 0); - - /* "View.MemoryView":622 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == 
((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 622, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":623 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'C', self.view.ndim) # <<<<<<<<<<<<<< - * - * def is_f_contig(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'C', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 623, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":619 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_c_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":625 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_f_contig (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice 
*__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_f_contig", 0); - - /* "View.MemoryView":628 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 628, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":629 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'F', self.view.ndim) # <<<<<<<<<<<<<< - * - * def copy(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'F', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 629, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":625 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_f_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":631 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - -/* Python wrapper */ 
-static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_mslice; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy", 0); - - /* "View.MemoryView":633 - * def copy(self): - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &mslice) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_F_CONTIGUOUS)); - - /* "View.MemoryView":635 - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - * - * slice_copy(self, &mslice) # <<<<<<<<<<<<<< - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_mslice)); - - /* "View.MemoryView":636 - * - * slice_copy(self, &mslice) - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_C_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_mslice), ((char *)"c"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_C_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 636, __pyx_L1_error) - __pyx_v_mslice = 
__pyx_t_1; - - /* "View.MemoryView":641 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &mslice) # <<<<<<<<<<<<<< - * - * def copy_fortran(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_mslice)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 641, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":631 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":643 - * return memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy_fortran (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - int __pyx_v_flags; - PyObject 
*__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy_fortran", 0); - - /* "View.MemoryView":645 - * def copy_fortran(self): - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &src) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_C_CONTIGUOUS)); - - /* "View.MemoryView":647 - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - * - * slice_copy(self, &src) # <<<<<<<<<<<<<< - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_src)); - - /* "View.MemoryView":648 - * - * slice_copy(self, &src) - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_F_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_src), ((char *)"fortran"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_F_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 648, __pyx_L1_error) - __pyx_v_dst = __pyx_t_1; - - /* "View.MemoryView":653 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &dst) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_dst)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 653, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":643 - * return memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - 
__Pyx_AddTraceback("View.MemoryView.memoryview.copy_fortran", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryview___reduce_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise 
TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryview_2__setstate_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = 
__Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":657 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - -static PyObject *__pyx_memoryview_new(PyObject *__pyx_v_o, int __pyx_v_flags, int __pyx_v_dtype_is_object, __Pyx_TypeInfo *__pyx_v_typeinfo) { - struct __pyx_memoryview_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_cwrapper", 0); - - /* "View.MemoryView":658 - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) # <<<<<<<<<<<<<< - * result.typeinfo = typeinfo - * return result - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 
658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_o); - __Pyx_GIVEREF(__pyx_v_o); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_o); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryview_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":659 - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_v_result->typeinfo = __pyx_v_typeinfo; - - /* "View.MemoryView":660 - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_check') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":657 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - 
__Pyx_AddTraceback("View.MemoryView.memoryview_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":663 - * - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *__pyx_v_o) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("memoryview_check", 0); - - /* "View.MemoryView":664 - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): - * return isinstance(o, memoryview) # <<<<<<<<<<<<<< - * - * cdef tuple _unellipsify(object index, int ndim): - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_o, __pyx_memoryview_type); - __pyx_r = __pyx_t_1; - goto __pyx_L0; - - /* "View.MemoryView":663 - * - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":666 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - -static PyObject *_unellipsify(PyObject *__pyx_v_index, int __pyx_v_ndim) { - PyObject *__pyx_v_tup = NULL; - PyObject *__pyx_v_result = NULL; - int __pyx_v_have_slices; - int __pyx_v_seen_ellipsis; - CYTHON_UNUSED PyObject *__pyx_v_idx = NULL; - PyObject *__pyx_v_item = NULL; - Py_ssize_t __pyx_v_nslices; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - Py_ssize_t __pyx_t_5; - PyObject *(*__pyx_t_6)(PyObject *); - 
PyObject *__pyx_t_7 = NULL; - Py_ssize_t __pyx_t_8; - int __pyx_t_9; - int __pyx_t_10; - PyObject *__pyx_t_11 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_unellipsify", 0); - - /* "View.MemoryView":671 - * full slices. - * """ - * if not isinstance(index, tuple): # <<<<<<<<<<<<<< - * tup = (index,) - * else: - */ - __pyx_t_1 = PyTuple_Check(__pyx_v_index); - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":672 - * """ - * if not isinstance(index, tuple): - * tup = (index,) # <<<<<<<<<<<<<< - * else: - * tup = index - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 672, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_index); - __Pyx_GIVEREF(__pyx_v_index); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_index); - __pyx_v_tup = __pyx_t_3; - __pyx_t_3 = 0; - - /* "View.MemoryView":671 - * full slices. - * """ - * if not isinstance(index, tuple): # <<<<<<<<<<<<<< - * tup = (index,) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":674 - * tup = (index,) - * else: - * tup = index # <<<<<<<<<<<<<< - * - * result = [] - */ - /*else*/ { - __Pyx_INCREF(__pyx_v_index); - __pyx_v_tup = __pyx_v_index; - } - __pyx_L3:; - - /* "View.MemoryView":676 - * tup = index - * - * result = [] # <<<<<<<<<<<<<< - * have_slices = False - * seen_ellipsis = False - */ - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 676, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_result = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":677 - * - * result = [] - * have_slices = False # <<<<<<<<<<<<<< - * seen_ellipsis = False - * for idx, item in enumerate(tup): - */ - __pyx_v_have_slices = 0; - - /* "View.MemoryView":678 - * result = [] - * have_slices = False - * seen_ellipsis = False # <<<<<<<<<<<<<< - * for idx, item in enumerate(tup): - * if item is Ellipsis: - */ - __pyx_v_seen_ellipsis = 0; - - /* 
"View.MemoryView":679 - * have_slices = False - * seen_ellipsis = False - * for idx, item in enumerate(tup): # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - __Pyx_INCREF(__pyx_int_0); - __pyx_t_3 = __pyx_int_0; - if (likely(PyList_CheckExact(__pyx_v_tup)) || PyTuple_CheckExact(__pyx_v_tup)) { - __pyx_t_4 = __pyx_v_tup; __Pyx_INCREF(__pyx_t_4); __pyx_t_5 = 0; - __pyx_t_6 = NULL; - } else { - __pyx_t_5 = -1; __pyx_t_4 = PyObject_GetIter(__pyx_v_tup); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = Py_TYPE(__pyx_t_4)->tp_iternext; if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 679, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_6)) { - if (likely(PyList_CheckExact(__pyx_t_4))) { - if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_7 = PyList_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 679, __pyx_L1_error) - #else - __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - } else { - if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_7 = PyTuple_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 679, __pyx_L1_error) - #else - __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - } - } else { - __pyx_t_7 = __pyx_t_6(__pyx_t_4); - if (unlikely(!__pyx_t_7)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 679, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_7); - } - 
__Pyx_XDECREF_SET(__pyx_v_item, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_INCREF(__pyx_t_3); - __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_3); - __pyx_t_7 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); - __pyx_t_3 = __pyx_t_7; - __pyx_t_7 = 0; - - /* "View.MemoryView":680 - * seen_ellipsis = False - * for idx, item in enumerate(tup): - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - */ - __pyx_t_2 = (__pyx_v_item == __pyx_builtin_Ellipsis); - __pyx_t_1 = (__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":681 - * for idx, item in enumerate(tup): - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True - */ - __pyx_t_1 = ((!(__pyx_v_seen_ellipsis != 0)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":682 - * if item is Ellipsis: - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<< - * seen_ellipsis = True - * else: - */ - __pyx_t_8 = PyObject_Length(__pyx_v_tup); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(1, 682, __pyx_L1_error) - __pyx_t_7 = PyList_New(1 * ((((__pyx_v_ndim - __pyx_t_8) + 1)<0) ? 
0:((__pyx_v_ndim - __pyx_t_8) + 1))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 682, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < ((__pyx_v_ndim - __pyx_t_8) + 1); __pyx_temp++) { - __Pyx_INCREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - PyList_SET_ITEM(__pyx_t_7, __pyx_temp, __pyx_slice__16); - } - } - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_7); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 682, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "View.MemoryView":683 - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True # <<<<<<<<<<<<<< - * else: - * result.append(slice(None)) - */ - __pyx_v_seen_ellipsis = 1; - - /* "View.MemoryView":681 - * for idx, item in enumerate(tup): - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True - */ - goto __pyx_L7; - } - - /* "View.MemoryView":685 - * seen_ellipsis = True - * else: - * result.append(slice(None)) # <<<<<<<<<<<<<< - * have_slices = True - * else: - */ - /*else*/ { - __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_slice__16); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 685, __pyx_L1_error) - } - __pyx_L7:; - - /* "View.MemoryView":686 - * else: - * result.append(slice(None)) - * have_slices = True # <<<<<<<<<<<<<< - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): - */ - __pyx_v_have_slices = 1; - - /* "View.MemoryView":680 - * seen_ellipsis = False - * for idx, item in enumerate(tup): - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - */ - goto __pyx_L6; - } - - /* "View.MemoryView":688 - * have_slices = True - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError("Cannot index with type '%s'" % 
type(item)) - * - */ - /*else*/ { - __pyx_t_2 = PySlice_Check(__pyx_v_item); - __pyx_t_10 = ((!(__pyx_t_2 != 0)) != 0); - if (__pyx_t_10) { - } else { - __pyx_t_1 = __pyx_t_10; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_10 = ((!(PyIndex_Check(__pyx_v_item) != 0)) != 0); - __pyx_t_1 = __pyx_t_10; - __pyx_L9_bool_binop_done:; - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":689 - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): - * raise TypeError("Cannot index with type '%s'" % type(item)) # <<<<<<<<<<<<<< - * - * have_slices = have_slices or isinstance(item, slice) - */ - __pyx_t_7 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Cannot_index_with_type_s, ((PyObject *)Py_TYPE(__pyx_v_item))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 689, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_builtin_TypeError, __pyx_t_7); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 689, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_Raise(__pyx_t_11, 0, 0, 0); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __PYX_ERR(1, 689, __pyx_L1_error) - - /* "View.MemoryView":688 - * have_slices = True - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - */ - } - - /* "View.MemoryView":691 - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - * have_slices = have_slices or isinstance(item, slice) # <<<<<<<<<<<<<< - * result.append(item) - * - */ - __pyx_t_10 = (__pyx_v_have_slices != 0); - if (!__pyx_t_10) { - } else { - __pyx_t_1 = __pyx_t_10; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_10 = PySlice_Check(__pyx_v_item); - __pyx_t_2 = (__pyx_t_10 != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L11_bool_binop_done:; - __pyx_v_have_slices = __pyx_t_1; - - /* "View.MemoryView":692 - * - * have_slices = have_slices or isinstance(item, slice) - * result.append(item) # <<<<<<<<<<<<<< - 
* - * nslices = ndim - len(result) - */ - __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_v_item); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 692, __pyx_L1_error) - } - __pyx_L6:; - - /* "View.MemoryView":679 - * have_slices = False - * seen_ellipsis = False - * for idx, item in enumerate(tup): # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":694 - * result.append(item) - * - * nslices = ndim - len(result) # <<<<<<<<<<<<<< - * if nslices: - * result.extend([slice(None)] * nslices) - */ - __pyx_t_5 = PyList_GET_SIZE(__pyx_v_result); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(1, 694, __pyx_L1_error) - __pyx_v_nslices = (__pyx_v_ndim - __pyx_t_5); - - /* "View.MemoryView":695 - * - * nslices = ndim - len(result) - * if nslices: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * nslices) - * - */ - __pyx_t_1 = (__pyx_v_nslices != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":696 - * nslices = ndim - len(result) - * if nslices: - * result.extend([slice(None)] * nslices) # <<<<<<<<<<<<<< - * - * return have_slices or nslices, tuple(result) - */ - __pyx_t_3 = PyList_New(1 * ((__pyx_v_nslices<0) ? 
0:__pyx_v_nslices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 696, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < __pyx_v_nslices; __pyx_temp++) { - __Pyx_INCREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - PyList_SET_ITEM(__pyx_t_3, __pyx_temp, __pyx_slice__16); - } - } - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_3); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 696, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":695 - * - * nslices = ndim - len(result) - * if nslices: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * nslices) - * - */ - } - - /* "View.MemoryView":698 - * result.extend([slice(None)] * nslices) - * - * return have_slices or nslices, tuple(result) # <<<<<<<<<<<<<< - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - */ - __Pyx_XDECREF(__pyx_r); - if (!__pyx_v_have_slices) { - } else { - __pyx_t_4 = __Pyx_PyBool_FromLong(__pyx_v_have_slices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L14_bool_binop_done; - } - __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_nslices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __pyx_t_4; - __pyx_t_4 = 0; - __pyx_L14_bool_binop_done:; - __pyx_t_4 = PyList_AsTuple(__pyx_v_result); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_t_4); - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_r = ((PyObject*)__pyx_t_11); - __pyx_t_11 = 0; - goto __pyx_L0; - - /* "View.MemoryView":666 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object 
index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("View.MemoryView._unellipsify", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_tup); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_idx); - __Pyx_XDECREF(__pyx_v_item); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":700 - * return have_slices or nslices, tuple(result) - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - -static PyObject *assert_direct_dimensions(Py_ssize_t *__pyx_v_suboffsets, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assert_direct_dimensions", 0); - - /* "View.MemoryView":701 - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") - */ - __pyx_t_2 = (__pyx_v_suboffsets + __pyx_v_ndim); - for (__pyx_t_3 = __pyx_v_suboffsets; __pyx_t_3 < __pyx_t_2; __pyx_t_3++) { - __pyx_t_1 = __pyx_t_3; - __pyx_v_suboffset = (__pyx_t_1[0]); - - /* "View.MemoryView":702 - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError("Indirect dimensions not supported") - * - */ - __pyx_t_4 
= ((__pyx_v_suboffset >= 0) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":703 - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__17, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 703, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_Raise(__pyx_t_5, 0, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __PYX_ERR(1, 703, __pyx_L1_error) - - /* "View.MemoryView":702 - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError("Indirect dimensions not supported") - * - */ - } - } - - /* "View.MemoryView":700 - * return have_slices or nslices, tuple(result) - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.assert_direct_dimensions", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":710 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *__pyx_v_memview, PyObject *__pyx_v_indices) { - int __pyx_v_new_ndim; - int __pyx_v_suboffset_dim; - int __pyx_v_dim; - __Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - __Pyx_memviewslice *__pyx_v_p_src; - struct __pyx_memoryviewslice_obj *__pyx_v_memviewsliceobj = 0; - 
__Pyx_memviewslice *__pyx_v_p_dst; - int *__pyx_v_p_suboffset_dim; - Py_ssize_t __pyx_v_start; - Py_ssize_t __pyx_v_stop; - Py_ssize_t __pyx_v_step; - int __pyx_v_have_start; - int __pyx_v_have_stop; - int __pyx_v_have_step; - PyObject *__pyx_v_index = NULL; - struct __pyx_memoryview_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - struct __pyx_memoryview_obj *__pyx_t_4; - char *__pyx_t_5; - int __pyx_t_6; - Py_ssize_t __pyx_t_7; - PyObject *(*__pyx_t_8)(PyObject *); - PyObject *__pyx_t_9 = NULL; - Py_ssize_t __pyx_t_10; - int __pyx_t_11; - Py_ssize_t __pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memview_slice", 0); - - /* "View.MemoryView":711 - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): - * cdef int new_ndim = 0, suboffset_dim = -1, dim # <<<<<<<<<<<<<< - * cdef bint negative_step - * cdef __Pyx_memviewslice src, dst - */ - __pyx_v_new_ndim = 0; - __pyx_v_suboffset_dim = -1; - - /* "View.MemoryView":718 - * - * - * memset(&dst, 0, sizeof(dst)) # <<<<<<<<<<<<<< - * - * cdef _memoryviewslice memviewsliceobj - */ - (void)(memset((&__pyx_v_dst), 0, (sizeof(__pyx_v_dst)))); - - /* "View.MemoryView":722 - * cdef _memoryviewslice memviewsliceobj - * - * assert memview.view.ndim > 0 # <<<<<<<<<<<<<< - * - * if isinstance(memview, _memoryviewslice): - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(!Py_OptimizeFlag)) { - if (unlikely(!((__pyx_v_memview->view.ndim > 0) != 0))) { - PyErr_SetNone(PyExc_AssertionError); - __PYX_ERR(1, 722, __pyx_L1_error) - } - } - #endif - - /* "View.MemoryView":724 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - 
__pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":725 - * - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview # <<<<<<<<<<<<<< - * p_src = &memviewsliceobj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 725, __pyx_L1_error) - __pyx_t_3 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_3); - __pyx_v_memviewsliceobj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":726 - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, &src) - */ - __pyx_v_p_src = (&__pyx_v_memviewsliceobj->from_slice); - - /* "View.MemoryView":724 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - goto __pyx_L3; - } - - /* "View.MemoryView":728 - * p_src = &memviewsliceobj.from_slice - * else: - * slice_copy(memview, &src) # <<<<<<<<<<<<<< - * p_src = &src - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_src)); - - /* "View.MemoryView":729 - * else: - * slice_copy(memview, &src) - * p_src = &src # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_p_src = (&__pyx_v_src); - } - __pyx_L3:; - - /* "View.MemoryView":735 - * - * - * dst.memview = p_src.memview # <<<<<<<<<<<<<< - * dst.data = p_src.data - * - */ - __pyx_t_4 = __pyx_v_p_src->memview; - __pyx_v_dst.memview = __pyx_t_4; - - /* "View.MemoryView":736 - * - * dst.memview = p_src.memview - * dst.data = p_src.data # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __pyx_v_p_src->data; - __pyx_v_dst.data = __pyx_t_5; - - /* "View.MemoryView":741 - * - * - * cdef __Pyx_memviewslice *p_dst = &dst # <<<<<<<<<<<<<< - * cdef int *p_suboffset_dim = &suboffset_dim - * cdef 
Py_ssize_t start, stop, step - */ - __pyx_v_p_dst = (&__pyx_v_dst); - - /* "View.MemoryView":742 - * - * cdef __Pyx_memviewslice *p_dst = &dst - * cdef int *p_suboffset_dim = &suboffset_dim # <<<<<<<<<<<<<< - * cdef Py_ssize_t start, stop, step - * cdef bint have_start, have_stop, have_step - */ - __pyx_v_p_suboffset_dim = (&__pyx_v_suboffset_dim); - - /* "View.MemoryView":746 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * slice_memviewslice( - */ - __pyx_t_6 = 0; - if (likely(PyList_CheckExact(__pyx_v_indices)) || PyTuple_CheckExact(__pyx_v_indices)) { - __pyx_t_3 = __pyx_v_indices; __Pyx_INCREF(__pyx_t_3); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - } else { - __pyx_t_7 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_v_indices); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 746, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 746, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_8)) { - if (likely(PyList_CheckExact(__pyx_t_3))) { - if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_9 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 746, __pyx_L1_error) - #else - __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 746, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - } else { - if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_9 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 746, __pyx_L1_error) - #else - __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 746, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - } - } else 
{ - __pyx_t_9 = __pyx_t_8(__pyx_t_3); - if (unlikely(!__pyx_t_9)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 746, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_9); - } - __Pyx_XDECREF_SET(__pyx_v_index, __pyx_t_9); - __pyx_t_9 = 0; - __pyx_v_dim = __pyx_t_6; - __pyx_t_6 = (__pyx_t_6 + 1); - - /* "View.MemoryView":747 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * slice_memviewslice( - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - */ - __pyx_t_2 = (PyIndex_Check(__pyx_v_index) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":751 - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - * index, 0, 0, # start, stop, step # <<<<<<<<<<<<<< - * 0, 0, 0, # have_{start,stop,step} - * False) - */ - __pyx_t_10 = __Pyx_PyIndex_AsSsize_t(__pyx_v_index); if (unlikely((__pyx_t_10 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 751, __pyx_L1_error) - - /* "View.MemoryView":748 - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_t_10, 0, 0, 0, 0, 0, 0); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 748, __pyx_L1_error) - - /* "View.MemoryView":747 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * slice_memviewslice( - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - */ - goto __pyx_L6; - } - - /* "View.MemoryView":754 - * 
0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - __pyx_t_2 = (__pyx_v_index == Py_None); - __pyx_t_1 = (__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":755 - * False) - * elif index is None: - * p_dst.shape[new_ndim] = 1 # <<<<<<<<<<<<<< - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - */ - (__pyx_v_p_dst->shape[__pyx_v_new_ndim]) = 1; - - /* "View.MemoryView":756 - * elif index is None: - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 # <<<<<<<<<<<<<< - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 - */ - (__pyx_v_p_dst->strides[__pyx_v_new_ndim]) = 0; - - /* "View.MemoryView":757 - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 # <<<<<<<<<<<<<< - * new_ndim += 1 - * else: - */ - (__pyx_v_p_dst->suboffsets[__pyx_v_new_ndim]) = -1L; - - /* "View.MemoryView":758 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 # <<<<<<<<<<<<<< - * else: - * start = index.start or 0 - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - - /* "View.MemoryView":754 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - goto __pyx_L6; - } - - /* "View.MemoryView":760 - * new_ndim += 1 - * else: - * start = index.start or 0 # <<<<<<<<<<<<<< - * stop = index.stop or 0 - * step = index.step or 0 - */ - /*else*/ { - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 760, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 760, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && 
PyErr_Occurred())) __PYX_ERR(1, 760, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L7_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L7_bool_binop_done:; - __pyx_v_start = __pyx_t_10; - - /* "View.MemoryView":761 - * else: - * start = index.start or 0 - * stop = index.stop or 0 # <<<<<<<<<<<<<< - * step = index.step or 0 - * - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 761, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 761, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 761, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L9_bool_binop_done:; - __pyx_v_stop = __pyx_t_10; - - /* "View.MemoryView":762 - * start = index.start or 0 - * stop = index.stop or 0 - * step = index.step or 0 # <<<<<<<<<<<<<< - * - * have_start = index.start is not None - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 762, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 762, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 762, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L11_bool_binop_done:; - __pyx_v_step = __pyx_t_10; - - /* "View.MemoryView":764 - * step = index.step or 0 - * - * have_start = 
index.start is not None # <<<<<<<<<<<<<< - * have_stop = index.stop is not None - * have_step = index.step is not None - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 764, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_start = __pyx_t_1; - - /* "View.MemoryView":765 - * - * have_start = index.start is not None - * have_stop = index.stop is not None # <<<<<<<<<<<<<< - * have_step = index.step is not None - * - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 765, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_stop = __pyx_t_1; - - /* "View.MemoryView":766 - * have_start = index.start is not None - * have_stop = index.stop is not None - * have_step = index.step is not None # <<<<<<<<<<<<<< - * - * slice_memviewslice( - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 766, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_step = __pyx_t_1; - - /* "View.MemoryView":768 - * have_step = index.step is not None - * - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_v_start, __pyx_v_stop, __pyx_v_step, __pyx_v_have_start, __pyx_v_have_stop, __pyx_v_have_step, 1); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 768, __pyx_L1_error) - - /* "View.MemoryView":774 - * 
have_start, have_stop, have_step, - * True) - * new_ndim += 1 # <<<<<<<<<<<<<< - * - * if isinstance(memview, _memoryviewslice): - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - } - __pyx_L6:; - - /* "View.MemoryView":746 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * slice_memviewslice( - */ - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":776 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":777 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __Pyx_XDECREF(((PyObject *)__pyx_r)); - - /* "View.MemoryView":778 - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, # <<<<<<<<<<<<<< - * memviewsliceobj.to_dtype_func, - * memview.dtype_is_object) - */ - if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 778, __pyx_L1_error) } - - /* "View.MemoryView":779 - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * else: - */ - if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 779, __pyx_L1_error) } - - /* "View.MemoryView":777 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __pyx_t_3 = 
__pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, __pyx_v_memviewsliceobj->to_object_func, __pyx_v_memviewsliceobj->to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 777, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 777, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3); - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":776 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - } - - /* "View.MemoryView":782 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - /*else*/ { - __Pyx_XDECREF(((PyObject *)__pyx_r)); - - /* "View.MemoryView":783 - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, NULL, NULL, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 782, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "View.MemoryView":782 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 782, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3); - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":710 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - - /* function exit code */ - __pyx_L1_error:; - 
__Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("View.MemoryView.memview_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_memviewsliceobj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":807 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *__pyx_v_dst, Py_ssize_t __pyx_v_shape, Py_ssize_t __pyx_v_stride, Py_ssize_t __pyx_v_suboffset, int __pyx_v_dim, int __pyx_v_new_ndim, int *__pyx_v_suboffset_dim, Py_ssize_t __pyx_v_start, Py_ssize_t __pyx_v_stop, Py_ssize_t __pyx_v_step, int __pyx_v_have_start, int __pyx_v_have_stop, int __pyx_v_have_step, int __pyx_v_is_slice) { - Py_ssize_t __pyx_v_new_shape; - int __pyx_v_negative_step; - int __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":827 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - __pyx_t_1 = ((!(__pyx_v_is_slice != 0)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":829 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - __pyx_t_1 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":830 - * - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if not 0 <= start < shape: - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":829 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - } - - /* 
"View.MemoryView":831 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - __pyx_t_1 = (0 <= __pyx_v_start); - if (__pyx_t_1) { - __pyx_t_1 = (__pyx_v_start < __pyx_v_shape); - } - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":832 - * start += shape - * if not 0 <= start < shape: - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) # <<<<<<<<<<<<<< - * else: - * - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"Index out of bounds (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 832, __pyx_L1_error) - - /* "View.MemoryView":831 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - } - - /* "View.MemoryView":827 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":835 - * else: - * - * negative_step = have_step != 0 and step < 0 # <<<<<<<<<<<<<< - * - * if have_step and step == 0: - */ - /*else*/ { - __pyx_t_1 = ((__pyx_v_have_step != 0) != 0); - if (__pyx_t_1) { - } else { - __pyx_t_2 = __pyx_t_1; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_1 = ((__pyx_v_step < 0) != 0); - __pyx_t_2 = __pyx_t_1; - __pyx_L6_bool_binop_done:; - __pyx_v_negative_step = __pyx_t_2; - - /* "View.MemoryView":837 - * negative_step = have_step != 0 and step < 0 - * - * if have_step and step == 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) - * - */ - __pyx_t_1 = (__pyx_v_have_step != 0); - if (__pyx_t_1) { - } else { - __pyx_t_2 = __pyx_t_1; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_1 = ((__pyx_v_step == 0) != 0); - __pyx_t_2 = __pyx_t_1; - __pyx_L9_bool_binop_done:; - if (__pyx_t_2) { - - /* "View.MemoryView":838 - * - * if have_step 
and step == 0: - * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Step may not be zero (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 838, __pyx_L1_error) - - /* "View.MemoryView":837 - * negative_step = have_step != 0 and step < 0 - * - * if have_step and step == 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) - * - */ - } - - /* "View.MemoryView":841 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - __pyx_t_2 = (__pyx_v_have_start != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":842 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - __pyx_t_2 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":843 - * if have_start: - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if start < 0: - * start = 0 - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":844 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - __pyx_t_2 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":845 - * start += shape - * if start < 0: - * start = 0 # <<<<<<<<<<<<<< - * elif start >= shape: - * if negative_step: - */ - __pyx_v_start = 0; - - /* "View.MemoryView":844 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - } - - /* "View.MemoryView":842 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - goto __pyx_L12; - } - - /* "View.MemoryView":846 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - __pyx_t_2 = ((__pyx_v_start >= __pyx_v_shape) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":847 - 
* start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":848 - * elif start >= shape: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = shape - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":847 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - goto __pyx_L14; - } - - /* "View.MemoryView":850 - * start = shape - 1 - * else: - * start = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - /*else*/ { - __pyx_v_start = __pyx_v_shape; - } - __pyx_L14:; - - /* "View.MemoryView":846 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - } - __pyx_L12:; - - /* "View.MemoryView":841 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - goto __pyx_L11; - } - - /* "View.MemoryView":852 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - /*else*/ { - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":853 - * else: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = 0 - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":852 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - goto __pyx_L15; - } - - /* "View.MemoryView":855 - * start = shape - 1 - * else: - * start = 0 # <<<<<<<<<<<<<< - * - * if have_stop: - */ - /*else*/ { - __pyx_v_start = 0; - } - __pyx_L15:; - } - __pyx_L11:; - - /* "View.MemoryView":857 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - __pyx_t_2 = (__pyx_v_have_stop != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":858 - * - * if have_stop: - * if stop < 
0: # <<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - __pyx_t_2 = ((__pyx_v_stop < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":859 - * if have_stop: - * if stop < 0: - * stop += shape # <<<<<<<<<<<<<< - * if stop < 0: - * stop = 0 - */ - __pyx_v_stop = (__pyx_v_stop + __pyx_v_shape); - - /* "View.MemoryView":860 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - __pyx_t_2 = ((__pyx_v_stop < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":861 - * stop += shape - * if stop < 0: - * stop = 0 # <<<<<<<<<<<<<< - * elif stop > shape: - * stop = shape - */ - __pyx_v_stop = 0; - - /* "View.MemoryView":860 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - } - - /* "View.MemoryView":858 - * - * if have_stop: - * if stop < 0: # <<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - goto __pyx_L17; - } - - /* "View.MemoryView":862 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - __pyx_t_2 = ((__pyx_v_stop > __pyx_v_shape) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":863 - * stop = 0 - * elif stop > shape: - * stop = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - __pyx_v_stop = __pyx_v_shape; - - /* "View.MemoryView":862 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - } - __pyx_L17:; - - /* "View.MemoryView":857 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - goto __pyx_L16; - } - - /* "View.MemoryView":865 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - /*else*/ { - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":866 - * else: - * if negative_step: - * stop = -1 # <<<<<<<<<<<<<< - * else: - * stop = shape - */ - __pyx_v_stop = -1L; - - /* "View.MemoryView":865 - * stop 
= shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - goto __pyx_L19; - } - - /* "View.MemoryView":868 - * stop = -1 - * else: - * stop = shape # <<<<<<<<<<<<<< - * - * if not have_step: - */ - /*else*/ { - __pyx_v_stop = __pyx_v_shape; - } - __pyx_L19:; - } - __pyx_L16:; - - /* "View.MemoryView":870 - * stop = shape - * - * if not have_step: # <<<<<<<<<<<<<< - * step = 1 - * - */ - __pyx_t_2 = ((!(__pyx_v_have_step != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":871 - * - * if not have_step: - * step = 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_step = 1; - - /* "View.MemoryView":870 - * stop = shape - * - * if not have_step: # <<<<<<<<<<<<<< - * step = 1 - * - */ - } - - /* "View.MemoryView":875 - * - * with cython.cdivision(True): - * new_shape = (stop - start) // step # <<<<<<<<<<<<<< - * - * if (stop - start) - step * new_shape: - */ - __pyx_v_new_shape = ((__pyx_v_stop - __pyx_v_start) / __pyx_v_step); - - /* "View.MemoryView":877 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * new_shape += 1 - * - */ - __pyx_t_2 = (((__pyx_v_stop - __pyx_v_start) - (__pyx_v_step * __pyx_v_new_shape)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":878 - * - * if (stop - start) - step * new_shape: - * new_shape += 1 # <<<<<<<<<<<<<< - * - * if new_shape < 0: - */ - __pyx_v_new_shape = (__pyx_v_new_shape + 1); - - /* "View.MemoryView":877 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * new_shape += 1 - * - */ - } - - /* "View.MemoryView":880 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - __pyx_t_2 = ((__pyx_v_new_shape < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":881 - * - * if new_shape < 0: - * new_shape = 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_new_shape = 0; - - /* "View.MemoryView":880 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * 
new_shape = 0 - * - */ - } - - /* "View.MemoryView":884 - * - * - * dst.strides[new_ndim] = stride * step # <<<<<<<<<<<<<< - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset - */ - (__pyx_v_dst->strides[__pyx_v_new_ndim]) = (__pyx_v_stride * __pyx_v_step); - - /* "View.MemoryView":885 - * - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape # <<<<<<<<<<<<<< - * dst.suboffsets[new_ndim] = suboffset - * - */ - (__pyx_v_dst->shape[__pyx_v_new_ndim]) = __pyx_v_new_shape; - - /* "View.MemoryView":886 - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_dst->suboffsets[__pyx_v_new_ndim]) = __pyx_v_suboffset; - } - __pyx_L3:; - - /* "View.MemoryView":889 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - __pyx_t_2 = (((__pyx_v_suboffset_dim[0]) < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":890 - * - * if suboffset_dim[0] < 0: - * dst.data += start * stride # <<<<<<<<<<<<<< - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride - */ - __pyx_v_dst->data = (__pyx_v_dst->data + (__pyx_v_start * __pyx_v_stride)); - - /* "View.MemoryView":889 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - goto __pyx_L23; - } - - /* "View.MemoryView":892 - * dst.data += start * stride - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride # <<<<<<<<<<<<<< - * - * if suboffset >= 0: - */ - /*else*/ { - __pyx_t_3 = (__pyx_v_suboffset_dim[0]); - (__pyx_v_dst->suboffsets[__pyx_t_3]) = ((__pyx_v_dst->suboffsets[__pyx_t_3]) + (__pyx_v_start * __pyx_v_stride)); - } - __pyx_L23:; - - /* "View.MemoryView":894 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_2) 
{ - - /* "View.MemoryView":895 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset - */ - __pyx_t_2 = ((!(__pyx_v_is_slice != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":896 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = ( dst.data)[0] + suboffset - * else: - */ - __pyx_t_2 = ((__pyx_v_new_ndim == 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":897 - * if not is_slice: - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset # <<<<<<<<<<<<<< - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " - */ - __pyx_v_dst->data = ((((char **)__pyx_v_dst->data)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":896 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = ( dst.data)[0] + suboffset - * else: - */ - goto __pyx_L26; - } - - /* "View.MemoryView":899 - * dst.data = ( dst.data)[0] + suboffset - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " # <<<<<<<<<<<<<< - * "must be indexed and not sliced", dim) - * else: - */ - /*else*/ { - - /* "View.MemoryView":900 - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " - * "must be indexed and not sliced", dim) # <<<<<<<<<<<<<< - * else: - * suboffset_dim[0] = new_ndim - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"All dimensions preceding dimension %d must be indexed and not sliced"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 899, __pyx_L1_error) - } - __pyx_L26:; - - /* "View.MemoryView":895 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset - */ - goto __pyx_L25; - } - - /* "View.MemoryView":902 - * "must be indexed and not sliced", dim) - * else: - * suboffset_dim[0] = new_ndim # <<<<<<<<<<<<<< - * - * return 0 - */ - /*else*/ { - 
(__pyx_v_suboffset_dim[0]) = __pyx_v_new_ndim; - } - __pyx_L25:; - - /* "View.MemoryView":894 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - } - - /* "View.MemoryView":904 - * suboffset_dim[0] = new_ndim - * - * return 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":807 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.slice_memviewslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = -1; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":910 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - -static char *__pyx_pybuffer_index(Py_buffer *__pyx_v_view, char *__pyx_v_bufp, Py_ssize_t __pyx_v_index, Py_ssize_t __pyx_v_dim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_suboffset; - Py_ssize_t __pyx_v_itemsize; - char *__pyx_v_resultp; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("pybuffer_index", 0); - - /* "View.MemoryView":912 - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset 
= -1 # <<<<<<<<<<<<<< - * cdef Py_ssize_t itemsize = view.itemsize - * cdef char *resultp - */ - __pyx_v_suboffset = -1L; - - /* "View.MemoryView":913 - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - * cdef Py_ssize_t itemsize = view.itemsize # <<<<<<<<<<<<<< - * cdef char *resultp - * - */ - __pyx_t_1 = __pyx_v_view->itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":916 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len / itemsize - * stride = itemsize - */ - __pyx_t_2 = ((__pyx_v_view->ndim == 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":917 - * - * if view.ndim == 0: - * shape = view.len / itemsize # <<<<<<<<<<<<<< - * stride = itemsize - * else: - */ - if (unlikely(__pyx_v_itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 917, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_view->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 917, __pyx_L1_error) - } - __pyx_v_shape = __Pyx_div_Py_ssize_t(__pyx_v_view->len, __pyx_v_itemsize); - - /* "View.MemoryView":918 - * if view.ndim == 0: - * shape = view.len / itemsize - * stride = itemsize # <<<<<<<<<<<<<< - * else: - * shape = view.shape[dim] - */ - __pyx_v_stride = __pyx_v_itemsize; - - /* "View.MemoryView":916 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len / itemsize - * stride = itemsize - */ - goto __pyx_L3; - } - - /* "View.MemoryView":920 - * stride = itemsize - * else: - * shape = view.shape[dim] # <<<<<<<<<<<<<< - * stride = view.strides[dim] - * if view.suboffsets != NULL: - */ - /*else*/ { - __pyx_v_shape = (__pyx_v_view->shape[__pyx_v_dim]); - - /* "View.MemoryView":921 - * else: - * shape = view.shape[dim] - * 
stride = view.strides[dim] # <<<<<<<<<<<<<< - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] - */ - __pyx_v_stride = (__pyx_v_view->strides[__pyx_v_dim]); - - /* "View.MemoryView":922 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - __pyx_t_2 = ((__pyx_v_view->suboffsets != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":923 - * stride = view.strides[dim] - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] # <<<<<<<<<<<<<< - * - * if index < 0: - */ - __pyx_v_suboffset = (__pyx_v_view->suboffsets[__pyx_v_dim]); - - /* "View.MemoryView":922 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - } - } - __pyx_L3:; - - /* "View.MemoryView":925 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - __pyx_t_2 = ((__pyx_v_index < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":926 - * - * if index < 0: - * index += view.shape[dim] # <<<<<<<<<<<<<< - * if index < 0: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - */ - __pyx_v_index = (__pyx_v_index + (__pyx_v_view->shape[__pyx_v_dim])); - - /* "View.MemoryView":927 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - __pyx_t_2 = ((__pyx_v_index < 0) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":928 - * index += view.shape[dim] - * if index < 0: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<< - * - * if index >= shape: - */ - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 928, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = 
__Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 928, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 928, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 928, __pyx_L1_error) - - /* "View.MemoryView":927 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - } - - /* "View.MemoryView":925 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - } - - /* "View.MemoryView":930 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - __pyx_t_2 = ((__pyx_v_index >= __pyx_v_shape) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":931 - * - * if index >= shape: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<< - * - * resultp = bufp + index * stride - */ - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 931, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 931, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 931, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - 
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 931, __pyx_L1_error) - - /* "View.MemoryView":930 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - } - - /* "View.MemoryView":933 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * resultp = bufp + index * stride # <<<<<<<<<<<<<< - * if suboffset >= 0: - * resultp = ( resultp)[0] + suboffset - */ - __pyx_v_resultp = (__pyx_v_bufp + (__pyx_v_index * __pyx_v_stride)); - - /* "View.MemoryView":934 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = ( resultp)[0] + suboffset - * - */ - __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":935 - * resultp = bufp + index * stride - * if suboffset >= 0: - * resultp = ( resultp)[0] + suboffset # <<<<<<<<<<<<<< - * - * return resultp - */ - __pyx_v_resultp = ((((char **)__pyx_v_resultp)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":934 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = ( resultp)[0] + suboffset - * - */ - } - - /* "View.MemoryView":937 - * resultp = ( resultp)[0] + suboffset - * - * return resultp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_resultp; - goto __pyx_L0; - - /* "View.MemoryView":910 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.pybuffer_index", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":943 - * - * 
@cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - -static int __pyx_memslice_transpose(__Pyx_memviewslice *__pyx_v_memslice) { - int __pyx_v_ndim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - int __pyx_v_i; - int __pyx_v_j; - int __pyx_r; - int __pyx_t_1; - Py_ssize_t *__pyx_t_2; - long __pyx_t_3; - long __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_ssize_t __pyx_t_6; - int __pyx_t_7; - int __pyx_t_8; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":944 - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: - * cdef int ndim = memslice.memview.view.ndim # <<<<<<<<<<<<<< - * - * cdef Py_ssize_t *shape = memslice.shape - */ - __pyx_t_1 = __pyx_v_memslice->memview->view.ndim; - __pyx_v_ndim = __pyx_t_1; - - /* "View.MemoryView":946 - * cdef int ndim = memslice.memview.view.ndim - * - * cdef Py_ssize_t *shape = memslice.shape # <<<<<<<<<<<<<< - * cdef Py_ssize_t *strides = memslice.strides - * - */ - __pyx_t_2 = __pyx_v_memslice->shape; - __pyx_v_shape = __pyx_t_2; - - /* "View.MemoryView":947 - * - * cdef Py_ssize_t *shape = memslice.shape - * cdef Py_ssize_t *strides = memslice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = __pyx_v_memslice->strides; - __pyx_v_strides = __pyx_t_2; - - /* "View.MemoryView":951 - * - * cdef int i, j - * for i in range(ndim / 2): # <<<<<<<<<<<<<< - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - */ - __pyx_t_3 = __Pyx_div_long(__pyx_v_ndim, 2); - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_4; __pyx_t_1+=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":952 - * cdef int i, j - * for i in range(ndim / 2): - * j = ndim - 1 - i # <<<<<<<<<<<<<< - * strides[i], strides[j] = strides[j], strides[i] - * 
shape[i], shape[j] = shape[j], shape[i] - */ - __pyx_v_j = ((__pyx_v_ndim - 1) - __pyx_v_i); - - /* "View.MemoryView":953 - * for i in range(ndim / 2): - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] # <<<<<<<<<<<<<< - * shape[i], shape[j] = shape[j], shape[i] - * - */ - __pyx_t_5 = (__pyx_v_strides[__pyx_v_j]); - __pyx_t_6 = (__pyx_v_strides[__pyx_v_i]); - (__pyx_v_strides[__pyx_v_i]) = __pyx_t_5; - (__pyx_v_strides[__pyx_v_j]) = __pyx_t_6; - - /* "View.MemoryView":954 - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] # <<<<<<<<<<<<<< - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - */ - __pyx_t_6 = (__pyx_v_shape[__pyx_v_j]); - __pyx_t_5 = (__pyx_v_shape[__pyx_v_i]); - (__pyx_v_shape[__pyx_v_i]) = __pyx_t_6; - (__pyx_v_shape[__pyx_v_j]) = __pyx_t_5; - - /* "View.MemoryView":956 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - __pyx_t_8 = (((__pyx_v_memslice->suboffsets[__pyx_v_i]) >= 0) != 0); - if (!__pyx_t_8) { - } else { - __pyx_t_7 = __pyx_t_8; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_8 = (((__pyx_v_memslice->suboffsets[__pyx_v_j]) >= 0) != 0); - __pyx_t_7 = __pyx_t_8; - __pyx_L6_bool_binop_done:; - if (__pyx_t_7) { - - /* "View.MemoryView":957 - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") # <<<<<<<<<<<<<< - * - * return 1 - */ - __pyx_t_9 = __pyx_memoryview_err(__pyx_builtin_ValueError, ((char *)"Cannot transpose memoryview with indirect dimensions")); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 957, __pyx_L1_error) - - /* "View.MemoryView":956 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # 
<<<<<<<<<<<<<< - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - } - } - - /* "View.MemoryView":959 - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - * return 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 1; - goto __pyx_L0; - - /* "View.MemoryView":943 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.transpose_memslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = 0; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":976 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - */ - -/* Python wrapper */ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":977 - * - * def __dealloc__(self): - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - 
__PYX_XDEC_MEMVIEW((&__pyx_v_self->from_slice), 1); - - /* "View.MemoryView":976 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":979 - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) - */ - -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":980 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - __pyx_t_1 = ((__pyx_v_self->to_object_func != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":981 - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) # <<<<<<<<<<<<<< - * else: - * return memoryview.convert_item_to_object(self, itemp) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_v_self->to_object_func(__pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 981, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":980 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - } - - /* "View.MemoryView":983 - * return self.to_object_func(itemp) - * else: - * return memoryview.convert_item_to_object(self, itemp) # 
<<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_convert_item_to_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 983, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":979 - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":985 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":986 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - __pyx_t_1 = ((__pyx_v_self->to_dtype_func != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":987 - * cdef assign_item_from_object(self, char *itemp, object value): - * if 
self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) # <<<<<<<<<<<<<< - * else: - * memoryview.assign_item_from_object(self, itemp, value) - */ - __pyx_t_2 = __pyx_v_self->to_dtype_func(__pyx_v_itemp, __pyx_v_value); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 987, __pyx_L1_error) - - /* "View.MemoryView":986 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":989 - * self.to_dtype_func(itemp, value) - * else: - * memoryview.assign_item_from_object(self, itemp, value) # <<<<<<<<<<<<<< - * - * @property - */ - /*else*/ { - __pyx_t_3 = __pyx_memoryview_assign_item_from_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 989, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":985 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":992 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self) { - PyObject 
*__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":993 - * @property - * def base(self): - * return self.from_object # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->from_object); - __pyx_r = __pyx_v_self->from_object; - goto __pyx_L0; - - /* "View.MemoryView":992 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryviewslice___reduce_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - 
__Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryviewslice_2__setstate_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__19, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":999 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int 
ndim, - * object (*to_object_func)(char *), - */ - -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice __pyx_v_memviewslice, int __pyx_v_ndim, PyObject *(*__pyx_v_to_object_func)(char *), int (*__pyx_v_to_dtype_func)(char *, PyObject *), int __pyx_v_dtype_is_object) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - __Pyx_TypeInfo *__pyx_t_4; - Py_buffer __pyx_t_5; - Py_ssize_t *__pyx_t_6; - Py_ssize_t *__pyx_t_7; - Py_ssize_t *__pyx_t_8; - Py_ssize_t __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_fromslice", 0); - - /* "View.MemoryView":1007 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_1 = ((((PyObject *)__pyx_v_memviewslice.memview) == Py_None) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1008 - * - * if memviewslice.memview == Py_None: - * return None # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - - /* "View.MemoryView":1007 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - } - - /* "View.MemoryView":1013 - * - * - * result = _memoryviewslice(None, 0, dtype_is_object) # <<<<<<<<<<<<<< - * - * result.from_slice = memviewslice - */ - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_3, 0, Py_None); - __Pyx_INCREF(__pyx_int_0); - 
__Pyx_GIVEREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryviewslice_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1015 - * result = _memoryviewslice(None, 0, dtype_is_object) - * - * result.from_slice = memviewslice # <<<<<<<<<<<<<< - * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - */ - __pyx_v_result->from_slice = __pyx_v_memviewslice; - - /* "View.MemoryView":1016 - * - * result.from_slice = memviewslice - * __PYX_INC_MEMVIEW(&memviewslice, 1) # <<<<<<<<<<<<<< - * - * result.from_object = ( memviewslice.memview).base - */ - __PYX_INC_MEMVIEW((&__pyx_v_memviewslice), 1); - - /* "View.MemoryView":1018 - * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - * result.from_object = ( memviewslice.memview).base # <<<<<<<<<<<<<< - * result.typeinfo = memviewslice.memview.typeinfo - * - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_memviewslice.memview), __pyx_n_s_base); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1018, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __Pyx_GOTREF(__pyx_v_result->from_object); - __Pyx_DECREF(__pyx_v_result->from_object); - __pyx_v_result->from_object = __pyx_t_2; - __pyx_t_2 = 0; - - /* "View.MemoryView":1019 - * - * result.from_object = ( memviewslice.memview).base - * result.typeinfo = memviewslice.memview.typeinfo # <<<<<<<<<<<<<< - * - * result.view = memviewslice.memview.view - */ - __pyx_t_4 = __pyx_v_memviewslice.memview->typeinfo; - __pyx_v_result->__pyx_base.typeinfo = __pyx_t_4; - - /* "View.MemoryView":1021 - * result.typeinfo = memviewslice.memview.typeinfo - * - * result.view = memviewslice.memview.view # <<<<<<<<<<<<<< - * 
result.view.buf = memviewslice.data - * result.view.ndim = ndim - */ - __pyx_t_5 = __pyx_v_memviewslice.memview->view; - __pyx_v_result->__pyx_base.view = __pyx_t_5; - - /* "View.MemoryView":1022 - * - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data # <<<<<<<<<<<<<< - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - */ - __pyx_v_result->__pyx_base.view.buf = ((void *)__pyx_v_memviewslice.data); - - /* "View.MemoryView":1023 - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data - * result.view.ndim = ndim # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_v_result->__pyx_base.view.ndim = __pyx_v_ndim; - - /* "View.MemoryView":1024 - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_result->__pyx_base.view))->obj = Py_None; - - /* "View.MemoryView":1025 - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":1027 - * Py_INCREF(Py_None) - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - __pyx_t_1 = ((((struct __pyx_memoryview_obj *)__pyx_v_memviewslice.memview)->flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1028 - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: - * result.flags = PyBUF_RECORDS # <<<<<<<<<<<<<< - * else: - * result.flags = PyBUF_RECORDS_RO - */ - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS; - - /* "View.MemoryView":1027 - * Py_INCREF(Py_None) - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - goto __pyx_L4; - } - - 
/* "View.MemoryView":1030 - * result.flags = PyBUF_RECORDS - * else: - * result.flags = PyBUF_RECORDS_RO # <<<<<<<<<<<<<< - * - * result.view.shape = result.from_slice.shape - */ - /*else*/ { - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS_RO; - } - __pyx_L4:; - - /* "View.MemoryView":1032 - * result.flags = PyBUF_RECORDS_RO - * - * result.view.shape = result.from_slice.shape # <<<<<<<<<<<<<< - * result.view.strides = result.from_slice.strides - * - */ - __pyx_v_result->__pyx_base.view.shape = ((Py_ssize_t *)__pyx_v_result->from_slice.shape); - - /* "View.MemoryView":1033 - * - * result.view.shape = result.from_slice.shape - * result.view.strides = result.from_slice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_result->__pyx_base.view.strides = ((Py_ssize_t *)__pyx_v_result->from_slice.strides); - - /* "View.MemoryView":1036 - * - * - * result.view.suboffsets = NULL # <<<<<<<<<<<<<< - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - */ - __pyx_v_result->__pyx_base.view.suboffsets = NULL; - - /* "View.MemoryView":1037 - * - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - */ - __pyx_t_7 = (__pyx_v_result->from_slice.suboffsets + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->from_slice.suboffsets; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_v_suboffset = (__pyx_t_6[0]); - - /* "View.MemoryView":1038 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - __pyx_t_1 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1039 - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets # <<<<<<<<<<<<<< - * break - * 
- */ - __pyx_v_result->__pyx_base.view.suboffsets = ((Py_ssize_t *)__pyx_v_result->from_slice.suboffsets); - - /* "View.MemoryView":1040 - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - * break # <<<<<<<<<<<<<< - * - * result.view.len = result.view.itemsize - */ - goto __pyx_L6_break; - - /* "View.MemoryView":1038 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - } - } - __pyx_L6_break:; - - /* "View.MemoryView":1042 - * break - * - * result.view.len = result.view.itemsize # <<<<<<<<<<<<<< - * for length in result.view.shape[:ndim]: - * result.view.len *= length - */ - __pyx_t_9 = __pyx_v_result->__pyx_base.view.itemsize; - __pyx_v_result->__pyx_base.view.len = __pyx_t_9; - - /* "View.MemoryView":1043 - * - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: # <<<<<<<<<<<<<< - * result.view.len *= length - * - */ - __pyx_t_7 = (__pyx_v_result->__pyx_base.view.shape + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->__pyx_base.view.shape; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_t_2 = PyInt_FromSsize_t((__pyx_t_6[0])); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1044 - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: - * result.view.len *= length # <<<<<<<<<<<<<< - * - * result.to_object_func = to_object_func - */ - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_result->__pyx_base.view.len); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_InPlaceMultiply(__pyx_t_2, __pyx_v_length); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); 
__pyx_t_2 = 0; - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_3); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result->__pyx_base.view.len = __pyx_t_9; - } - - /* "View.MemoryView":1046 - * result.view.len *= length - * - * result.to_object_func = to_object_func # <<<<<<<<<<<<<< - * result.to_dtype_func = to_dtype_func - * - */ - __pyx_v_result->to_object_func = __pyx_v_to_object_func; - - /* "View.MemoryView":1047 - * - * result.to_object_func = to_object_func - * result.to_dtype_func = to_dtype_func # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->to_dtype_func = __pyx_v_to_dtype_func; - - /* "View.MemoryView":1049 - * result.to_dtype_func = to_dtype_func - * - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":999 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_fromslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1052 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj 
*__pyx_v_memview, __Pyx_memviewslice *__pyx_v_mslice) { - struct __pyx_memoryviewslice_obj *__pyx_v_obj = 0; - __Pyx_memviewslice *__pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_slice_from_memview", 0); - - /* "View.MemoryView":1055 - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1056 - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): - * obj = memview # <<<<<<<<<<<<<< - * return &obj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 1056, __pyx_L1_error) - __pyx_t_3 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_3); - __pyx_v_obj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":1057 - * if isinstance(memview, _memoryviewslice): - * obj = memview - * return &obj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, mslice) - */ - __pyx_r = (&__pyx_v_obj->from_slice); - goto __pyx_L0; - - /* "View.MemoryView":1055 - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - } - - /* "View.MemoryView":1059 - * return &obj.from_slice - * else: - * slice_copy(memview, mslice) # <<<<<<<<<<<<<< - * return mslice - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, __pyx_v_mslice); - - /* "View.MemoryView":1060 - * else: - * 
slice_copy(memview, mslice) - * return mslice # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_slice_copy') - */ - __pyx_r = __pyx_v_mslice; - goto __pyx_L0; - } - - /* "View.MemoryView":1052 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.get_slice_from_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_obj); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1063 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_dst) { - int __pyx_v_dim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - Py_ssize_t *__pyx_v_suboffsets; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - Py_ssize_t __pyx_t_5; - __Pyx_RefNannySetupContext("slice_copy", 0); - - /* "View.MemoryView":1067 - * cdef (Py_ssize_t*) shape, strides, suboffsets - * - * shape = memview.view.shape # <<<<<<<<<<<<<< - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets - */ - __pyx_t_1 = __pyx_v_memview->view.shape; - __pyx_v_shape = __pyx_t_1; - - /* "View.MemoryView":1068 - * - * shape = memview.view.shape - * strides = memview.view.strides # <<<<<<<<<<<<<< - * suboffsets = memview.view.suboffsets - * - */ - __pyx_t_1 = __pyx_v_memview->view.strides; - __pyx_v_strides = __pyx_t_1; - - /* "View.MemoryView":1069 - * shape = memview.view.shape - * strides = 
memview.view.strides - * suboffsets = memview.view.suboffsets # <<<<<<<<<<<<<< - * - * dst.memview = <__pyx_memoryview *> memview - */ - __pyx_t_1 = __pyx_v_memview->view.suboffsets; - __pyx_v_suboffsets = __pyx_t_1; - - /* "View.MemoryView":1071 - * suboffsets = memview.view.suboffsets - * - * dst.memview = <__pyx_memoryview *> memview # <<<<<<<<<<<<<< - * dst.data = memview.view.buf - * - */ - __pyx_v_dst->memview = ((struct __pyx_memoryview_obj *)__pyx_v_memview); - - /* "View.MemoryView":1072 - * - * dst.memview = <__pyx_memoryview *> memview - * dst.data = memview.view.buf # <<<<<<<<<<<<<< - * - * for dim in range(memview.view.ndim): - */ - __pyx_v_dst->data = ((char *)__pyx_v_memview->view.buf); - - /* "View.MemoryView":1074 - * dst.data = memview.view.buf - * - * for dim in range(memview.view.ndim): # <<<<<<<<<<<<<< - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - */ - __pyx_t_2 = __pyx_v_memview->view.ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_dim = __pyx_t_4; - - /* "View.MemoryView":1075 - * - * for dim in range(memview.view.ndim): - * dst.shape[dim] = shape[dim] # <<<<<<<<<<<<<< - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - */ - (__pyx_v_dst->shape[__pyx_v_dim]) = (__pyx_v_shape[__pyx_v_dim]); - - /* "View.MemoryView":1076 - * for dim in range(memview.view.ndim): - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] # <<<<<<<<<<<<<< - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - * - */ - (__pyx_v_dst->strides[__pyx_v_dim]) = (__pyx_v_strides[__pyx_v_dim]); - - /* "View.MemoryView":1077 - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object') - */ - if ((__pyx_v_suboffsets != 0)) { - __pyx_t_5 = (__pyx_v_suboffsets[__pyx_v_dim]); - } else { - __pyx_t_5 
= -1L; - } - (__pyx_v_dst->suboffsets[__pyx_v_dim]) = __pyx_t_5; - } - - /* "View.MemoryView":1063 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1080 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *__pyx_v_memview) { - __Pyx_memviewslice __pyx_v_memviewslice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy", 0); - - /* "View.MemoryView":1083 - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) # <<<<<<<<<<<<<< - * return memoryview_copy_from_slice(memview, &memviewslice) - * - */ - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_memviewslice)); - - /* "View.MemoryView":1084 - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) - * return memoryview_copy_from_slice(memview, &memviewslice) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object_from_slice') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_memoryview_copy_object_from_slice(__pyx_v_memview, (&__pyx_v_memviewslice)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1084, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1080 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice 
memviewslice - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1087 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview object and slice. - */ - -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_memviewslice) { - PyObject *(*__pyx_v_to_object_func)(char *); - int (*__pyx_v_to_dtype_func)(char *, PyObject *); - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *(*__pyx_t_3)(char *); - int (*__pyx_t_4)(char *, PyObject *); - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy_from_slice", 0); - - /* "View.MemoryView":1094 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1095 - * - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func # <<<<<<<<<<<<<< - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - */ - __pyx_t_3 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_object_func; - __pyx_v_to_object_func = __pyx_t_3; - - /* 
"View.MemoryView":1096 - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func # <<<<<<<<<<<<<< - * else: - * to_object_func = NULL - */ - __pyx_t_4 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_dtype_func; - __pyx_v_to_dtype_func = __pyx_t_4; - - /* "View.MemoryView":1094 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1098 - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - * to_object_func = NULL # <<<<<<<<<<<<<< - * to_dtype_func = NULL - * - */ - /*else*/ { - __pyx_v_to_object_func = NULL; - - /* "View.MemoryView":1099 - * else: - * to_object_func = NULL - * to_dtype_func = NULL # <<<<<<<<<<<<<< - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - */ - __pyx_v_to_dtype_func = NULL; - } - __pyx_L3:; - - /* "View.MemoryView":1101 - * to_dtype_func = NULL - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, # <<<<<<<<<<<<<< - * to_object_func, to_dtype_func, - * memview.dtype_is_object) - */ - __Pyx_XDECREF(__pyx_r); - - /* "View.MemoryView":1103 - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - * to_object_func, to_dtype_func, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __pyx_memoryview_fromslice((__pyx_v_memviewslice[0]), __pyx_v_memview->view.ndim, __pyx_v_to_object_func, __pyx_v_to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 1101, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1087 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef 
memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview object and slice. - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_from_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1109 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<< - * if arg < 0: - * return -arg - */ - -static Py_ssize_t abs_py_ssize_t(Py_ssize_t __pyx_v_arg) { - Py_ssize_t __pyx_r; - int __pyx_t_1; - - /* "View.MemoryView":1110 - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: # <<<<<<<<<<<<<< - * return -arg - * else: - */ - __pyx_t_1 = ((__pyx_v_arg < 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1111 - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: - * return -arg # <<<<<<<<<<<<<< - * else: - * return arg - */ - __pyx_r = (-__pyx_v_arg); - goto __pyx_L0; - - /* "View.MemoryView":1110 - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: # <<<<<<<<<<<<<< - * return -arg - * else: - */ - } - - /* "View.MemoryView":1113 - * return -arg - * else: - * return arg # <<<<<<<<<<<<<< - * - * @cname('__pyx_get_best_slice_order') - */ - /*else*/ { - __pyx_r = __pyx_v_arg; - goto __pyx_L0; - } - - /* "View.MemoryView":1109 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<< - * if arg < 0: - * return -arg - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1116 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. 
- */ - -static char __pyx_get_best_slice_order(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim) { - int __pyx_v_i; - Py_ssize_t __pyx_v_c_stride; - Py_ssize_t __pyx_v_f_stride; - char __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1121 - * """ - * cdef int i - * cdef Py_ssize_t c_stride = 0 # <<<<<<<<<<<<<< - * cdef Py_ssize_t f_stride = 0 - * - */ - __pyx_v_c_stride = 0; - - /* "View.MemoryView":1122 - * cdef int i - * cdef Py_ssize_t c_stride = 0 - * cdef Py_ssize_t f_stride = 0 # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_f_stride = 0; - - /* "View.MemoryView":1124 - * cdef Py_ssize_t f_stride = 0 - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1125 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1126 - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_c_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1127 - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - goto __pyx_L4_break; - - /* "View.MemoryView":1125 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L4_break:; - - /* "View.MemoryView":1129 - * break - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - */ - __pyx_t_1 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_1; - for (__pyx_t_4 = 0; __pyx_t_4 < 
__pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1130 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride = mslice.strides[i] - * break - */ - __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1131 - * for i in range(ndim): - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_f_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1132 - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - */ - goto __pyx_L7_break; - - /* "View.MemoryView":1130 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L7_break:; - - /* "View.MemoryView":1134 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - __pyx_t_2 = ((abs_py_ssize_t(__pyx_v_c_stride) <= abs_py_ssize_t(__pyx_v_f_stride)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1135 - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - * return 'C' # <<<<<<<<<<<<<< - * else: - * return 'F' - */ - __pyx_r = 'C'; - goto __pyx_L0; - - /* "View.MemoryView":1134 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - } - - /* "View.MemoryView":1137 - * return 'C' - * else: - * return 'F' # <<<<<<<<<<<<<< - * - * @cython.cdivision(True) - */ - /*else*/ { - __pyx_r = 'F'; - goto __pyx_L0; - } - - /* "View.MemoryView":1116 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. 
- */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1140 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - -static void _copy_strided_to_strided(char *__pyx_v_src_data, Py_ssize_t *__pyx_v_src_strides, char *__pyx_v_dst_data, Py_ssize_t *__pyx_v_dst_strides, Py_ssize_t *__pyx_v_src_shape, Py_ssize_t *__pyx_v_dst_shape, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - CYTHON_UNUSED Py_ssize_t __pyx_v_src_extent; - Py_ssize_t __pyx_v_dst_extent; - Py_ssize_t __pyx_v_src_stride; - Py_ssize_t __pyx_v_dst_stride; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - Py_ssize_t __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_ssize_t __pyx_t_6; - - /* "View.MemoryView":1147 - * - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - */ - __pyx_v_src_extent = (__pyx_v_src_shape[0]); - - /* "View.MemoryView":1148 - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] - */ - __pyx_v_dst_extent = (__pyx_v_dst_shape[0]); - - /* "View.MemoryView":1149 - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - */ - __pyx_v_src_stride = (__pyx_v_src_strides[0]); - - /* "View.MemoryView":1150 - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_dst_stride = 
(__pyx_v_dst_strides[0]); - - /* "View.MemoryView":1152 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ - __pyx_t_1 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1153 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - __pyx_t_2 = ((__pyx_v_src_stride > 0) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - __pyx_t_2 = ((__pyx_v_dst_stride > 0) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - - /* "View.MemoryView":1154 - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - */ - __pyx_t_2 = (((size_t)__pyx_v_src_stride) == __pyx_v_itemsize); - if (__pyx_t_2) { - __pyx_t_2 = (__pyx_v_itemsize == ((size_t)__pyx_v_dst_stride)); - } - __pyx_t_3 = (__pyx_t_2 != 0); - __pyx_t_1 = __pyx_t_3; - __pyx_L5_bool_binop_done:; - - /* "View.MemoryView":1153 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - if (__pyx_t_1) { - - /* "View.MemoryView":1155 - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, (__pyx_v_itemsize * __pyx_v_dst_extent))); - - /* "View.MemoryView":1153 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * 
memcpy(dst_data, src_data, itemsize * dst_extent) - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1157 - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - */ - /*else*/ { - __pyx_t_4 = __pyx_v_dst_extent; - __pyx_t_5 = __pyx_t_4; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1158 - * else: - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) # <<<<<<<<<<<<<< - * src_data += src_stride - * dst_data += dst_stride - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, __pyx_v_itemsize)); - - /* "View.MemoryView":1159 - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * else: - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1160 - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L4:; - - /* "View.MemoryView":1152 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1162 - * dst_data += dst_stride - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * _copy_strided_to_strided(src_data, src_strides + 1, - * dst_data, dst_strides + 1, - */ - /*else*/ { - __pyx_t_4 = __pyx_v_dst_extent; - __pyx_t_5 = __pyx_t_4; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1163 - * else: - * for i in range(dst_extent): - * _copy_strided_to_strided(src_data, src_strides + 1, # <<<<<<<<<<<<<< - * dst_data, 
dst_strides + 1, - * src_shape + 1, dst_shape + 1, - */ - _copy_strided_to_strided(__pyx_v_src_data, (__pyx_v_src_strides + 1), __pyx_v_dst_data, (__pyx_v_dst_strides + 1), (__pyx_v_src_shape + 1), (__pyx_v_dst_shape + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize); - - /* "View.MemoryView":1167 - * src_shape + 1, dst_shape + 1, - * ndim - 1, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1168 - * ndim - 1, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L3:; - - /* "View.MemoryView":1140 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - - /* function exit code */ -} - -/* "View.MemoryView":1170 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - */ - -static void copy_strided_to_strided(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - - /* "View.MemoryView":1173 - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - * _copy_strided_to_strided(src.data, src.strides, dst.data, dst.strides, # <<<<<<<<<<<<<< - * src.shape, dst.shape, ndim, itemsize) - * - */ - _copy_strided_to_strided(__pyx_v_src->data, __pyx_v_src->strides, __pyx_v_dst->data, __pyx_v_dst->strides, __pyx_v_src->shape, __pyx_v_dst->shape, __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1170 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * 
__Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1177 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *__pyx_v_src, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_size; - Py_ssize_t __pyx_r; - Py_ssize_t __pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - - /* "View.MemoryView":1179 - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize # <<<<<<<<<<<<<< - * - * for shape in src.shape[:ndim]: - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_size = __pyx_t_1; - - /* "View.MemoryView":1181 - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - * - * for shape in src.shape[:ndim]: # <<<<<<<<<<<<<< - * size *= shape - * - */ - __pyx_t_3 = (__pyx_v_src->shape + __pyx_v_ndim); - for (__pyx_t_4 = __pyx_v_src->shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_v_shape = (__pyx_t_2[0]); - - /* "View.MemoryView":1182 - * - * for shape in src.shape[:ndim]: - * size *= shape # <<<<<<<<<<<<<< - * - * return size - */ - __pyx_v_size = (__pyx_v_size * __pyx_v_shape); - } - - /* "View.MemoryView":1184 - * size *= shape - * - * return size # <<<<<<<<<<<<<< - * - * @cname('__pyx_fill_contig_strides_array') - */ - __pyx_r = __pyx_v_size; - goto __pyx_L0; - - /* "View.MemoryView":1177 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: # <<<<<<<<<<<<<< - * "Return the 
size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1187 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) nogil: - */ - -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, Py_ssize_t __pyx_v_stride, int __pyx_v_ndim, char __pyx_v_order) { - int __pyx_v_idx; - Py_ssize_t __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1196 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - __pyx_t_1 = ((__pyx_v_order == 'F') != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1197 - * - * if order == 'F': - * for idx in range(ndim): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - __pyx_t_2 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_idx = __pyx_t_4; - - /* "View.MemoryView":1198 - * if order == 'F': - * for idx in range(ndim): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * else: - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1199 - * for idx in range(ndim): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * else: - * for idx in range(ndim - 1, -1, -1): - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - - /* "View.MemoryView":1196 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1201 - * stride *= shape[idx] - * else: - * for idx in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * 
strides[idx] = stride - * stride *= shape[idx] - */ - /*else*/ { - for (__pyx_t_2 = (__pyx_v_ndim - 1); __pyx_t_2 > -1; __pyx_t_2-=1) { - __pyx_v_idx = __pyx_t_2; - - /* "View.MemoryView":1202 - * else: - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1203 - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * - * return stride - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - } - __pyx_L3:; - - /* "View.MemoryView":1205 - * stride *= shape[idx] - * - * return stride # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_data_to_temp') - */ - __pyx_r = __pyx_v_stride; - goto __pyx_L0; - - /* "View.MemoryView":1187 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) nogil: - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1208 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - -static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_tmpslice, char __pyx_v_order, int __pyx_v_ndim) { - int __pyx_v_i; - void *__pyx_v_result; - size_t __pyx_v_itemsize; - size_t __pyx_v_size; - void *__pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - struct __pyx_memoryview_obj *__pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":1219 - * cdef void *result - * - * cdef size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef size_t size = 
slice_get_size(src, ndim) - * - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1220 - * - * cdef size_t itemsize = src.memview.view.itemsize - * cdef size_t size = slice_get_size(src, ndim) # <<<<<<<<<<<<<< - * - * result = malloc(size) - */ - __pyx_v_size = __pyx_memoryview_slice_get_size(__pyx_v_src, __pyx_v_ndim); - - /* "View.MemoryView":1222 - * cdef size_t size = slice_get_size(src, ndim) - * - * result = malloc(size) # <<<<<<<<<<<<<< - * if not result: - * _err(MemoryError, NULL) - */ - __pyx_v_result = malloc(__pyx_v_size); - - /* "View.MemoryView":1223 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err(MemoryError, NULL) - * - */ - __pyx_t_2 = ((!(__pyx_v_result != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1224 - * result = malloc(size) - * if not result: - * _err(MemoryError, NULL) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_err(__pyx_builtin_MemoryError, NULL); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 1224, __pyx_L1_error) - - /* "View.MemoryView":1223 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err(MemoryError, NULL) - * - */ - } - - /* "View.MemoryView":1227 - * - * - * tmpslice.data = result # <<<<<<<<<<<<<< - * tmpslice.memview = src.memview - * for i in range(ndim): - */ - __pyx_v_tmpslice->data = ((char *)__pyx_v_result); - - /* "View.MemoryView":1228 - * - * tmpslice.data = result - * tmpslice.memview = src.memview # <<<<<<<<<<<<<< - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - */ - __pyx_t_4 = __pyx_v_src->memview; - __pyx_v_tmpslice->memview = __pyx_t_4; - - /* "View.MemoryView":1229 - * tmpslice.data = result - * tmpslice.memview = src.memview - * for i in range(ndim): # <<<<<<<<<<<<<< - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - 
__pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1230 - * tmpslice.memview = src.memview - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] # <<<<<<<<<<<<<< - * tmpslice.suboffsets[i] = -1 - * - */ - (__pyx_v_tmpslice->shape[__pyx_v_i]) = (__pyx_v_src->shape[__pyx_v_i]); - - /* "View.MemoryView":1231 - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, - */ - (__pyx_v_tmpslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1233 - * tmpslice.suboffsets[i] = -1 - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, # <<<<<<<<<<<<<< - * ndim, order) - * - */ - (void)(__pyx_fill_contig_strides_array((&(__pyx_v_tmpslice->shape[0])), (&(__pyx_v_tmpslice->strides[0])), __pyx_v_itemsize, __pyx_v_ndim, __pyx_v_order)); - - /* "View.MemoryView":1237 - * - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1238 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - __pyx_t_2 = (((__pyx_v_tmpslice->shape[__pyx_v_i]) == 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1239 - * for i in range(ndim): - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 # <<<<<<<<<<<<<< - * - * if slice_is_contig(src[0], order, ndim): - */ - (__pyx_v_tmpslice->strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1238 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - } - } - - /* "View.MemoryView":1241 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - 
__pyx_t_2 = (__pyx_memviewslice_is_contig((__pyx_v_src[0]), __pyx_v_order, __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1242 - * - * if slice_is_contig(src[0], order, ndim): - * memcpy(result, src.data, size) # <<<<<<<<<<<<<< - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - */ - (void)(memcpy(__pyx_v_result, __pyx_v_src->data, __pyx_v_size)); - - /* "View.MemoryView":1241 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":1244 - * memcpy(result, src.data, size) - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) # <<<<<<<<<<<<<< - * - * return result - */ - /*else*/ { - copy_strided_to_strided(__pyx_v_src, __pyx_v_tmpslice, __pyx_v_ndim, __pyx_v_itemsize); - } - __pyx_L9:; - - /* "View.MemoryView":1246 - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":1208 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.copy_data_to_temp", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = NULL; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1251 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - */ - -static int __pyx_memoryview_err_extents(int __pyx_v_i, 
Py_ssize_t __pyx_v_extent1, Py_ssize_t __pyx_v_extent2) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_extents", 0); - - /* "View.MemoryView":1254 - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - * (i, extent1, extent2)) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err_dim') - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_i); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_extent1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_extent2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_3); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_3 = 0; - - /* "View.MemoryView":1253 - * cdef int _err_extents(int i, Py_ssize_t extent1, - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % # <<<<<<<<<<<<<< - * (i, extent1, extent2)) - * - */ - __pyx_t_3 = __Pyx_PyString_Format(__pyx_kp_s_got_differing_extents_in_dimensi, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = 
__Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __PYX_ERR(1, 1253, __pyx_L1_error) - - /* "View.MemoryView":1251 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView._err_extents", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1257 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii') % dim) - * - */ - -static int __pyx_memoryview_err_dim(PyObject *__pyx_v_error, char *__pyx_v_msg, int __pyx_v_dim) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_dim", 0); - __Pyx_INCREF(__pyx_v_error); - - /* "View.MemoryView":1258 - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: - * raise error(msg.decode('ascii') % dim) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err') - */ - __pyx_t_2 = 
__Pyx_decode_c_string(__pyx_v_msg, 0, strlen(__pyx_v_msg), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyUnicode_Format(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_v_error); - __pyx_t_3 = __pyx_v_error; __pyx_t_2 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - } - } - __pyx_t_1 = (__pyx_t_2) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_2, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 1258, __pyx_L1_error) - - /* "View.MemoryView":1257 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii') % dim) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView._err_dim", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_error); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - 
__Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1261 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: # <<<<<<<<<<<<<< - * if msg != NULL: - * raise error(msg.decode('ascii')) - */ - -static int __pyx_memoryview_err(PyObject *__pyx_v_error, char *__pyx_v_msg) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err", 0); - __Pyx_INCREF(__pyx_v_error); - - /* "View.MemoryView":1262 - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii')) - * else: - */ - __pyx_t_1 = ((__pyx_v_msg != NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":1263 - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: - * raise error(msg.decode('ascii')) # <<<<<<<<<<<<<< - * else: - * raise error - */ - __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_msg, 0, strlen(__pyx_v_msg), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1263, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_error); - __pyx_t_4 = __pyx_v_error; __pyx_t_5 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - } - } - __pyx_t_2 = (__pyx_t_5) ? 
__Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_t_3) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1263, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 1263, __pyx_L1_error) - - /* "View.MemoryView":1262 - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii')) - * else: - */ - } - - /* "View.MemoryView":1265 - * raise error(msg.decode('ascii')) - * else: - * raise error # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_contents') - */ - /*else*/ { - __Pyx_Raise(__pyx_v_error, 0, 0, 0); - __PYX_ERR(1, 1265, __pyx_L1_error) - } - - /* "View.MemoryView":1261 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: # <<<<<<<<<<<<<< - * if msg != NULL: - * raise error(msg.decode('ascii')) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView._err", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_error); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1268 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice __pyx_v_src, __Pyx_memviewslice __pyx_v_dst, int __pyx_v_src_ndim, int __pyx_v_dst_ndim, int __pyx_v_dtype_is_object) { - void *__pyx_v_tmpdata; - 
size_t __pyx_v_itemsize; - int __pyx_v_i; - char __pyx_v_order; - int __pyx_v_broadcasting; - int __pyx_v_direct_copy; - __Pyx_memviewslice __pyx_v_tmp; - int __pyx_v_ndim; - int __pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - void *__pyx_t_7; - int __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":1276 - * Check for overlapping memory and verify the shapes. - * """ - * cdef void *tmpdata = NULL # <<<<<<<<<<<<<< - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - */ - __pyx_v_tmpdata = NULL; - - /* "View.MemoryView":1277 - * """ - * cdef void *tmpdata = NULL - * cdef size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - */ - __pyx_t_1 = __pyx_v_src.memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1279 - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) # <<<<<<<<<<<<<< - * cdef bint broadcasting = False - * cdef bint direct_copy = False - */ - __pyx_v_order = __pyx_get_best_slice_order((&__pyx_v_src), __pyx_v_src_ndim); - - /* "View.MemoryView":1280 - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False # <<<<<<<<<<<<<< - * cdef bint direct_copy = False - * cdef __Pyx_memviewslice tmp - */ - __pyx_v_broadcasting = 0; - - /* "View.MemoryView":1281 - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False - * cdef bint direct_copy = False # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice tmp - * - */ - __pyx_v_direct_copy = 0; - - /* "View.MemoryView":1284 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - __pyx_t_2 = ((__pyx_v_src_ndim < 
__pyx_v_dst_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1285 - * - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_src), __pyx_v_src_ndim, __pyx_v_dst_ndim); - - /* "View.MemoryView":1284 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1286 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - __pyx_t_2 = ((__pyx_v_dst_ndim < __pyx_v_src_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1287 - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) # <<<<<<<<<<<<<< - * - * cdef int ndim = max(src_ndim, dst_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_dst), __pyx_v_dst_ndim, __pyx_v_src_ndim); - - /* "View.MemoryView":1286 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - } - __pyx_L3:; - - /* "View.MemoryView":1289 - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - * cdef int ndim = max(src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - __pyx_t_3 = __pyx_v_dst_ndim; - __pyx_t_4 = __pyx_v_src_ndim; - if (((__pyx_t_3 > __pyx_t_4) != 0)) { - __pyx_t_5 = __pyx_t_3; - } else { - __pyx_t_5 = __pyx_t_4; - } - __pyx_v_ndim = __pyx_t_5; - - /* "View.MemoryView":1291 - * cdef int ndim = max(src_ndim, dst_ndim) - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - */ - __pyx_t_5 = __pyx_v_ndim; - __pyx_t_3 = 
__pyx_t_5; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1292 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - __pyx_t_2 = (((__pyx_v_src.shape[__pyx_v_i]) != (__pyx_v_dst.shape[__pyx_v_i])) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1293 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - __pyx_t_2 = (((__pyx_v_src.shape[__pyx_v_i]) == 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1294 - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - * broadcasting = True # <<<<<<<<<<<<<< - * src.strides[i] = 0 - * else: - */ - __pyx_v_broadcasting = 1; - - /* "View.MemoryView":1295 - * if src.shape[i] == 1: - * broadcasting = True - * src.strides[i] = 0 # <<<<<<<<<<<<<< - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) - */ - (__pyx_v_src.strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1293 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - goto __pyx_L7; - } - - /* "View.MemoryView":1297 - * src.strides[i] = 0 - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) # <<<<<<<<<<<<<< - * - * if src.suboffsets[i] >= 0: - */ - /*else*/ { - __pyx_t_6 = __pyx_memoryview_err_extents(__pyx_v_i, (__pyx_v_dst.shape[__pyx_v_i]), (__pyx_v_src.shape[__pyx_v_i])); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1297, __pyx_L1_error) - } - __pyx_L7:; - - /* "View.MemoryView":1292 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - } - - /* "View.MemoryView":1299 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, 
"Dimension %d is not direct", i) - * - */ - __pyx_t_2 = (((__pyx_v_src.suboffsets[__pyx_v_i]) >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1300 - * - * if src.suboffsets[i] >= 0: - * _err_dim(ValueError, "Dimension %d is not direct", i) # <<<<<<<<<<<<<< - * - * if slices_overlap(&src, &dst, ndim, itemsize): - */ - __pyx_t_6 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Dimension %d is not direct"), __pyx_v_i); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1300, __pyx_L1_error) - - /* "View.MemoryView":1299 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - */ - } - } - - /* "View.MemoryView":1302 - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - __pyx_t_2 = (__pyx_slices_overlap((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1304 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - __pyx_t_2 = ((!(__pyx_memviewslice_is_contig(__pyx_v_src, __pyx_v_order, __pyx_v_ndim) != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1305 - * - * if not slice_is_contig(src, order, ndim): - * order = get_best_order(&dst, ndim) # <<<<<<<<<<<<<< - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - */ - __pyx_v_order = __pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim); - - /* "View.MemoryView":1304 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - } - - /* "View.MemoryView":1307 - * order = get_best_order(&dst, ndim) - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) # 
<<<<<<<<<<<<<< - * src = tmp - * - */ - __pyx_t_7 = __pyx_memoryview_copy_data_to_temp((&__pyx_v_src), (&__pyx_v_tmp), __pyx_v_order, __pyx_v_ndim); if (unlikely(__pyx_t_7 == ((void *)NULL))) __PYX_ERR(1, 1307, __pyx_L1_error) - __pyx_v_tmpdata = __pyx_t_7; - - /* "View.MemoryView":1308 - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - * src = tmp # <<<<<<<<<<<<<< - * - * if not broadcasting: - */ - __pyx_v_src = __pyx_v_tmp; - - /* "View.MemoryView":1302 - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - } - - /* "View.MemoryView":1310 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = ((!(__pyx_v_broadcasting != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1313 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig(__pyx_v_src, 'C', __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1314 - * - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) # <<<<<<<<<<<<<< - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'C', __pyx_v_ndim); - - /* "View.MemoryView":1313 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - goto __pyx_L12; - } - - /* "View.MemoryView":1315 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig(__pyx_v_src, 'F', __pyx_v_ndim) != 0); - if 
(__pyx_t_2) { - - /* "View.MemoryView":1316 - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) # <<<<<<<<<<<<<< - * - * if direct_copy: - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'F', __pyx_v_ndim); - - /* "View.MemoryView":1315 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - } - __pyx_L12:; - - /* "View.MemoryView":1318 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - __pyx_t_2 = (__pyx_v_direct_copy != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1320 - * if direct_copy: - * - * refcount_copying(&dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1321 - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) - */ - (void)(memcpy(__pyx_v_dst.data, __pyx_v_src.data, __pyx_memoryview_slice_get_size((&__pyx_v_src), __pyx_v_ndim))); - - /* "View.MemoryView":1322 - * refcount_copying(&dst, dtype_is_object, ndim, False) - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * free(tmpdata) - * return 0 - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1323 - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, 
dtype_is_object, ndim, True) - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ - free(__pyx_v_tmpdata); - - /* "View.MemoryView":1324 - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * if order == 'F' == get_best_order(&dst, ndim): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1318 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - } - - /* "View.MemoryView":1310 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1326 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = (__pyx_v_order == 'F'); - if (__pyx_t_2) { - __pyx_t_2 = ('F' == __pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim)); - } - __pyx_t_8 = (__pyx_t_2 != 0); - if (__pyx_t_8) { - - /* "View.MemoryView":1329 - * - * - * transpose_memslice(&src) # <<<<<<<<<<<<<< - * transpose_memslice(&dst) - * - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_src)); if (unlikely(__pyx_t_5 == ((int)0))) __PYX_ERR(1, 1329, __pyx_L1_error) - - /* "View.MemoryView":1330 - * - * transpose_memslice(&src) - * transpose_memslice(&dst) # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_dst)); if (unlikely(__pyx_t_5 == ((int)0))) __PYX_ERR(1, 1330, __pyx_L1_error) - - /* "View.MemoryView":1326 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1332 - * transpose_memslice(&dst) - * - * refcount_copying(&dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* 
"View.MemoryView":1333 - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, True) - * - */ - copy_strided_to_strided((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1334 - * refcount_copying(&dst, dtype_is_object, ndim, False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * - * free(tmpdata) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1336 - * refcount_copying(&dst, dtype_is_object, ndim, True) - * - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ - free(__pyx_v_tmpdata); - - /* "View.MemoryView":1337 - * - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_broadcast_leading') - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1268 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_contents", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = -1; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1340 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) nogil: - */ - -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim, int __pyx_v_ndim_other) { - int __pyx_v_i; - int __pyx_v_offset; - int 
__pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - - /* "View.MemoryView":1344 - * int ndim_other) nogil: - * cdef int i - * cdef int offset = ndim_other - ndim # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_offset = (__pyx_v_ndim_other - __pyx_v_ndim); - - /* "View.MemoryView":1346 - * cdef int offset = ndim_other - ndim - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1347 - * - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] # <<<<<<<<<<<<<< - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - */ - (__pyx_v_mslice->shape[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->shape[__pyx_v_i]); - - /* "View.MemoryView":1348 - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] # <<<<<<<<<<<<<< - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - */ - (__pyx_v_mslice->strides[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1349 - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] # <<<<<<<<<<<<<< - * - * for i in range(offset): - */ - (__pyx_v_mslice->suboffsets[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->suboffsets[__pyx_v_i]); - } - - /* "View.MemoryView":1351 - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - * for i in range(offset): # <<<<<<<<<<<<<< - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - */ - __pyx_t_1 = __pyx_v_offset; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* 
"View.MemoryView":1352 - * - * for i in range(offset): - * mslice.shape[i] = 1 # <<<<<<<<<<<<<< - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 - */ - (__pyx_v_mslice->shape[__pyx_v_i]) = 1; - - /* "View.MemoryView":1353 - * for i in range(offset): - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] # <<<<<<<<<<<<<< - * mslice.suboffsets[i] = -1 - * - */ - (__pyx_v_mslice->strides[__pyx_v_i]) = (__pyx_v_mslice->strides[0]); - - /* "View.MemoryView":1354 - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_mslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1340 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1362 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, # <<<<<<<<<<<<<< - * int ndim, bint inc) nogil: - * - */ - -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_dtype_is_object, int __pyx_v_ndim, int __pyx_v_inc) { - int __pyx_t_1; - - /* "View.MemoryView":1366 - * - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, - * dst.strides, ndim, inc) - */ - __pyx_t_1 = (__pyx_v_dtype_is_object != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1367 - * - * if dtype_is_object: - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, # <<<<<<<<<<<<<< - * dst.strides, ndim, inc) - * - */ - __pyx_memoryview_refcount_objects_in_slice_with_gil(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1366 - * - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, - * 
dst.strides, ndim, inc) - */ - } - - /* "View.MemoryView":1362 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, # <<<<<<<<<<<<<< - * int ndim, bint inc) nogil: - * - */ - - /* function exit code */ -} - -/* "View.MemoryView":1371 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - */ - -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - __Pyx_RefNannyDeclarations - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("refcount_objects_in_slice_with_gil", 0); - - /* "View.MemoryView":1374 - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - * refcount_objects_in_slice(data, shape, strides, ndim, inc) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, __pyx_v_shape, __pyx_v_strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1371 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif -} - -/* "View.MemoryView":1377 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, bint inc): - * cdef Py_ssize_t i - */ - -static void __pyx_memoryview_refcount_objects_in_slice(char 
*__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - __Pyx_RefNannySetupContext("refcount_objects_in_slice", 0); - - /* "View.MemoryView":1381 - * cdef Py_ssize_t i - * - * for i in range(shape[0]): # <<<<<<<<<<<<<< - * if ndim == 1: - * if inc: - */ - __pyx_t_1 = (__pyx_v_shape[0]); - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* "View.MemoryView":1382 - * - * for i in range(shape[0]): - * if ndim == 1: # <<<<<<<<<<<<<< - * if inc: - * Py_INCREF(( data)[0]) - */ - __pyx_t_4 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":1383 - * for i in range(shape[0]): - * if ndim == 1: - * if inc: # <<<<<<<<<<<<<< - * Py_INCREF(( data)[0]) - * else: - */ - __pyx_t_4 = (__pyx_v_inc != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":1384 - * if ndim == 1: - * if inc: - * Py_INCREF(( data)[0]) # <<<<<<<<<<<<<< - * else: - * Py_DECREF(( data)[0]) - */ - Py_INCREF((((PyObject **)__pyx_v_data)[0])); - - /* "View.MemoryView":1383 - * for i in range(shape[0]): - * if ndim == 1: - * if inc: # <<<<<<<<<<<<<< - * Py_INCREF(( data)[0]) - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":1386 - * Py_INCREF(( data)[0]) - * else: - * Py_DECREF(( data)[0]) # <<<<<<<<<<<<<< - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, - */ - /*else*/ { - Py_DECREF((((PyObject **)__pyx_v_data)[0])); - } - __pyx_L6:; - - /* "View.MemoryView":1382 - * - * for i in range(shape[0]): - * if ndim == 1: # <<<<<<<<<<<<<< - * if inc: - * Py_INCREF(( data)[0]) - */ - goto __pyx_L5; - } - - /* "View.MemoryView":1388 - * Py_DECREF(( data)[0]) - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, # <<<<<<<<<<<<<< - * ndim - 1, inc) - * - */ - /*else*/ { - - /* 
"View.MemoryView":1389 - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, - * ndim - 1, inc) # <<<<<<<<<<<<<< - * - * data += strides[0] - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_inc); - } - __pyx_L5:; - - /* "View.MemoryView":1391 - * ndim - 1, inc) - * - * data += strides[0] # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + (__pyx_v_strides[0])); - } - - /* "View.MemoryView":1377 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, bint inc): - * cdef Py_ssize_t i - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1397 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - */ - -static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item, int __pyx_v_dtype_is_object) { - - /* "View.MemoryView":1400 - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - * refcount_copying(dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, - * itemsize, item) - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1401 - * bint dtype_is_object) nogil: - * refcount_copying(dst, dtype_is_object, ndim, False) - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, # <<<<<<<<<<<<<< - * itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_itemsize, __pyx_v_item); - 
- /* "View.MemoryView":1403 - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, - * itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * - * - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1397 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1407 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) nogil: - */ - -static void __pyx_memoryview__slice_assign_scalar(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_extent; - int __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - Py_ssize_t __pyx_t_4; - - /* "View.MemoryView":1411 - * size_t itemsize, void *item) nogil: - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t extent = shape[0] - * - */ - __pyx_v_stride = (__pyx_v_strides[0]); - - /* "View.MemoryView":1412 - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] - * cdef Py_ssize_t extent = shape[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_extent = (__pyx_v_shape[0]); - - /* "View.MemoryView":1414 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - __pyx_t_1 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1415 - * - * if ndim == 1: - * for i in range(extent): # <<<<<<<<<<<<<< - * memcpy(data, item, 
itemsize) - * data += stride - */ - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1416 - * if ndim == 1: - * for i in range(extent): - * memcpy(data, item, itemsize) # <<<<<<<<<<<<<< - * data += stride - * else: - */ - (void)(memcpy(__pyx_v_data, __pyx_v_item, __pyx_v_itemsize)); - - /* "View.MemoryView":1417 - * for i in range(extent): - * memcpy(data, item, itemsize) - * data += stride # <<<<<<<<<<<<<< - * else: - * for i in range(extent): - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - - /* "View.MemoryView":1414 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1419 - * data += stride - * else: - * for i in range(extent): # <<<<<<<<<<<<<< - * _slice_assign_scalar(data, shape + 1, strides + 1, - * ndim - 1, itemsize, item) - */ - /*else*/ { - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1420 - * else: - * for i in range(extent): - * _slice_assign_scalar(data, shape + 1, strides + 1, # <<<<<<<<<<<<<< - * ndim - 1, itemsize, item) - * data += stride - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1422 - * _slice_assign_scalar(data, shape + 1, strides + 1, - * ndim - 1, itemsize, item) - * data += stride # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - } - __pyx_L3:; - - /* "View.MemoryView":1407 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) nogil: - */ - - 
/* function exit code */ -} - -/* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static PyMethodDef __pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum = {"__pyx_unpickle_Enum", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum, METH_VARARGS|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v___pyx_type = 0; - long __pyx_v___pyx_checksum; - PyObject *__pyx_v___pyx_state = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_type,&__pyx_n_s_pyx_checksum,&__pyx_n_s_pyx_state,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_type)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_checksum)) != 0)) kw_args--; - else { - 
__Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 1); __PYX_ERR(1, 1, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 2); __PYX_ERR(1, 1, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__pyx_unpickle_Enum") < 0)) __PYX_ERR(1, 1, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 3) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - } - __pyx_v___pyx_type = values[0]; - __pyx_v___pyx_checksum = __Pyx_PyInt_As_long(values[1]); if (unlikely((__pyx_v___pyx_checksum == (long)-1) && PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_v___pyx_state = values[2]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(__pyx_self, __pyx_v___pyx_type, __pyx_v___pyx_checksum, __pyx_v___pyx_state); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_v___pyx_PickleError = 0; - PyObject *__pyx_v___pyx_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - 
PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum", 0); - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum != 0xb068931: # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - */ - __pyx_t_1 = ((__pyx_v___pyx_checksum != 0xb068931) != 0); - if (__pyx_t_1) { - - /* "(tree fragment)":5 - * cdef object __pyx_result - * if __pyx_checksum != 0xb068931: - * from pickle import PickleError as __pyx_PickleError # <<<<<<<<<<<<<< - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_s_PickleError); - __Pyx_GIVEREF(__pyx_n_s_PickleError); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_PickleError); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_pickle, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_PickleError); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_2); - __pyx_v___pyx_PickleError = __pyx_t_2; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "(tree fragment)":6 - * if __pyx_checksum != 0xb068931: - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) # <<<<<<<<<<<<<< - * __pyx_result = Enum.__new__(__pyx_type) - * if 
__pyx_state is not None: - */ - __pyx_t_2 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Incompatible_checksums_s_vs_0xb0, __pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(__pyx_v___pyx_PickleError); - __pyx_t_2 = __pyx_v___pyx_PickleError; __pyx_t_5 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_3 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_5, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 6, __pyx_L1_error) - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum != 0xb068931: # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - */ - } - - /* "(tree fragment)":7 - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) # <<<<<<<<<<<<<< - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_MemviewEnum_type), 
__pyx_n_s_new); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_3 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_4, __pyx_v___pyx_type) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v___pyx_type); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v___pyx_result = __pyx_t_3; - __pyx_t_3 = 0; - - /* "(tree fragment)":8 - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - */ - __pyx_t_1 = (__pyx_v___pyx_state != Py_None); - __pyx_t_6 = (__pyx_t_1 != 0); - if (__pyx_t_6) { - - /* "(tree fragment)":9 - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) # <<<<<<<<<<<<<< - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - */ - if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 9, __pyx_L1_error) - __pyx_t_3 = __pyx_unpickle_Enum__set_state(((struct __pyx_MemviewEnum_obj *)__pyx_v___pyx_result), ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 9, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 
0; - - /* "(tree fragment)":8 - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - */ - } - - /* "(tree fragment)":10 - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result # <<<<<<<<<<<<<< - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v___pyx_result); - __pyx_r = __pyx_v___pyx_result; - goto __pyx_L0; - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v___pyx_PickleError); - __Pyx_XDECREF(__pyx_v___pyx_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - -static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - PyObject 
*__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum__set_state", 0); - - /* "(tree fragment)":12 - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] # <<<<<<<<<<<<<< - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 12, __pyx_L1_error) - } - __pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __Pyx_GOTREF(__pyx_v___pyx_result->name); - __Pyx_DECREF(__pyx_v___pyx_result->name); - __pyx_v___pyx_result->name = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 13, __pyx_L1_error) - } - __pyx_t_3 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_4 = ((__pyx_t_3 > 1) != 0); - if (__pyx_t_4) { - } else { - __pyx_t_2 = __pyx_t_4; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_4 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_5 = (__pyx_t_4 != 0); - __pyx_t_2 = __pyx_t_5; - 
__pyx_L4_bool_binop_done:; - if (__pyx_t_2) { - - /* "(tree fragment)":14 - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) # <<<<<<<<<<<<<< - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_update); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 14, __pyx_L1_error) - } - __pyx_t_6 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - } - } - __pyx_t_1 = (__pyx_t_8) ? 
__Pyx_PyObject_Call2Args(__pyx_t_7, __pyx_t_8, __pyx_t_6) : __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_6); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - } - - /* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static struct __pyx_vtabstruct_array __pyx_vtable_array; - -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_array_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_array_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_array; - p->mode = ((PyObject*)Py_None); Py_INCREF(Py_None); - p->_format = ((PyObject*)Py_None); Py_INCREF(Py_None); 
- if (unlikely(__pyx_array___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_array(PyObject *o) { - struct __pyx_array_obj *p = (struct __pyx_array_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_array___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->mode); - Py_CLEAR(p->_format); - (*Py_TYPE(o)->tp_free)(o); -} -static PyObject *__pyx_sq_item_array(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_array(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_array___setitem__(o, i, v); - } - else { - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name); - return -1; - } -} - -static PyObject *__pyx_tp_getattro_array(PyObject *o, PyObject *n) { - PyObject *v = __Pyx_PyObject_GenericGetAttr(o, n); - if (!v && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - v = __pyx_array___getattr__(o, n); - } - return v; -} - -static PyObject *__pyx_getprop___pyx_array_memview(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(o); -} - -static PyMethodDef __pyx_methods_array[] = { - {"__getattr__", (PyCFunction)__pyx_array___getattr__, METH_O|METH_COEXIST, 0}, - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_array_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", 
(PyCFunction)__pyx_pw___pyx_array_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_array[] = { - {(char *)"memview", __pyx_getprop___pyx_array_memview, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PySequenceMethods __pyx_tp_as_sequence_array = { - __pyx_array___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_array, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_array = { - __pyx_array___len__, /*mp_length*/ - __pyx_array___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_array, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_array = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_array_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_array = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.array", /*tp_name*/ - sizeof(struct __pyx_array_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_array, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_array, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_array, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - __pyx_tp_getattro_array, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_array, /*tp_as_buffer*/ - 
Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/ - 0, /*tp_doc*/ - 0, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_array, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_array, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_array, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; - -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - struct __pyx_MemviewEnum_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_MemviewEnum_obj *)o); - p->name = Py_None; Py_INCREF(Py_None); - return o; -} - -static void __pyx_tp_dealloc_Enum(PyObject *o) { - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - Py_CLEAR(p->name); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_Enum(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - if (p->name) { - e = 
(*v)(p->name, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_Enum(PyObject *o) { - PyObject* tmp; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - tmp = ((PyObject*)p->name); - p->name = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} - -static PyMethodDef __pyx_methods_Enum[] = { - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_MemviewEnum_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_MemviewEnum_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static PyTypeObject __pyx_type___pyx_MemviewEnum = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.Enum", /*tp_name*/ - sizeof(struct __pyx_MemviewEnum_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_Enum, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_MemviewEnum___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_Enum, /*tp_traverse*/ - __pyx_tp_clear_Enum, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_Enum, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - __pyx_MemviewEnum___init__, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_Enum, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, 
/*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; -static struct __pyx_vtabstruct_memoryview __pyx_vtable_memoryview; - -static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryview_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_memoryview_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_memoryview; - p->obj = Py_None; Py_INCREF(Py_None); - p->_size = Py_None; Py_INCREF(Py_None); - p->_array_interface = Py_None; Py_INCREF(Py_None); - p->view.obj = NULL; - if (unlikely(__pyx_memoryview___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_memoryview(PyObject *o) { - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryview___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->obj); - Py_CLEAR(p->_size); - Py_CLEAR(p->_array_interface); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_memoryview(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - if (p->obj) { - e = 
(*v)(p->obj, a); if (e) return e; - } - if (p->_size) { - e = (*v)(p->_size, a); if (e) return e; - } - if (p->_array_interface) { - e = (*v)(p->_array_interface, a); if (e) return e; - } - if (p->view.obj) { - e = (*v)(p->view.obj, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_memoryview(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - tmp = ((PyObject*)p->obj); - p->obj = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->_size); - p->_size = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->_array_interface); - p->_array_interface = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - Py_CLEAR(p->view.obj); - return 0; -} -static PyObject *__pyx_sq_item_memoryview(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_memoryview(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_memoryview___setitem__(o, i, v); - } - else { - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name); - return -1; - } -} - -static PyObject *__pyx_getprop___pyx_memoryview_T(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_base(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_shape(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_strides(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(o); -} - -static PyObject 
*__pyx_getprop___pyx_memoryview_suboffsets(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_ndim(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_itemsize(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_nbytes(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_size(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(o); -} - -static PyMethodDef __pyx_methods_memoryview[] = { - {"is_c_contig", (PyCFunction)__pyx_memoryview_is_c_contig, METH_NOARGS, 0}, - {"is_f_contig", (PyCFunction)__pyx_memoryview_is_f_contig, METH_NOARGS, 0}, - {"copy", (PyCFunction)__pyx_memoryview_copy, METH_NOARGS, 0}, - {"copy_fortran", (PyCFunction)__pyx_memoryview_copy_fortran, METH_NOARGS, 0}, - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_memoryview_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_memoryview_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_memoryview[] = { - {(char *)"T", __pyx_getprop___pyx_memoryview_T, 0, (char *)0, 0}, - {(char *)"base", __pyx_getprop___pyx_memoryview_base, 0, (char *)0, 0}, - {(char *)"shape", __pyx_getprop___pyx_memoryview_shape, 0, (char *)0, 0}, - {(char *)"strides", __pyx_getprop___pyx_memoryview_strides, 0, (char *)0, 0}, - {(char *)"suboffsets", __pyx_getprop___pyx_memoryview_suboffsets, 0, (char *)0, 0}, - {(char *)"ndim", __pyx_getprop___pyx_memoryview_ndim, 0, (char *)0, 0}, - {(char *)"itemsize", 
__pyx_getprop___pyx_memoryview_itemsize, 0, (char *)0, 0}, - {(char *)"nbytes", __pyx_getprop___pyx_memoryview_nbytes, 0, (char *)0, 0}, - {(char *)"size", __pyx_getprop___pyx_memoryview_size, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PySequenceMethods __pyx_tp_as_sequence_memoryview = { - __pyx_memoryview___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_memoryview, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_memoryview = { - __pyx_memoryview___len__, /*mp_length*/ - __pyx_memoryview___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_memoryview, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_memoryview = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_memoryview_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_memoryview = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.memoryview", /*tp_name*/ - sizeof(struct __pyx_memoryview_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_memoryview, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_memoryview___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_memoryview, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_memoryview, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - __pyx_memoryview___str__, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 
&__pyx_tp_as_buffer_memoryview, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_memoryview, /*tp_traverse*/ - __pyx_tp_clear_memoryview, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_memoryview, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_memoryview, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_memoryview, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; -static struct __pyx_vtabstruct__memoryviewslice __pyx_vtable__memoryviewslice; - -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryviewslice_obj *p; - PyObject *o = __pyx_tp_new_memoryview(t, a, k); - if (unlikely(!o)) return 0; - p = ((struct __pyx_memoryviewslice_obj *)o); - p->__pyx_base.__pyx_vtab = (struct __pyx_vtabstruct_memoryview*)__pyx_vtabptr__memoryviewslice; - p->from_object = Py_None; Py_INCREF(Py_None); - p->from_slice.memview = NULL; - return o; -} - -static void __pyx_tp_dealloc__memoryviewslice(PyObject *o) { - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - { 
- PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryviewslice___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->from_object); - PyObject_GC_Track(o); - __pyx_tp_dealloc_memoryview(o); -} - -static int __pyx_tp_traverse__memoryviewslice(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - e = __pyx_tp_traverse_memoryview(o, v, a); if (e) return e; - if (p->from_object) { - e = (*v)(p->from_object, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear__memoryviewslice(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - __pyx_tp_clear_memoryview(o); - tmp = ((PyObject*)p->from_object); - p->from_object = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - __PYX_XDEC_MEMVIEW(&p->from_slice, 1); - return 0; -} - -static PyObject *__pyx_getprop___pyx_memoryviewslice_base(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(o); -} - -static PyMethodDef __pyx_methods__memoryviewslice[] = { - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_memoryviewslice_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_memoryviewslice_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets__memoryviewslice[] = { - {(char *)"base", __pyx_getprop___pyx_memoryviewslice_base, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PyTypeObject __pyx_type___pyx_memoryviewslice = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core._memoryviewslice", /*tp_name*/ - sizeof(struct __pyx_memoryviewslice_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc__memoryviewslice, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 
0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - #if CYTHON_COMPILING_IN_PYPY - __pyx_memoryview___repr__, /*tp_repr*/ - #else - 0, /*tp_repr*/ - #endif - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - #if CYTHON_COMPILING_IN_PYPY - __pyx_memoryview___str__, /*tp_str*/ - #else - 0, /*tp_str*/ - #endif - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - "Internal class for passing memoryview slices to Python", /*tp_doc*/ - __pyx_tp_traverse__memoryviewslice, /*tp_traverse*/ - __pyx_tp_clear__memoryviewslice, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods__memoryviewslice, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets__memoryviewslice, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new__memoryviewslice, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; - -static PyMethodDef __pyx_methods[] = { - {"maximum_path_c", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_15monotonic_align_4core_1maximum_path_c, METH_VARARGS|METH_KEYWORDS, 0}, - {0, 0, 0, 0} -}; - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* 
__pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_core(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_core}, - {0, NULL} -}; -#endif - -static struct PyModuleDef __pyx_moduledef = { - PyModuleDef_HEAD_INIT, - "core", - 0, /* m_doc */ - #if CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ -}; -#endif -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif - -static __Pyx_StringTabEntry __pyx_string_tab[] = { - {&__pyx_n_s_ASCII, __pyx_k_ASCII, sizeof(__pyx_k_ASCII), 0, 0, 1, 1}, - {&__pyx_kp_s_Buffer_view_does_not_expose_stri, __pyx_k_Buffer_view_does_not_expose_stri, sizeof(__pyx_k_Buffer_view_does_not_expose_stri), 0, 0, 1, 0}, - {&__pyx_kp_s_Can_only_create_a_buffer_that_is, __pyx_k_Can_only_create_a_buffer_that_is, sizeof(__pyx_k_Can_only_create_a_buffer_that_is), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_assign_to_read_only_memor, __pyx_k_Cannot_assign_to_read_only_memor, sizeof(__pyx_k_Cannot_assign_to_read_only_memor), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_create_writable_memory_vi, __pyx_k_Cannot_create_writable_memory_vi, sizeof(__pyx_k_Cannot_create_writable_memory_vi), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_index_with_type_s, __pyx_k_Cannot_index_with_type_s, sizeof(__pyx_k_Cannot_index_with_type_s), 0, 0, 1, 0}, - {&__pyx_n_s_Ellipsis, __pyx_k_Ellipsis, sizeof(__pyx_k_Ellipsis), 0, 0, 1, 1}, - {&__pyx_kp_s_Empty_shape_tuple_for_cython_arr, 
__pyx_k_Empty_shape_tuple_for_cython_arr, sizeof(__pyx_k_Empty_shape_tuple_for_cython_arr), 0, 0, 1, 0}, - {&__pyx_kp_s_Incompatible_checksums_s_vs_0xb0, __pyx_k_Incompatible_checksums_s_vs_0xb0, sizeof(__pyx_k_Incompatible_checksums_s_vs_0xb0), 0, 0, 1, 0}, - {&__pyx_n_s_IndexError, __pyx_k_IndexError, sizeof(__pyx_k_IndexError), 0, 0, 1, 1}, - {&__pyx_kp_s_Indirect_dimensions_not_supporte, __pyx_k_Indirect_dimensions_not_supporte, sizeof(__pyx_k_Indirect_dimensions_not_supporte), 0, 0, 1, 0}, - {&__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_k_Invalid_mode_expected_c_or_fortr, sizeof(__pyx_k_Invalid_mode_expected_c_or_fortr), 0, 0, 1, 0}, - {&__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_k_Invalid_shape_in_axis_d_d, sizeof(__pyx_k_Invalid_shape_in_axis_d_d), 0, 0, 1, 0}, - {&__pyx_n_s_MemoryError, __pyx_k_MemoryError, sizeof(__pyx_k_MemoryError), 0, 0, 1, 1}, - {&__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_k_MemoryView_of_r_at_0x_x, sizeof(__pyx_k_MemoryView_of_r_at_0x_x), 0, 0, 1, 0}, - {&__pyx_kp_s_MemoryView_of_r_object, __pyx_k_MemoryView_of_r_object, sizeof(__pyx_k_MemoryView_of_r_object), 0, 0, 1, 0}, - {&__pyx_n_b_O, __pyx_k_O, sizeof(__pyx_k_O), 0, 0, 0, 1}, - {&__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_k_Out_of_bounds_on_buffer_access_a, sizeof(__pyx_k_Out_of_bounds_on_buffer_access_a), 0, 0, 1, 0}, - {&__pyx_n_s_PickleError, __pyx_k_PickleError, sizeof(__pyx_k_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_TypeError, __pyx_k_TypeError, sizeof(__pyx_k_TypeError), 0, 0, 1, 1}, - {&__pyx_kp_s_Unable_to_convert_item_to_object, __pyx_k_Unable_to_convert_item_to_object, sizeof(__pyx_k_Unable_to_convert_item_to_object), 0, 0, 1, 0}, - {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1}, - {&__pyx_n_s_View_MemoryView, __pyx_k_View_MemoryView, sizeof(__pyx_k_View_MemoryView), 0, 0, 1, 1}, - {&__pyx_n_s_allocate_buffer, __pyx_k_allocate_buffer, sizeof(__pyx_k_allocate_buffer), 0, 0, 1, 1}, - {&__pyx_n_s_base, __pyx_k_base, 
sizeof(__pyx_k_base), 0, 0, 1, 1}, - {&__pyx_n_s_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 0, 1, 1}, - {&__pyx_n_u_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 1, 0, 1}, - {&__pyx_n_s_class, __pyx_k_class, sizeof(__pyx_k_class), 0, 0, 1, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_kp_s_contiguous_and_direct, __pyx_k_contiguous_and_direct, sizeof(__pyx_k_contiguous_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_contiguous_and_indirect, __pyx_k_contiguous_and_indirect, sizeof(__pyx_k_contiguous_and_indirect), 0, 0, 1, 0}, - {&__pyx_n_s_dict, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 1, 1}, - {&__pyx_n_s_dtype_is_object, __pyx_k_dtype_is_object, sizeof(__pyx_k_dtype_is_object), 0, 0, 1, 1}, - {&__pyx_n_s_encode, __pyx_k_encode, sizeof(__pyx_k_encode), 0, 0, 1, 1}, - {&__pyx_n_s_enumerate, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1}, - {&__pyx_n_s_error, __pyx_k_error, sizeof(__pyx_k_error), 0, 0, 1, 1}, - {&__pyx_n_s_flags, __pyx_k_flags, sizeof(__pyx_k_flags), 0, 0, 1, 1}, - {&__pyx_n_s_format, __pyx_k_format, sizeof(__pyx_k_format), 0, 0, 1, 1}, - {&__pyx_n_s_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 0, 1, 1}, - {&__pyx_n_u_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 1, 0, 1}, - {&__pyx_n_s_getstate, __pyx_k_getstate, sizeof(__pyx_k_getstate), 0, 0, 1, 1}, - {&__pyx_kp_s_got_differing_extents_in_dimensi, __pyx_k_got_differing_extents_in_dimensi, sizeof(__pyx_k_got_differing_extents_in_dimensi), 0, 0, 1, 0}, - {&__pyx_n_s_id, __pyx_k_id, sizeof(__pyx_k_id), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_s_itemsize, __pyx_k_itemsize, sizeof(__pyx_k_itemsize), 0, 0, 1, 1}, - {&__pyx_kp_s_itemsize_0_for_cython_array, __pyx_k_itemsize_0_for_cython_array, sizeof(__pyx_k_itemsize_0_for_cython_array), 0, 0, 1, 0}, - {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {&__pyx_n_s_memview, __pyx_k_memview, 
sizeof(__pyx_k_memview), 0, 0, 1, 1}, - {&__pyx_n_s_mode, __pyx_k_mode, sizeof(__pyx_k_mode), 0, 0, 1, 1}, - {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {&__pyx_n_s_name_2, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1}, - {&__pyx_n_s_ndim, __pyx_k_ndim, sizeof(__pyx_k_ndim), 0, 0, 1, 1}, - {&__pyx_n_s_new, __pyx_k_new, sizeof(__pyx_k_new), 0, 0, 1, 1}, - {&__pyx_kp_s_no_default___reduce___due_to_non, __pyx_k_no_default___reduce___due_to_non, sizeof(__pyx_k_no_default___reduce___due_to_non), 0, 0, 1, 0}, - {&__pyx_n_s_obj, __pyx_k_obj, sizeof(__pyx_k_obj), 0, 0, 1, 1}, - {&__pyx_n_s_pack, __pyx_k_pack, sizeof(__pyx_k_pack), 0, 0, 1, 1}, - {&__pyx_n_s_paths, __pyx_k_paths, sizeof(__pyx_k_paths), 0, 0, 1, 1}, - {&__pyx_n_s_pickle, __pyx_k_pickle, sizeof(__pyx_k_pickle), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_PickleError, __pyx_k_pyx_PickleError, sizeof(__pyx_k_pyx_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_checksum, __pyx_k_pyx_checksum, sizeof(__pyx_k_pyx_checksum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_getbuffer, __pyx_k_pyx_getbuffer, sizeof(__pyx_k_pyx_getbuffer), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_result, __pyx_k_pyx_result, sizeof(__pyx_k_pyx_result), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_state, __pyx_k_pyx_state, sizeof(__pyx_k_pyx_state), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_type, __pyx_k_pyx_type, sizeof(__pyx_k_pyx_type), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_unpickle_Enum, __pyx_k_pyx_unpickle_Enum, sizeof(__pyx_k_pyx_unpickle_Enum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_vtable, __pyx_k_pyx_vtable, sizeof(__pyx_k_pyx_vtable), 0, 0, 1, 1}, - {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, - {&__pyx_n_s_reduce, __pyx_k_reduce, sizeof(__pyx_k_reduce), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_cython, __pyx_k_reduce_cython, sizeof(__pyx_k_reduce_cython), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_ex, __pyx_k_reduce_ex, sizeof(__pyx_k_reduce_ex), 0, 0, 1, 1}, - {&__pyx_n_s_setstate, __pyx_k_setstate, sizeof(__pyx_k_setstate), 0, 0, 1, 1}, - {&__pyx_n_s_setstate_cython, 
__pyx_k_setstate_cython, sizeof(__pyx_k_setstate_cython), 0, 0, 1, 1}, - {&__pyx_n_s_shape, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1}, - {&__pyx_n_s_size, __pyx_k_size, sizeof(__pyx_k_size), 0, 0, 1, 1}, - {&__pyx_n_s_start, __pyx_k_start, sizeof(__pyx_k_start), 0, 0, 1, 1}, - {&__pyx_n_s_step, __pyx_k_step, sizeof(__pyx_k_step), 0, 0, 1, 1}, - {&__pyx_n_s_stop, __pyx_k_stop, sizeof(__pyx_k_stop), 0, 0, 1, 1}, - {&__pyx_kp_s_strided_and_direct, __pyx_k_strided_and_direct, sizeof(__pyx_k_strided_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_direct_or_indirect, __pyx_k_strided_and_direct_or_indirect, sizeof(__pyx_k_strided_and_direct_or_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_indirect, __pyx_k_strided_and_indirect, sizeof(__pyx_k_strided_and_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_stringsource, __pyx_k_stringsource, sizeof(__pyx_k_stringsource), 0, 0, 1, 0}, - {&__pyx_n_s_struct, __pyx_k_struct, sizeof(__pyx_k_struct), 0, 0, 1, 1}, - {&__pyx_n_s_t_xs, __pyx_k_t_xs, sizeof(__pyx_k_t_xs), 0, 0, 1, 1}, - {&__pyx_n_s_t_ys, __pyx_k_t_ys, sizeof(__pyx_k_t_ys), 0, 0, 1, 1}, - {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {&__pyx_kp_s_unable_to_allocate_array_data, __pyx_k_unable_to_allocate_array_data, sizeof(__pyx_k_unable_to_allocate_array_data), 0, 0, 1, 0}, - {&__pyx_kp_s_unable_to_allocate_shape_and_str, __pyx_k_unable_to_allocate_shape_and_str, sizeof(__pyx_k_unable_to_allocate_shape_and_str), 0, 0, 1, 0}, - {&__pyx_n_s_unpack, __pyx_k_unpack, sizeof(__pyx_k_unpack), 0, 0, 1, 1}, - {&__pyx_n_s_update, __pyx_k_update, sizeof(__pyx_k_update), 0, 0, 1, 1}, - {&__pyx_n_s_values, __pyx_k_values, sizeof(__pyx_k_values), 0, 0, 1, 1}, - {0, 0, 0, 0, 0, 0, 0} -}; -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 15, __pyx_L1_error) - __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if 
(!__pyx_builtin_ValueError) __PYX_ERR(1, 133, __pyx_L1_error) - __pyx_builtin_MemoryError = __Pyx_GetBuiltinName(__pyx_n_s_MemoryError); if (!__pyx_builtin_MemoryError) __PYX_ERR(1, 148, __pyx_L1_error) - __pyx_builtin_enumerate = __Pyx_GetBuiltinName(__pyx_n_s_enumerate); if (!__pyx_builtin_enumerate) __PYX_ERR(1, 151, __pyx_L1_error) - __pyx_builtin_TypeError = __Pyx_GetBuiltinName(__pyx_n_s_TypeError); if (!__pyx_builtin_TypeError) __PYX_ERR(1, 2, __pyx_L1_error) - __pyx_builtin_Ellipsis = __Pyx_GetBuiltinName(__pyx_n_s_Ellipsis); if (!__pyx_builtin_Ellipsis) __PYX_ERR(1, 404, __pyx_L1_error) - __pyx_builtin_id = __Pyx_GetBuiltinName(__pyx_n_s_id); if (!__pyx_builtin_id) __PYX_ERR(1, 613, __pyx_L1_error) - __pyx_builtin_IndexError = __Pyx_GetBuiltinName(__pyx_n_s_IndexError); if (!__pyx_builtin_IndexError) __PYX_ERR(1, 832, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* "View.MemoryView":133 - * - * if not self.ndim: - * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<< - * - * if itemsize <= 0: - */ - __pyx_tuple__2 = PyTuple_Pack(1, __pyx_kp_s_Empty_shape_tuple_for_cython_arr); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__2); - __Pyx_GIVEREF(__pyx_tuple__2); - - /* "View.MemoryView":136 - * - * if itemsize <= 0: - * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<< - * - * if not isinstance(format, bytes): - */ - __pyx_tuple__3 = PyTuple_Pack(1, __pyx_kp_s_itemsize_0_for_cython_array); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(1, 136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__3); - __Pyx_GIVEREF(__pyx_tuple__3); - - /* "View.MemoryView":148 - * - * if not self._shape: - * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__4 = PyTuple_Pack(1, 
__pyx_kp_s_unable_to_allocate_shape_and_str); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__4); - __Pyx_GIVEREF(__pyx_tuple__4); - - /* "View.MemoryView":176 - * self.data = malloc(self.len) - * if not self.data: - * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<< - * - * if self.dtype_is_object: - */ - __pyx_tuple__5 = PyTuple_Pack(1, __pyx_kp_s_unable_to_allocate_array_data); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(1, 176, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__5); - __Pyx_GIVEREF(__pyx_tuple__5); - - /* "View.MemoryView":192 - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<< - * info.buf = self.data - * info.len = self.len - */ - __pyx_tuple__6 = PyTuple_Pack(1, __pyx_kp_s_Can_only_create_a_buffer_that_is); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(1, 192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__6); - __Pyx_GIVEREF(__pyx_tuple__6); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__7); - __Pyx_GIVEREF(__pyx_tuple__7); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__8); - __Pyx_GIVEREF(__pyx_tuple__8); - - 
/* "View.MemoryView":418 - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<< - * - * have_slices, index = _unellipsify(index, self.view.ndim) - */ - __pyx_tuple__9 = PyTuple_Pack(1, __pyx_kp_s_Cannot_assign_to_read_only_memor); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(1, 418, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__9); - __Pyx_GIVEREF(__pyx_tuple__9); - - /* "View.MemoryView":495 - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - * raise ValueError("Unable to convert item to object") # <<<<<<<<<<<<<< - * else: - * if len(self.view.format) == 1: - */ - __pyx_tuple__10 = PyTuple_Pack(1, __pyx_kp_s_Unable_to_convert_item_to_object); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(1, 495, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__10); - __Pyx_GIVEREF(__pyx_tuple__10); - - /* "View.MemoryView":520 - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<< - * - * if flags & PyBUF_ND: - */ - __pyx_tuple__11 = PyTuple_Pack(1, __pyx_kp_s_Cannot_create_writable_memory_vi); if (unlikely(!__pyx_tuple__11)) __PYX_ERR(1, 520, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__11); - __Pyx_GIVEREF(__pyx_tuple__11); - - /* "View.MemoryView":570 - * if self.view.strides == NULL: - * - * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<< - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - */ - __pyx_tuple__12 = PyTuple_Pack(1, __pyx_kp_s_Buffer_view_does_not_expose_stri); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(1, 570, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__12); - __Pyx_GIVEREF(__pyx_tuple__12); - - /* "View.MemoryView":577 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # 
<<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - */ - __pyx_tuple__13 = PyTuple_New(1); if (unlikely(!__pyx_tuple__13)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__13); - __Pyx_INCREF(__pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_int_neg_1); - PyTuple_SET_ITEM(__pyx_tuple__13, 0, __pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_tuple__13); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__14 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__14); - __Pyx_GIVEREF(__pyx_tuple__14); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__15 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__15); - __Pyx_GIVEREF(__pyx_tuple__15); - - /* "View.MemoryView":682 - * if item is Ellipsis: - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<< - * seen_ellipsis = True - * else: - */ - __pyx_slice__16 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__16)) __PYX_ERR(1, 682, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - - /* "View.MemoryView":703 - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__17 = PyTuple_Pack(1, __pyx_kp_s_Indirect_dimensions_not_supporte); 
if (unlikely(!__pyx_tuple__17)) __PYX_ERR(1, 703, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__17); - __Pyx_GIVEREF(__pyx_tuple__17); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__18 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__18)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__18); - __Pyx_GIVEREF(__pyx_tuple__18); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__19 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__19); - __Pyx_GIVEREF(__pyx_tuple__19); - - /* "View.MemoryView":286 - * return self.name - * - * cdef generic = Enum("") # <<<<<<<<<<<<<< - * cdef strided = Enum("") # default - * cdef indirect = Enum("") - */ - __pyx_tuple__20 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct_or_indirect); if (unlikely(!__pyx_tuple__20)) __PYX_ERR(1, 286, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__20); - __Pyx_GIVEREF(__pyx_tuple__20); - - /* "View.MemoryView":287 - * - * cdef generic = Enum("") - * cdef strided = Enum("") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("") - * - */ - __pyx_tuple__21 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(1, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__21); - __Pyx_GIVEREF(__pyx_tuple__21); - - /* "View.MemoryView":288 - * cdef generic = Enum("") - * cdef strided = Enum("") # default - * cdef indirect = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__22 = 
PyTuple_Pack(1, __pyx_kp_s_strided_and_indirect); if (unlikely(!__pyx_tuple__22)) __PYX_ERR(1, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__22); - __Pyx_GIVEREF(__pyx_tuple__22); - - /* "View.MemoryView":291 - * - * - * cdef contiguous = Enum("") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("") - * - */ - __pyx_tuple__23 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_direct); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(1, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__23); - __Pyx_GIVEREF(__pyx_tuple__23); - - /* "View.MemoryView":292 - * - * cdef contiguous = Enum("") - * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__24 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_indirect); if (unlikely(!__pyx_tuple__24)) __PYX_ERR(1, 292, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__24); - __Pyx_GIVEREF(__pyx_tuple__24); - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_tuple__25 = PyTuple_Pack(5, __pyx_n_s_pyx_type, __pyx_n_s_pyx_checksum, __pyx_n_s_pyx_state, __pyx_n_s_pyx_PickleError, __pyx_n_s_pyx_result); if (unlikely(!__pyx_tuple__25)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__25); - __Pyx_GIVEREF(__pyx_tuple__25); - __pyx_codeobj__26 = (PyObject*)__Pyx_PyCode_New(3, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__25, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_pyx_unpickle_Enum, 1, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__26)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - /* InitThreads.init */ - #ifdef WITH_THREAD -PyEval_InitThreads(); -#endif - -if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error) - - if 
(__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_184977713 = PyInt_FromLong(184977713L); if (unlikely(!__pyx_int_184977713)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_neg_1 = PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int __Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - generic = Py_None; Py_INCREF(Py_None); - strided = Py_None; Py_INCREF(Py_None); - indirect = Py_None; Py_INCREF(Py_None); - contiguous = Py_None; Py_INCREF(Py_None); - indirect_contiguous = Py_None; Py_INCREF(Py_None); - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - 
__Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - __pyx_vtabptr_array = &__pyx_vtable_array; - __pyx_vtable_array.get_memview = (PyObject *(*)(struct __pyx_array_obj *))__pyx_array_get_memview; - if (PyType_Ready(&__pyx_type___pyx_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_array.tp_print = 0; - #endif - if (__Pyx_SetVtable(__pyx_type___pyx_array.tp_dict, __pyx_vtabptr_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error) - __pyx_array_type = &__pyx_type___pyx_array; - if (PyType_Ready(&__pyx_type___pyx_MemviewEnum) < 0) __PYX_ERR(1, 279, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_MemviewEnum.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_MemviewEnum.tp_dictoffset && __pyx_type___pyx_MemviewEnum.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_MemviewEnum.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_MemviewEnum) < 0) __PYX_ERR(1, 279, __pyx_L1_error) - __pyx_MemviewEnum_type = &__pyx_type___pyx_MemviewEnum; - __pyx_vtabptr_memoryview = &__pyx_vtable_memoryview; - __pyx_vtable_memoryview.get_item_pointer = (char *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_get_item_pointer; - __pyx_vtable_memoryview.is_slice = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_is_slice; - __pyx_vtable_memoryview.setitem_slice_assignment = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_slice_assignment; - 
__pyx_vtable_memoryview.setitem_slice_assign_scalar = (PyObject *(*)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_setitem_slice_assign_scalar; - __pyx_vtable_memoryview.setitem_indexed = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_indexed; - __pyx_vtable_memoryview.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryview_convert_item_to_object; - __pyx_vtable_memoryview.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryview_assign_item_from_object; - if (PyType_Ready(&__pyx_type___pyx_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_memoryview.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_memoryview.tp_dictoffset && __pyx_type___pyx_memoryview.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_memoryview.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_SetVtable(__pyx_type___pyx_memoryview.tp_dict, __pyx_vtabptr_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error) - __pyx_memoryview_type = &__pyx_type___pyx_memoryview; - __pyx_vtabptr__memoryviewslice = &__pyx_vtable__memoryviewslice; - __pyx_vtable__memoryviewslice.__pyx_base = *__pyx_vtabptr_memoryview; - __pyx_vtable__memoryviewslice.__pyx_base.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryviewslice_convert_item_to_object; - __pyx_vtable__memoryviewslice.__pyx_base.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryviewslice_assign_item_from_object; - __pyx_type___pyx_memoryviewslice.tp_base = __pyx_memoryview_type; - if (PyType_Ready(&__pyx_type___pyx_memoryviewslice) < 0) __PYX_ERR(1, 
965, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_memoryviewslice.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_memoryviewslice.tp_dictoffset && __pyx_type___pyx_memoryviewslice.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_memoryviewslice.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_SetVtable(__pyx_type___pyx_memoryviewslice.tp_dict, __pyx_vtabptr__memoryviewslice) < 0) __PYX_ERR(1, 965, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_memoryviewslice) < 0) __PYX_ERR(1, 965, __pyx_L1_error) - __pyx_memoryviewslice_type = &__pyx_type___pyx_memoryviewslice; - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#ifndef CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC initcore(void) CYTHON_SMALL_CODE; /*proto*/ 
-__Pyx_PyMODINIT_FUNC initcore(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_core(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_core(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? -1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) { - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { - result = PyDict_SetItemString(moddict, to_name, value); - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - 
Py_DECREF(modname); - if (unlikely(!module)) goto bad; - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_core(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - PyObject *__pyx_t_1 = NULL; - static PyThread_type_lock __pyx_t_2[8]; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'core' has already been imported. 
Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_core(void)", 0); - if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - #ifdef WITH_THREAD /* Python build with threading support? 
*/ - PyEval_InitThreads(); - #endif - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("core", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - #endif - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - /*--- Initialize various global constants etc. 
---*/ - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_monotonic_align__core) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name_2, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "monotonic_align.core")) { - if (unlikely(PyDict_SetItemString(modules, "monotonic_align.core", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - if (unlikely(__Pyx_modinit_type_init_code() < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - __pyx_k_ = (-1e9); - - /* "monotonic_align/core.pyx":1 - * cimport cython # <<<<<<<<<<<<<< - * from cython.parallel import prange - * - */ - __pyx_t_1 = __Pyx_PyDict_NewPresized(0); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":209 - * info.obj = self - * - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * def __dealloc__(array self): - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_array_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_array_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 209, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_array_type); - - /* "View.MemoryView":286 - * return self.name - * - * cdef generic = Enum("") # <<<<<<<<<<<<<< - * cdef strided = Enum("") # default - * cdef indirect = Enum("") - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__20, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 286, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(generic); - __Pyx_DECREF_SET(generic, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":287 - * - * cdef generic = Enum("") - * cdef strided = Enum("") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("") - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__21, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(strided); - __Pyx_DECREF_SET(strided, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":288 - * cdef generic = Enum("") - * cdef strided = Enum("") # default - * cdef indirect = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__22, NULL); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(1, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(indirect); - __Pyx_DECREF_SET(indirect, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":291 - * - * - * cdef contiguous = Enum("") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("") - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__23, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(contiguous); - __Pyx_DECREF_SET(contiguous, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":292 - * - * cdef contiguous = Enum("") - * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__24, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 292, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(indirect_contiguous); - __Pyx_DECREF_SET(indirect_contiguous, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":316 - * - * DEF THREAD_LOCKS_PREALLOCATED = 8 - * cdef int __pyx_memoryview_thread_locks_used = 0 # <<<<<<<<<<<<<< - * cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [ - * PyThread_allocate_lock(), - */ - __pyx_memoryview_thread_locks_used = 0; - - /* "View.MemoryView":317 - * DEF THREAD_LOCKS_PREALLOCATED = 8 - * cdef int __pyx_memoryview_thread_locks_used = 0 - * cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [ # <<<<<<<<<<<<<< - * PyThread_allocate_lock(), - * PyThread_allocate_lock(), - */ - __pyx_t_2[0] = PyThread_allocate_lock(); - __pyx_t_2[1] = PyThread_allocate_lock(); - __pyx_t_2[2] = PyThread_allocate_lock(); - __pyx_t_2[3] = PyThread_allocate_lock(); - __pyx_t_2[4] = PyThread_allocate_lock(); - __pyx_t_2[5] = PyThread_allocate_lock(); - __pyx_t_2[6] = PyThread_allocate_lock(); - __pyx_t_2[7] = 
PyThread_allocate_lock(); - memcpy(&(__pyx_memoryview_thread_locks[0]), __pyx_t_2, sizeof(__pyx_memoryview_thread_locks[0]) * (8)); - - /* "View.MemoryView":549 - * info.obj = self - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_memoryview_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 549, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_memoryview_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 549, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_memoryview_type); - - /* "View.MemoryView":995 - * return self.from_object - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_memoryview_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 995, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_memoryviewslice_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 995, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_memoryviewslice_type); - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum, NULL, __pyx_n_s_View_MemoryView); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_Enum, __pyx_t_1) < 0) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * 
return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - if (__pyx_m) { - if (__pyx_d) { - __Pyx_AddTraceback("init monotonic_align.core", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - Py_CLEAR(__pyx_m); - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init monotonic_align.core"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 0 : -1; - #elif PY_MAJOR_VERSION >= 3 - return __pyx_m; - #else - return; - #endif -} - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); - if (unlikely(!result)) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", 
PyString_AS_STRING(name)); -#endif - } - return result; -} - -/* MemviewSliceInit */ -static int -__Pyx_init_memviewslice(struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference) -{ - __Pyx_RefNannyDeclarations - int i, retval=-1; - Py_buffer *buf = &memview->view; - __Pyx_RefNannySetupContext("init_memviewslice", 0); - if (unlikely(memviewslice->memview || memviewslice->data)) { - PyErr_SetString(PyExc_ValueError, - "memviewslice is already initialized!"); - goto fail; - } - if (buf->strides) { - for (i = 0; i < ndim; i++) { - memviewslice->strides[i] = buf->strides[i]; - } - } else { - Py_ssize_t stride = buf->itemsize; - for (i = ndim - 1; i >= 0; i--) { - memviewslice->strides[i] = stride; - stride *= buf->shape[i]; - } - } - for (i = 0; i < ndim; i++) { - memviewslice->shape[i] = buf->shape[i]; - if (buf->suboffsets) { - memviewslice->suboffsets[i] = buf->suboffsets[i]; - } else { - memviewslice->suboffsets[i] = -1; - } - } - memviewslice->memview = memview; - memviewslice->data = (char *)buf->buf; - if (__pyx_add_acquisition_count(memview) == 0 && !memview_is_new_reference) { - Py_INCREF(memview); - } - retval = 0; - goto no_fail; -fail: - memviewslice->memview = 0; - memviewslice->data = 0; - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} -#ifndef Py_NO_RETURN -#define Py_NO_RETURN -#endif -static void __pyx_fatalerror(const char *fmt, ...) 
Py_NO_RETURN { - va_list vargs; - char msg[200]; -#ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, fmt); -#else - va_start(vargs); -#endif - vsnprintf(msg, 200, fmt, vargs); - va_end(vargs); - Py_FatalError(msg); -} -static CYTHON_INLINE int -__pyx_add_acquisition_count_locked(__pyx_atomic_int *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)++; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE int -__pyx_sub_acquisition_count_locked(__pyx_atomic_int *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)--; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE void -__Pyx_INC_MEMVIEW(__Pyx_memviewslice *memslice, int have_gil, int lineno) -{ - int first_time; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) - return; - if (unlikely(__pyx_get_slice_count(memview) < 0)) - __pyx_fatalerror("Acquisition count is %d (line %d)", - __pyx_get_slice_count(memview), lineno); - first_time = __pyx_add_acquisition_count(memview) == 0; - if (unlikely(first_time)) { - if (have_gil) { - Py_INCREF((PyObject *) memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_INCREF((PyObject *) memview); - PyGILState_Release(_gilstate); - } - } -} -static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *memslice, - int have_gil, int lineno) { - int last_time; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) { - memslice->memview = NULL; - return; - } - if (unlikely(__pyx_get_slice_count(memview) <= 0)) - __pyx_fatalerror("Acquisition count is %d (line %d)", - __pyx_get_slice_count(memview), lineno); - last_time = __pyx_sub_acquisition_count(memview) == 1; - memslice->data = NULL; - if (unlikely(last_time)) { - if (have_gil) { - 
Py_CLEAR(memslice->memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_CLEAR(memslice->memview); - PyGILState_Release(_gilstate); - } - } else { - memslice->memview = NULL; - } -} - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? "" : "s", num_found); -} - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - while (PyDict_Next(kwds, &pos, &key, &value)) { - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; - continue; - } - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - 
values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = (**name == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**name, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 
1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION < 3 - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - return -1; -} - -/* None */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname) { - PyErr_Format(PyExc_UnboundLocalError, "local variable '%s' referenced before assignment", varname); -} - -/* ArgTypeTest */ -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact) -{ - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - else if (exact) { - #if PY_MAJOR_VERSION == 2 - if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1; - #endif - } - else { - if (likely(__Pyx_TypeCheck(obj, type))) return 1; - } - PyErr_Format(PyExc_TypeError, - "Argument '%.200s' has incorrect type (expected %.200s, got %.200s)", - name, type->tp_name, Py_TYPE(obj)->tp_name); - return 0; -} - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = func->ob_type->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - 
return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyErrFetchRestore */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -} -#endif - -/* RaiseException */ -#if PY_MAJOR_VERSION < 3 -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, - CYTHON_UNUSED PyObject *cause) { - __Pyx_PyThreadState_declare - Py_XINCREF(type); - if (!value || value == Py_None) - value = NULL; - else - Py_INCREF(value); - if (!tb || tb == Py_None) - tb = NULL; - else { - Py_INCREF(tb); - if (!PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto raise_error; - } - } - if (PyType_Check(type)) { -#if CYTHON_COMPILING_IN_PYPY - if (!value) { - Py_INCREF(Py_None); - value = Py_None; - } -#endif - PyErr_NormalizeException(&type, &value, &tb); - } else { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto raise_error; - } - value = type; - type = (PyObject*) Py_TYPE(type); - Py_INCREF(type); - if (!PyType_IsSubtype((PyTypeObject *)type, 
(PyTypeObject *)PyExc_BaseException)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto raise_error; - } - } - __Pyx_PyThreadState_assign - __Pyx_ErrRestore(type, value, tb); - return; -raise_error: - Py_XDECREF(value); - Py_XDECREF(type); - Py_XDECREF(tb); - return; -} -#else -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - PyObject* owned_instance = NULL; - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - if (PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } else if (PyExceptionClass_Check(type)) { - PyObject *instance_class = NULL; - if (value && PyExceptionInstance_Check(value)) { - instance_class = (PyObject*) Py_TYPE(value); - if (instance_class != type) { - int is_subclass = PyObject_IsSubclass(instance_class, type); - if (!is_subclass) { - instance_class = NULL; - } else if (unlikely(is_subclass == -1)) { - goto bad; - } else { - type = instance_class; - } - } - } - if (!instance_class) { - PyObject *args; - if (!value) - args = PyTuple_New(0); - else if (PyTuple_Check(value)) { - Py_INCREF(value); - args = value; - } else - args = PyTuple_Pack(1, value); - if (!args) - goto bad; - owned_instance = PyObject_Call(type, args, NULL); - Py_DECREF(args); - if (!owned_instance) - goto bad; - value = owned_instance; - if (!PyExceptionInstance_Check(value)) { - PyErr_Format(PyExc_TypeError, - "calling %R should have returned an instance of " - "BaseException, not %R", - type, Py_TYPE(value)); - goto bad; - } - } - } else { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - if 
(cause) { - PyObject *fixed_cause; - if (cause == Py_None) { - fixed_cause = NULL; - } else if (PyExceptionClass_Check(cause)) { - fixed_cause = PyObject_CallObject(cause, NULL); - if (fixed_cause == NULL) - goto bad; - } else if (PyExceptionInstance_Check(cause)) { - fixed_cause = cause; - Py_INCREF(fixed_cause); - } else { - PyErr_SetString(PyExc_TypeError, - "exception causes must derive from " - "BaseException"); - goto bad; - } - PyException_SetCause(value, fixed_cause); - } - PyErr_SetObject(type, value); - if (tb) { -#if CYTHON_COMPILING_IN_PYPY - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#else - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* PyCFunctionFastCall */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) { - PyCFunctionObject *func = (PyCFunctionObject*)func_obj; - PyCFunction meth = PyCFunction_GET_FUNCTION(func); - PyObject *self = PyCFunction_GET_SELF(func); - int flags = PyCFunction_GET_FLAGS(func); - assert(PyCFunction_Check(func)); - assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))); - assert(nargs >= 0); - assert(nargs == 0 || args != NULL); - /* _PyCFunction_FastCallDict() must not be called with an exception set, - because it may clear it (directly or indirectly) and so the - caller loses its exception */ - assert(!PyErr_Occurred()); - if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) { - return (*((__Pyx_PyCFunctionFastWithKeywords)(void*)meth)) (self, args, nargs, NULL); - } else { - return 
(*((__Pyx_PyCFunctionFast)(void*)meth)) (self, args, nargs); - } -} -#endif - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. - */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? 
PyDict_Size(kwargs) : 0; - if (Py_EnterRecursiveCall((char*)" while calling a Python object")) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif -#endif - -/* PyObjectCall2Args */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) { - PyObject *args, *result = NULL; - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(function)) 
{ - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyFunction_FastCall(function, args, 2); - } - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyCFunction_FastCall(function, args, 2); - } - #endif - args = PyTuple_New(2); - if (unlikely(!args)) goto done; - Py_INCREF(arg1); - PyTuple_SET_ITEM(args, 0, arg1); - Py_INCREF(arg2); - PyTuple_SET_ITEM(args, 1, arg2); - Py_INCREF(function); - result = __Pyx_PyObject_Call(function, args, NULL); - Py_DECREF(args); - Py_DECREF(function); -done: - return result; -} - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = PyCFunction_GET_FUNCTION(func); - self = PyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallOneArg */ -#if CYTHON_COMPILING_IN_CPYTHON -static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_New(1); - if (unlikely(!args)) return NULL; - Py_INCREF(arg); - PyTuple_SET_ITEM(args, 0, arg); - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); - return result; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { -#if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCall(func, &arg, 1); - } -#endif - if (likely(PyCFunction_Check(func))) { - if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { - return __Pyx_PyObject_CallMethO(func, arg); -#if CYTHON_FAST_PYCCALL - } else if (PyCFunction_GET_FLAGS(func) & 
METH_FASTCALL) { - return __Pyx_PyCFunction_FastCall(func, &arg, 1); -#endif - } - } - return __Pyx__PyObject_CallOneArg(func, arg); -} -#else -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_Pack(1, arg); - if (unlikely(!args)) return NULL; - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); - return result; -} -#endif - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if (hash1 != hash2 && hash1 != -1 && 
hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* None */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t a, Py_ssize_t b) { - Py_ssize_t q = a / b; - Py_ssize_t r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* GetAttr */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) { -#if CYTHON_USE_TYPE_SLOTS -#if PY_MAJOR_VERSION >= 3 - if (likely(PyUnicode_Check(n))) -#else - if (likely(PyString_Check(n))) -#endif - return __Pyx_PyObject_GetAttrStr(o, n); -#endif - return PyObject_GetAttr(o, n); -} - -/* GetItemInt */ -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (!j) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int 
wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyList_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyTuple_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); - if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { - PyObject *r = PyList_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } - else if (PyTuple_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? 
i : i + PyTuple_GET_SIZE(o); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } else { - PySequenceMethods *m = Py_TYPE(o)->tp_as_sequence; - if (likely(m && m->sq_item)) { - if (wraparound && unlikely(i < 0) && likely(m->sq_length)) { - Py_ssize_t l = m->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return NULL; - PyErr_Clear(); - } - } - return m->sq_item(o, i); - } - } -#else - if (is_list || PySequence_Check(o)) { - return PySequence_GetItem(o, i); - } -#endif - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -} - -/* ObjectGetItem */ -#if CYTHON_USE_TYPE_SLOTS -static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject* index) { - PyObject *runerr; - Py_ssize_t key_value; - PySequenceMethods *m = Py_TYPE(obj)->tp_as_sequence; - if (unlikely(!(m && m->sq_item))) { - PyErr_Format(PyExc_TypeError, "'%.200s' object is not subscriptable", Py_TYPE(obj)->tp_name); - return NULL; - } - key_value = __Pyx_PyIndex_AsSsize_t(index); - if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) { - return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1); - } - if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) { - PyErr_Clear(); - PyErr_Format(PyExc_IndexError, "cannot fit '%.200s' into an index-sized integer", Py_TYPE(index)->tp_name); - } - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key) { - PyMappingMethods *m = Py_TYPE(obj)->tp_as_mapping; - if (likely(m && m->mp_subscript)) { - return m->mp_subscript(obj, key); - } - return __Pyx_PyObject_GetIndex(obj, key); -} -#endif - -/* decode_c_string */ -static CYTHON_INLINE PyObject* __Pyx_decode_c_string( - const char* cstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) { - 
Py_ssize_t length; - if (unlikely((start < 0) | (stop < 0))) { - size_t slen = strlen(cstring); - if (unlikely(slen > (size_t) PY_SSIZE_T_MAX)) { - PyErr_SetString(PyExc_OverflowError, - "c-string too long to convert to Python"); - return NULL; - } - length = (Py_ssize_t) slen; - if (start < 0) { - start += length; - if (start < 0) - start = 0; - } - if (stop < 0) - stop += length; - } - if (unlikely(stop <= start)) - return __Pyx_NewRef(__pyx_empty_unicode); - length = stop - start; - cstring += start; - if (decode_func) { - return decode_func(cstring, length, errors); - } else { - return PyUnicode_Decode(cstring, length, encoding, errors); - } -} - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; icurexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; - if (unlikely(PyTuple_Check(err))) - return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - return __Pyx_PyErr_GivenExceptionMatches(exc_type, err); -} -#endif - -/* GetAttr3 */ -static PyObject *__Pyx_GetAttr3Default(PyObject *d) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (unlikely(!__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - return NULL; - __Pyx_PyErr_Clear(); - Py_INCREF(d); - return d; -} -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, PyObject *d) { - PyObject *r = __Pyx_GetAttr(o, n); - return (likely(r)) ? r : __Pyx_GetAttr3Default(d); -} - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? 
__PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* RaiseTooManyValuesToUnpack */ -static 
CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? "" : "s"); -} - -/* RaiseNoneIterError */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); -} - -/* ExtTypeTest */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - if (likely(__Pyx_TypeCheck(obj, type))) - return 1; - PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s", - Py_TYPE(obj)->tp_name, type->tp_name); - return 0; -} - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - #endif - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, 
PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -#endif - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - PyObject *local_type, *local_value, *local_tb; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - 
exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* SwapException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = *type; - exc_info->exc_value = *value; - exc_info->exc_traceback = *tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = *type; - tstate->exc_value = *value; - tstate->exc_traceback = *tb; - #endif - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); - PyErr_SetExcInfo(*type, *value, *tb); - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#endif - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *empty_list = 0; - PyObject *module = 0; - PyObject *global_dict = 0; - PyObject *empty_dict = 0; - PyObject *list; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = 
__Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); - if (!py_import) - goto bad; - #endif - if (from_list) - list = from_list; - else { - empty_list = PyList_New(0); - if (!empty_list) - goto bad; - list = empty_list; - } - global_dict = PyModule_GetDict(__pyx_m); - if (!global_dict) - goto bad; - empty_dict = PyDict_New(); - if (!empty_dict) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) { - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, 1); - if (!module) { - if (!PyErr_ExceptionMatches(PyExc_ImportError)) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (!py_level) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, global_dict, empty_dict, list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, level); - #endif - } - } -bad: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - Py_XDECREF(empty_list); - Py_XDECREF(empty_dict); - return module; -} - -/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = a->tp_base; - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -#if PY_MAJOR_VERSION == 2 -static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { - PyObject *exception, *value, *tb; - int res; - __Pyx_PyThreadState_declare - 
__Pyx_PyThreadState_assign - __Pyx_ErrFetch(&exception, &value, &tb); - res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0; - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - if (!res) { - res = PyObject_IsSubclass(err, exc_type2); - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - } - __Pyx_ErrRestore(exception, value, tb); - return res; -} -#else -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0; - if (!res) { - res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } - return res; -} -#endif -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } -#endif - for (i=0; i<n; i++) { - PyObject *t = PyTuple_GET_ITEM(tuple, i); - #if PY_MAJOR_VERSION < 3 - if (likely(exc_type == t)) return 1; - #endif - if (likely(PyExceptionClass_Check(t))) { - if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1; - } else { - } - } - return 0; -} -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) { - if (likely(err == exc_type)) return 1; - if (likely(PyExceptionClass_Check(err))) { - if (likely(PyExceptionClass_Check(exc_type))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type); - } else if (likely(PyTuple_Check(exc_type))) { - return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type); - } else { - } - } - return PyObject_IsSubclass(err, exc_type); -} -static int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) { - assert(PyExceptionClass_Check(exc_type1)); - assert(PyExceptionClass_Check(exc_type2)); - if (likely(err == exc_type1 || err == exc_type2)) return 1; - if (likely(PyExceptionClass_Check(err))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2); - } - return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2)); -} -#endif - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, int inplace, int zerodivision_check) { - (void)inplace; - (void)zerodivision_check; - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long x; - long a = PyInt_AS_LONG(op1); - x = (long)((unsigned long)a + b); - if (likely((x^a) >= 0 || (x^b) >= 0)) - return PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - const digit* digits = ((PyLongObject*)op1)->ob_digit; - const Py_ssize_t size = Py_SIZE(op1); - if (likely(__Pyx_sst_abs(size) <= 1)) { - a = likely(size) ? 
digits[0] : 0; - if (size == -1) a = -a; - } else { - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = 
-(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - } - x = a + b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla + llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; - double a = PyFloat_AS_DOUBLE(op1); - double result; - PyFPE_START_PROTECT("add", return NULL) - result = ((double)a) + (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? 
PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2); -} -#endif - -/* None */ -static CYTHON_INLINE long __Pyx_div_long(long a, long b) { - long q = a / b; - long r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* ImportFrom */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* HasAttr */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *o, PyObject *n) { - PyObject *r; - if (unlikely(!__Pyx_PyBaseString_Check(n))) { - PyErr_SetString(PyExc_TypeError, - "hasattr(): attribute name must be string"); - return -1; - } - r = __Pyx_GetAttr(o, n); - if (unlikely(!r)) { - PyErr_Clear(); - return 0; - } else { - Py_DECREF(r); - return 1; - } -} - -/* PyObject_GenericGetAttrNoDict */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) { - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'%.50s' object has no attribute '%U'", - tp->tp_name, attr_name); -#else - "'%.50s' object has no attribute '%.400s'", - tp->tp_name, PyString_AS_STRING(attr_name)); -#endif - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) { - PyObject *descr; - PyTypeObject *tp = Py_TYPE(obj); - if (unlikely(!PyString_Check(attr_name))) { - return PyObject_GenericGetAttr(obj, attr_name); - } - assert(!tp->tp_dictoffset); - descr = _PyType_Lookup(tp, attr_name); - if (unlikely(!descr)) { - return __Pyx_RaiseGenericGetAttributeError(tp, attr_name); - } - Py_INCREF(descr); - #if PY_MAJOR_VERSION < 3 - if 
(likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS))) - #endif - { - descrgetfunc f = Py_TYPE(descr)->tp_descr_get; - if (unlikely(f)) { - PyObject *res = f(descr, obj, (PyObject *)tp); - Py_DECREF(descr); - return res; - } - } - return descr; -} -#endif - -/* PyObject_GenericGetAttr */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name) { - if (unlikely(Py_TYPE(obj)->tp_dictoffset)) { - return PyObject_GenericGetAttr(obj, attr_name); - } - return __Pyx_PyObject_GenericGetAttrNoDict(obj, attr_name); -} -#endif - -/* SetVTable */ -static int __Pyx_SetVtable(PyObject *dict, void *vtable) { -#if PY_VERSION_HEX >= 0x02070000 - PyObject *ob = PyCapsule_New(vtable, 0, 0); -#else - PyObject *ob = PyCObject_FromVoidPtr(vtable, 0); -#endif - if (!ob) - goto bad; - if (PyDict_SetItem(dict, __pyx_n_s_pyx_vtable, ob) < 0) - goto bad; - Py_DECREF(ob); - return 0; -bad: - Py_XDECREF(ob); - return -1; -} - -/* PyObjectGetAttrStrNoError */ -static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (likely(__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - __Pyx_PyErr_Clear(); -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name) { - PyObject *result; -#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_TYPE_SLOTS && PY_VERSION_HEX >= 0x030700B1 - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro == PyObject_GenericGetAttr)) { - return _PyObject_GenericGetAttrWithDict(obj, attr_name, NULL, 1); - } -#endif - result = __Pyx_PyObject_GetAttrStr(obj, attr_name); - if (unlikely(!result)) { - __Pyx_PyObject_GetAttrStr_ClearAttributeError(); - } - return result; -} - -/* SetupReduce */ -static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) { - int ret; - PyObject *name_attr; - name_attr = 
__Pyx_PyObject_GetAttrStr(meth, __pyx_n_s_name_2); - if (likely(name_attr)) { - ret = PyObject_RichCompareBool(name_attr, name, Py_EQ); - } else { - ret = -1; - } - if (unlikely(ret < 0)) { - PyErr_Clear(); - ret = 0; - } - Py_XDECREF(name_attr); - return ret; -} -static int __Pyx_setup_reduce(PyObject* type_obj) { - int ret = 0; - PyObject *object_reduce = NULL; - PyObject *object_reduce_ex = NULL; - PyObject *reduce = NULL; - PyObject *reduce_ex = NULL; - PyObject *reduce_cython = NULL; - PyObject *setstate = NULL; - PyObject *setstate_cython = NULL; -#if CYTHON_USE_PYTYPE_LOOKUP - if (_PyType_Lookup((PyTypeObject*)type_obj, __pyx_n_s_getstate)) goto __PYX_GOOD; -#else - if (PyObject_HasAttr(type_obj, __pyx_n_s_getstate)) goto __PYX_GOOD; -#endif -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#else - object_reduce_ex = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#endif - reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_ex); if (unlikely(!reduce_ex)) goto __PYX_BAD; - if (reduce_ex == object_reduce_ex) { -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#else - object_reduce = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#endif - reduce = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce); if (unlikely(!reduce)) goto __PYX_BAD; - if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, __pyx_n_s_reduce_cython)) { - reduce_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_reduce_cython); - if (likely(reduce_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce, reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = 
PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (reduce == object_reduce || PyErr_Occurred()) { - goto __PYX_BAD; - } - setstate = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate); - if (!setstate) PyErr_Clear(); - if (!setstate || __Pyx_setup_reduce_is_named(setstate, __pyx_n_s_setstate_cython)) { - setstate_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_setstate_cython); - if (likely(setstate_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate, setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (!setstate || PyErr_Occurred()) { - goto __PYX_BAD; - } - } - PyType_Modified((PyTypeObject*)type_obj); - } - } - goto __PYX_GOOD; -__PYX_BAD: - if (!PyErr_Occurred()) - PyErr_Format(PyExc_RuntimeError, "Unable to initialize pickling for %s", ((PyTypeObject*)type_obj)->tp_name); - ret = -1; -__PYX_GOOD: -#if !CYTHON_USE_PYTYPE_LOOKUP - Py_XDECREF(object_reduce); - Py_XDECREF(object_reduce_ex); -#endif - Py_XDECREF(reduce); - Py_XDECREF(reduce_ex); - Py_XDECREF(reduce_cython); - Py_XDECREF(setstate); - Py_XDECREF(setstate_cython); - return ret; -} - -/* CLineInTraceback */ -#ifndef CYTHON_CLINE_IN_TRACEBACK -static int __Pyx_CLineForTraceback(CYTHON_NCP_UNUSED PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - if (unlikely(!__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - 
__Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); - if (use_cline_obj) { - use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True; - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - } - if 
(unlikely(!entries)) { - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - __pyx_code_cache.count = 1; - entries[0].code_line = code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyObject *py_srcfile = 0; - PyObject *py_funcname = 0; - #if PY_MAJOR_VERSION < 3 - py_srcfile = PyString_FromString(filename); - #else - py_srcfile = PyUnicode_FromString(filename); - #endif - if (!py_srcfile) goto bad; - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - #endif - } - 
else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - #else - py_funcname = PyUnicode_FromString(funcname); - #endif - } - if (!py_funcname) goto bad; - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); - Py_DECREF(py_funcname); - return py_code; -bad: - Py_XDECREF(py_srcfile); - Py_XDECREF(py_funcname); - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? -c_line : py_line); - if (!py_code) { - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) goto bad; - __pyx_insert_code_object(c_line ? 
-c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} - -#if PY_MAJOR_VERSION < 3 -static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) { - if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_array_type)) return __pyx_array_getbuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_memoryview_type)) return __pyx_memoryview_getbuffer(obj, view, flags); - PyErr_Format(PyExc_TypeError, "'%.200s' does not have the buffer interface", Py_TYPE(obj)->tp_name); - return -1; -} -static void __Pyx_ReleaseBuffer(Py_buffer *view) { - PyObject *obj = view->obj; - if (!obj) return; - if (PyObject_CheckBuffer(obj)) { - PyBuffer_Release(view); - return; - } - if ((0)) {} - view->obj = NULL; - Py_DECREF(obj); -} -#endif - - -/* MemviewSliceIsContig */ -static int -__pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim) -{ - int i, index, step, start; - Py_ssize_t itemsize = mvs.memview->view.itemsize; - if (order == 'F') { - step = 1; - start = 0; - } else { - step = -1; - start = ndim - 1; - } - for (i = 0; i < ndim; i++) { - index = start + step * i; - if (mvs.suboffsets[index] >= 0 || mvs.strides[index] != itemsize) - return 0; - itemsize *= mvs.shape[index]; - } - return 1; -} - -/* OverlappingSlices */ -static void -__pyx_get_array_memory_extents(__Pyx_memviewslice *slice, - void **out_start, void **out_end, - int ndim, size_t itemsize) -{ - char *start, *end; - int i; - start = end = slice->data; - for (i = 0; i < ndim; i++) { - Py_ssize_t stride = slice->strides[i]; - Py_ssize_t extent = slice->shape[i]; - if (extent == 0) { - *out_start = *out_end = start; - return; - } else { 
- if (stride > 0) - end += stride * (extent - 1); - else - start += stride * (extent - 1); - } - } - *out_start = start; - *out_end = end + itemsize; -} -static int -__pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize) -{ - void *start1, *end1, *start2, *end2; - __pyx_get_array_memory_extents(slice1, &start1, &end1, ndim, itemsize); - __pyx_get_array_memory_extents(slice2, &start2, &end2, ndim, itemsize); - return (start1 < end2) && (start2 < end1); -} - -/* Capsule */ -static CYTHON_INLINE PyObject * -__pyx_capsule_create(void *p, CYTHON_UNUSED const char *sig) -{ - PyObject *cobj; -#if PY_VERSION_HEX >= 0x02070000 - cobj = PyCapsule_New(p, sig, NULL); -#else - cobj = PyCObject_FromVoidPtr(p, NULL); -#endif - return cobj; -} - -/* IsLittleEndian */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void) -{ - union { - uint32_t u32; - uint8_t u8[4]; - } S; - S.u32 = 0x01020304; - return S.u8[0] == 4; -} - -/* BufferFormatCheck */ -static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type) { - stack[0].field = &ctx->root; - stack[0].parent_offset = 0; - ctx->root.type = type; - ctx->root.name = "buffer dtype"; - ctx->root.offset = 0; - ctx->head = stack; - ctx->head->field = &ctx->root; - ctx->fmt_offset = 0; - ctx->head->parent_offset = 0; - ctx->new_packmode = '@'; - ctx->enc_packmode = '@'; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->is_complex = 0; - ctx->is_valid_array = 0; - ctx->struct_alignment = 0; - while (type->typegroup == 'S') { - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = 0; - type = type->fields->type; - } -} -static int __Pyx_BufFmt_ParseNumber(const char** ts) { - int count; - const char* t = *ts; - if (*t < '0' || *t > '9') { - return -1; - } else { - count = *t++ - '0'; - while (*t >= '0' && *t <= '9') { - count *= 10; - count += *t++ - '0'; - } - } - *ts = t; - return count; -} 
-static int __Pyx_BufFmt_ExpectNumber(const char **ts) { - int number = __Pyx_BufFmt_ParseNumber(ts); - if (number == -1) - PyErr_Format(PyExc_ValueError,\ - "Does not understand character buffer dtype format string ('%c')", **ts); - return number; -} -static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) { - PyErr_Format(PyExc_ValueError, - "Unexpected format string character: '%c'", ch); -} -static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) { - switch (ch) { - case '?': return "'bool'"; - case 'c': return "'char'"; - case 'b': return "'signed char'"; - case 'B': return "'unsigned char'"; - case 'h': return "'short'"; - case 'H': return "'unsigned short'"; - case 'i': return "'int'"; - case 'I': return "'unsigned int'"; - case 'l': return "'long'"; - case 'L': return "'unsigned long'"; - case 'q': return "'long long'"; - case 'Q': return "'unsigned long long'"; - case 'f': return (is_complex ? "'complex float'" : "'float'"); - case 'd': return (is_complex ? "'complex double'" : "'double'"); - case 'g': return (is_complex ? "'complex long double'" : "'long double'"); - case 'T': return "a struct"; - case 'O': return "Python object"; - case 'P': return "a pointer"; - case 's': case 'p': return "a string"; - case 0: return "end"; - default: return "unparseable format string"; - } -} -static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return 2; - case 'i': case 'I': case 'l': case 'L': return 4; - case 'q': case 'Q': return 8; - case 'f': return (is_complex ? 8 : 4); - case 'd': return (is_complex ? 
16 : 8); - case 'g': { - PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g').."); - return 0; - } - case 'O': case 'P': return sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(short); - case 'i': case 'I': return sizeof(int); - case 'l': case 'L': return sizeof(long); - #ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(PY_LONG_LONG); - #endif - case 'f': return sizeof(float) * (is_complex ? 2 : 1); - case 'd': return sizeof(double) * (is_complex ? 2 : 1); - case 'g': return sizeof(long double) * (is_complex ? 2 : 1); - case 'O': case 'P': return sizeof(void*); - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -typedef struct { char c; short x; } __Pyx_st_short; -typedef struct { char c; int x; } __Pyx_st_int; -typedef struct { char c; long x; } __Pyx_st_long; -typedef struct { char c; float x; } __Pyx_st_float; -typedef struct { char c; double x; } __Pyx_st_double; -typedef struct { char c; long double x; } __Pyx_st_longdouble; -typedef struct { char c; void *x; } __Pyx_st_void_p; -#ifdef HAVE_LONG_LONG -typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong; -#endif -static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short); - case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int); - case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long); -#ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG); -#endif - case 'f': return sizeof(__Pyx_st_float) - sizeof(float); - case 'd': return 
sizeof(__Pyx_st_double) - sizeof(double); - case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double); - case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -/* These are for computing the padding at the end of the struct to align - on the first member of the struct. This will probably the same as above, - but we don't have any guarantees. - */ -typedef struct { short x; char c; } __Pyx_pad_short; -typedef struct { int x; char c; } __Pyx_pad_int; -typedef struct { long x; char c; } __Pyx_pad_long; -typedef struct { float x; char c; } __Pyx_pad_float; -typedef struct { double x; char c; } __Pyx_pad_double; -typedef struct { long double x; char c; } __Pyx_pad_longdouble; -typedef struct { void *x; char c; } __Pyx_pad_void_p; -#ifdef HAVE_LONG_LONG -typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong; -#endif -static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short); - case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int); - case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long); -#ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG); -#endif - case 'f': return sizeof(__Pyx_pad_float) - sizeof(float); - case 'd': return sizeof(__Pyx_pad_double) - sizeof(double); - case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double); - case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) { - switch (ch) { - case 'c': - return 'H'; - case 'b': case 'h': case 'i': - case 'l': case 'q': case 's': case 'p': - return 'I'; - case '?': case 'B': case 'H': case 'I': case 'L': 
case 'Q': - return 'U'; - case 'f': case 'd': case 'g': - return (is_complex ? 'C' : 'R'); - case 'O': - return 'O'; - case 'P': - return 'P'; - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) { - if (ctx->head == NULL || ctx->head->field == &ctx->root) { - const char* expected; - const char* quote; - if (ctx->head == NULL) { - expected = "end"; - quote = ""; - } else { - expected = ctx->head->field->type->name; - quote = "'"; - } - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected %s%s%s but got %s", - quote, expected, quote, - __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex)); - } else { - __Pyx_StructField* field = ctx->head->field; - __Pyx_StructField* parent = (ctx->head - 1)->field; - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'", - field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex), - parent->type->name, field->name); - } -} -static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) { - char group; - size_t size, offset, arraysize = 1; - if (ctx->enc_type == 0) return 0; - if (ctx->head->field->type->arraysize[0]) { - int i, ndim = 0; - if (ctx->enc_type == 's' || ctx->enc_type == 'p') { - ctx->is_valid_array = ctx->head->field->type->ndim == 1; - ndim = 1; - if (ctx->enc_count != ctx->head->field->type->arraysize[0]) { - PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %zu", - ctx->head->field->type->arraysize[0], ctx->enc_count); - return -1; - } - } - if (!ctx->is_valid_array) { - PyErr_Format(PyExc_ValueError, "Expected %d dimensions, got %d", - ctx->head->field->type->ndim, ndim); - return -1; - } - for (i = 0; i < ctx->head->field->type->ndim; i++) { - arraysize *= ctx->head->field->type->arraysize[i]; - } - ctx->is_valid_array = 0; - ctx->enc_count = 1; - } - group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, 
ctx->is_complex); - do { - __Pyx_StructField* field = ctx->head->field; - __Pyx_TypeInfo* type = field->type; - if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') { - size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex); - } else { - size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex); - } - if (ctx->enc_packmode == '@') { - size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex); - size_t align_mod_offset; - if (align_at == 0) return -1; - align_mod_offset = ctx->fmt_offset % align_at; - if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset; - if (ctx->struct_alignment == 0) - ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type, - ctx->is_complex); - } - if (type->size != size || type->typegroup != group) { - if (type->typegroup == 'C' && type->fields != NULL) { - size_t parent_offset = ctx->head->parent_offset + field->offset; - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = parent_offset; - continue; - } - if ((type->typegroup == 'H' || group == 'H') && type->size == size) { - } else { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - } - offset = ctx->head->parent_offset + field->offset; - if (ctx->fmt_offset != offset) { - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch; next field is at offset %" CYTHON_FORMAT_SSIZE_T "d but %" CYTHON_FORMAT_SSIZE_T "d expected", - (Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset); - return -1; - } - ctx->fmt_offset += size; - if (arraysize) - ctx->fmt_offset += (arraysize - 1) * size; - --ctx->enc_count; - while (1) { - if (field == &ctx->root) { - ctx->head = NULL; - if (ctx->enc_count != 0) { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - break; - } - ctx->head->field = ++field; - if (field->type == NULL) { - --ctx->head; - field = ctx->head->field; - continue; - } else if (field->type->typegroup == 'S') { - size_t parent_offset = ctx->head->parent_offset + 
field->offset; - if (field->type->fields->type == NULL) continue; - field = field->type->fields; - ++ctx->head; - ctx->head->field = field; - ctx->head->parent_offset = parent_offset; - break; - } else { - break; - } - } - } while (ctx->enc_count); - ctx->enc_type = 0; - ctx->is_complex = 0; - return 0; -} -static PyObject * -__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp) -{ - const char *ts = *tsp; - int i = 0, number, ndim; - ++ts; - if (ctx->new_count != 1) { - PyErr_SetString(PyExc_ValueError, - "Cannot handle repeated arrays in format string"); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ndim = ctx->head->field->type->ndim; - while (*ts && *ts != ')') { - switch (*ts) { - case ' ': case '\f': case '\r': case '\n': case '\t': case '\v': continue; - default: break; - } - number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i]) - return PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %d", - ctx->head->field->type->arraysize[i], number); - if (*ts != ',' && *ts != ')') - return PyErr_Format(PyExc_ValueError, - "Expected a comma in format string, got '%c'", *ts); - if (*ts == ',') ts++; - i++; - } - if (i != ndim) - return PyErr_Format(PyExc_ValueError, "Expected %d dimension(s), got %d", - ctx->head->field->type->ndim, i); - if (!*ts) { - PyErr_SetString(PyExc_ValueError, - "Unexpected end of format string, expected ')'"); - return NULL; - } - ctx->is_valid_array = 1; - ctx->new_count = 1; - *tsp = ++ts; - return Py_None; -} -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) { - int got_Z = 0; - while (1) { - switch(*ts) { - case 0: - if (ctx->enc_type != 0 && ctx->head == NULL) { - __Pyx_BufFmt_RaiseExpected(ctx); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - if (ctx->head != NULL) { - __Pyx_BufFmt_RaiseExpected(ctx); 
- return NULL; - } - return ts; - case ' ': - case '\r': - case '\n': - ++ts; - break; - case '<': - if (!__Pyx_Is_Little_Endian()) { - PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler"); - return NULL; - } - ctx->new_packmode = '='; - ++ts; - break; - case '>': - case '!': - if (__Pyx_Is_Little_Endian()) { - PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler"); - return NULL; - } - ctx->new_packmode = '='; - ++ts; - break; - case '=': - case '@': - case '^': - ctx->new_packmode = *ts++; - break; - case 'T': - { - const char* ts_after_sub; - size_t i, struct_count = ctx->new_count; - size_t struct_alignment = ctx->struct_alignment; - ctx->new_count = 1; - ++ts; - if (*ts != '{') { - PyErr_SetString(PyExc_ValueError, "Buffer acquisition: Expected '{' after 'T'"); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - ctx->enc_count = 0; - ctx->struct_alignment = 0; - ++ts; - ts_after_sub = ts; - for (i = 0; i != struct_count; ++i) { - ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts); - if (!ts_after_sub) return NULL; - } - ts = ts_after_sub; - if (struct_alignment) ctx->struct_alignment = struct_alignment; - } - break; - case '}': - { - size_t alignment = ctx->struct_alignment; - ++ts; - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - if (alignment && ctx->fmt_offset % alignment) { - ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment); - } - } - return ts; - case 'x': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->fmt_offset += ctx->new_count; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->enc_packmode = ctx->new_packmode; - ++ts; - break; - case 'Z': - got_Z = 1; - ++ts; - if (*ts != 'f' && *ts != 'd' && *ts != 'g') { - __Pyx_BufFmt_RaiseUnexpectedChar('Z'); - return NULL; - } - CYTHON_FALLTHROUGH; - case '?': case 'c': case 'b': case 'B': case 
'h': case 'H': case 'i': case 'I': - case 'l': case 'L': case 'q': case 'Q': - case 'f': case 'd': case 'g': - case 'O': case 'p': - if ((ctx->enc_type == *ts) && (got_Z == ctx->is_complex) && - (ctx->enc_packmode == ctx->new_packmode) && (!ctx->is_valid_array)) { - ctx->enc_count += ctx->new_count; - ctx->new_count = 1; - got_Z = 0; - ++ts; - break; - } - CYTHON_FALLTHROUGH; - case 's': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_count = ctx->new_count; - ctx->enc_packmode = ctx->new_packmode; - ctx->enc_type = *ts; - ctx->is_complex = got_Z; - ++ts; - ctx->new_count = 1; - got_Z = 0; - break; - case ':': - ++ts; - while(*ts != ':') ++ts; - ++ts; - break; - case '(': - if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL; - break; - default: - { - int number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - ctx->new_count = (size_t)number; - } - } - } -} - -/* TypeInfoCompare */ - static int -__pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b) -{ - int i; - if (!a || !b) - return 0; - if (a == b) - return 1; - if (a->size != b->size || a->typegroup != b->typegroup || - a->is_unsigned != b->is_unsigned || a->ndim != b->ndim) { - if (a->typegroup == 'H' || b->typegroup == 'H') { - return a->size == b->size; - } else { - return 0; - } - } - if (a->ndim) { - for (i = 0; i < a->ndim; i++) - if (a->arraysize[i] != b->arraysize[i]) - return 0; - } - if (a->typegroup == 'S') { - if (a->flags != b->flags) - return 0; - if (a->fields || b->fields) { - if (!(a->fields && b->fields)) - return 0; - for (i = 0; a->fields[i].type && b->fields[i].type; i++) { - __Pyx_StructField *field_a = a->fields + i; - __Pyx_StructField *field_b = b->fields + i; - if (field_a->offset != field_b->offset || - !__pyx_typeinfo_cmp(field_a->type, field_b->type)) - return 0; - } - return !a->fields[i].type && !b->fields[i].type; - } - } - return 1; -} - -/* MemviewSliceValidateAndInit */ - static int -__pyx_check_strides(Py_buffer *buf, int dim, 
int ndim, int spec) -{ - if (buf->shape[dim] <= 1) - return 1; - if (buf->strides) { - if (spec & __Pyx_MEMVIEW_CONTIG) { - if (spec & (__Pyx_MEMVIEW_PTR|__Pyx_MEMVIEW_FULL)) { - if (unlikely(buf->strides[dim] != sizeof(void *))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly contiguous " - "in dimension %d.", dim); - goto fail; - } - } else if (unlikely(buf->strides[dim] != buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_FOLLOW) { - Py_ssize_t stride = buf->strides[dim]; - if (stride < 0) - stride = -stride; - if (unlikely(stride < buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - } else { - if (unlikely(spec & __Pyx_MEMVIEW_CONTIG && dim != ndim - 1)) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not contiguous in " - "dimension %d", dim); - goto fail; - } else if (unlikely(spec & (__Pyx_MEMVIEW_PTR))) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not indirect in " - "dimension %d", dim); - goto fail; - } else if (unlikely(buf->suboffsets)) { - PyErr_SetString(PyExc_ValueError, - "Buffer exposes suboffsets but no strides"); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_check_suboffsets(Py_buffer *buf, int dim, CYTHON_UNUSED int ndim, int spec) -{ - if (spec & __Pyx_MEMVIEW_DIRECT) { - if (unlikely(buf->suboffsets && buf->suboffsets[dim] >= 0)) { - PyErr_Format(PyExc_ValueError, - "Buffer not compatible with direct access " - "in dimension %d.", dim); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_PTR) { - if (unlikely(!buf->suboffsets || (buf->suboffsets[dim] < 0))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly accessible " - "in dimension %d.", dim); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int 
-__pyx_verify_contig(Py_buffer *buf, int ndim, int c_or_f_flag) -{ - int i; - if (c_or_f_flag & __Pyx_IS_F_CONTIG) { - Py_ssize_t stride = 1; - for (i = 0; i < ndim; i++) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not fortran contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } else if (c_or_f_flag & __Pyx_IS_C_CONTIG) { - Py_ssize_t stride = 1; - for (i = ndim - 1; i >- 1; i--) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not C contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } - return 1; -fail: - return 0; -} -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj) -{ - struct __pyx_memoryview_obj *memview, *new_memview; - __Pyx_RefNannyDeclarations - Py_buffer *buf; - int i, spec = 0, retval = -1; - __Pyx_BufFmt_Context ctx; - int from_memoryview = __pyx_memoryview_check(original_obj); - __Pyx_RefNannySetupContext("ValidateAndInit_memviewslice", 0); - if (from_memoryview && __pyx_typeinfo_cmp(dtype, ((struct __pyx_memoryview_obj *) - original_obj)->typeinfo)) { - memview = (struct __pyx_memoryview_obj *) original_obj; - new_memview = NULL; - } else { - memview = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - original_obj, buf_flags, 0, dtype); - new_memview = memview; - if (unlikely(!memview)) - goto fail; - } - buf = &memview->view; - if (unlikely(buf->ndim != ndim)) { - PyErr_Format(PyExc_ValueError, - "Buffer has wrong number of dimensions (expected %d, got %d)", - ndim, buf->ndim); - goto fail; - } - if (new_memview) { - __Pyx_BufFmt_Init(&ctx, stack, dtype); - if (unlikely(!__Pyx_BufFmt_CheckString(&ctx, buf->format))) goto fail; - } - if 
(unlikely((unsigned) buf->itemsize != dtype->size)) { - PyErr_Format(PyExc_ValueError, - "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "u byte%s) " - "does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "u byte%s)", - buf->itemsize, - (buf->itemsize > 1) ? "s" : "", - dtype->name, - dtype->size, - (dtype->size > 1) ? "s" : ""); - goto fail; - } - if (buf->len > 0) { - for (i = 0; i < ndim; i++) { - spec = axes_specs[i]; - if (unlikely(!__pyx_check_strides(buf, i, ndim, spec))) - goto fail; - if (unlikely(!__pyx_check_suboffsets(buf, i, ndim, spec))) - goto fail; - } - if (unlikely(buf->strides && !__pyx_verify_contig(buf, ndim, c_or_f_flag))) - goto fail; - } - if (unlikely(__Pyx_init_memviewslice(memview, ndim, memviewslice, - new_memview != NULL) == -1)) { - goto fail; - } - retval = 0; - goto no_fail; -fail: - Py_XDECREF(new_memview); - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 
}, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_float, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 1, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { - const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0; - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(int) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(int) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); 
-#endif - } - } else { - if (sizeof(int) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(int), - little, !is_unsigned); - } -} - -/* CIntFromPyVerify */ - #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { - const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0; - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return 
PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); - } -} - -/* MemviewSliceCopyTemplate */ - static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object) -{ - __Pyx_RefNannyDeclarations - int i; - __Pyx_memviewslice new_mvs = { 0, 0, { 0 }, { 0 }, { 0 } }; - struct __pyx_memoryview_obj *from_memview = from_mvs->memview; - Py_buffer *buf = &from_memview->view; - PyObject *shape_tuple = NULL; - PyObject *temp_int = NULL; - struct __pyx_array_obj *array_obj = NULL; - struct __pyx_memoryview_obj *memview_obj = NULL; - __Pyx_RefNannySetupContext("__pyx_memoryview_copy_new_contig", 0); - for (i = 0; i < ndim; i++) { - if (unlikely(from_mvs->suboffsets[i] >= 0)) { - PyErr_Format(PyExc_ValueError, "Cannot copy memoryview slice with " - "indirect dimensions (axis %d)", i); - goto fail; - } - } - shape_tuple = PyTuple_New(ndim); - if (unlikely(!shape_tuple)) { - goto fail; - } - __Pyx_GOTREF(shape_tuple); - for(i = 0; i < ndim; i++) { - temp_int = PyInt_FromSsize_t(from_mvs->shape[i]); - if(unlikely(!temp_int)) { - goto fail; - } else { - PyTuple_SET_ITEM(shape_tuple, i, temp_int); - temp_int = NULL; - } - } - array_obj = __pyx_array_new(shape_tuple, sizeof_dtype, buf->format, (char *) mode, NULL); - if (unlikely(!array_obj)) { - goto fail; - } - __Pyx_GOTREF(array_obj); - memview_obj = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - (PyObject *) array_obj, contig_flag, - dtype_is_object, - from_mvs->memview->typeinfo); - if (unlikely(!memview_obj)) - goto fail; - if (unlikely(__Pyx_init_memviewslice(memview_obj, ndim, &new_mvs, 1) < 0)) - goto fail; - if (unlikely(__pyx_memoryview_copy_contents(*from_mvs, new_mvs, ndim, ndim, - dtype_is_object) < 
0)) - goto fail; - goto no_fail; -fail: - __Pyx_XDECREF(new_mvs.memview); - new_mvs.memview = NULL; - new_mvs.data = NULL; -no_fail: - __Pyx_XDECREF(shape_tuple); - __Pyx_XDECREF(temp_int); - __Pyx_XDECREF(array_obj); - __Pyx_RefNannyFinishContext(); - return new_mvs; -} - -/* CIntFromPy */ - static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { - const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0; - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(int) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, 
unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(int) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) - case -2: - if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) 
((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) ((((((((((int)digits[3]) << 
PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } -#endif - if (sizeof(int) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - -/* CIntFromPy */ - static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { - const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0; - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(long) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - 
} - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { - return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(long) <= sizeof(unsigned 
long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) - case -2: - if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned 
long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } -#endif - if (sizeof(long) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && 
!PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* CIntFromPy */ - static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *x) { - const char neg_one = (char) ((char) 0 - (char) 1), const_zero = (char) 0; - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(char) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(char, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (char) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (char) 0; - case 1: __PYX_VERIFY_RETURN_INT(char, digit, digits[0]) - case 2: - if (8 * sizeof(char) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 2 * PyLong_SHIFT) { - return (char) (((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - case 3: - if (8 * 
sizeof(char) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 3 * PyLong_SHIFT) { - return (char) (((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(char) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 4 * PyLong_SHIFT) { - return (char) (((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (char) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(char) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(char) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (char) 0; - case -1: __PYX_VERIFY_RETURN_INT(char, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(char, digit, +digits[0]) - case -2: - if (8 * sizeof(char) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) 
{ - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(char) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - return (char) ((((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(char) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - return (char) ((((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 4 * 
PyLong_SHIFT) { - return (char) (((char)-1)*(((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(char) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 4 * PyLong_SHIFT) { - return (char) ((((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - } -#endif - if (sizeof(char) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(char, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(char) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(char, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - char val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (char) -1; - } - } else { - char val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (char) -1; - val = __Pyx_PyInt_As_char(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to char"); - 
return (char) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to char"); - return (char) -1; -} - -/* CheckBinaryVersion */ - static int __Pyx_check_binary_version(void) { - char ctversion[4], rtversion[4]; - PyOS_snprintf(ctversion, 4, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); - PyOS_snprintf(rtversion, 4, "%s", Py_GetVersion()); - if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) { - char message[200]; - PyOS_snprintf(message, sizeof(message), - "compiletime version %s of module '%.100s' " - "does not match runtime version %s", - ctversion, __Pyx_MODULE_NAME, rtversion); - return PyErr_WarnEx(NULL, message, 1); - } - return 0; -} - -/* InitStrings */ - static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION < 3 - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - #else - if (t->is_unicode | t->is_str) { - if (t->intern) { - *t->p = PyUnicode_InternFromString(t->s); - } else if (t->encoding) { - *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); - } else { - *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); - } - } else { - *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); - } - #endif - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - ++t; - } - return 0; -} - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED -static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t 
*length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if 
(unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type %.200s). " - "The ability to return an instance of a strict subclass of int " - "is deprecated, and may be removed in a future version of Python.", - Py_TYPE(result)->tp_name)) { - Py_DECREF(result); - return NULL; - } - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type %.200s)", - type_name, type_name, Py_TYPE(result)->tp_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if 
(likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)b)->ob_digit; - const Py_ssize_t size = Py_SIZE(b); - if (likely(__Pyx_sst_abs(size) <= 1)) { - ival = likely(size) ? digits[0] : 0; - if (size == -1) ival = -ival; - return ival; - } else { - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? 
__Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -#endif /* Py_PYTHON_H */ diff --git a/spaces/xyha/sd/README.md b/spaces/xyha/sd/README.md deleted file mode 100644 index d1c909620266923065de4be30abc15abc1ff8d1e..0000000000000000000000000000000000000000 --- a/spaces/xyha/sd/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Sd -emoji: 💻 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/ArrangeView/ArrangeEditor.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/ArrangeView/ArrangeEditor.tsx deleted file mode 100644 index 6a910b53bc1f634b74a561166d65b9ad452e0b66..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/components/ArrangeView/ArrangeEditor.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import styled from "@emotion/styled" -import { FC } from "react" -import { ArrangeViewKeyboardShortcut } from "../KeyboardShortcut/ArrangeViewKeyboardShortcut" -import { ArrangeToolbar } from "./ArrangeToolbar" -import { ArrangeView } from "./ArrangeView" - -const Container = styled.div` - overflow: hidden; - display: flex; - flex-direction: column; - flex-grow: 1; - position: relative; -` - -export const ArrangeEditor: FC = () => { - return ( - - - - - - ) -} diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/GLNodes/Beats.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/GLNodes/Beats.tsx deleted file mode 100644 index 429095f4b84d80c8794aac509303d24ecdf6a905..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/components/GLNodes/Beats.tsx +++ /dev/null @@ -1,45 +0,0 @@ -import { Rectangles } from 
"@ryohey/webgl-react" -import Color from "color" -import { partition } from "lodash" -import { FC } from "react" -import { IRect } from "../../../common/geometry" -import { BeatWithX } from "../../../common/helpers/mapBeats" -import { colorToVec4 } from "../../gl/color" -import { useTheme } from "../../hooks/useTheme" - -export const Beats: FC<{ - height: number - beats: BeatWithX[] - zIndex: number -}> = ({ height, beats, zIndex }) => { - const theme = useTheme() - - const vline = (x: number): IRect => ({ - x, - y: 0, - width: 1, - height, - }) - - const [highlightedBeats, nonHighlightedBeats] = partition( - beats, - (b) => b.beat === 0, - ) - - const lines = nonHighlightedBeats.map((b) => vline(b.x)) - const highlightedLines = highlightedBeats.map((b) => vline(b.x)) - - const color = colorToVec4(Color(theme.dividerColor).alpha(0.2)) - const highlightedColor = colorToVec4(Color(theme.dividerColor).alpha(0.5)) - - return ( - <> - - - - ) -} diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deprecated/bort/convert_bort_original_gluonnlp_checkpoint_to_pytorch.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deprecated/bort/convert_bort_original_gluonnlp_checkpoint_to_pytorch.py deleted file mode 100644 index 4753f593da19b2da994acdebdd2524a42841e4f4..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deprecated/bort/convert_bort_original_gluonnlp_checkpoint_to_pytorch.py +++ /dev/null @@ -1,319 +0,0 @@ -# coding=utf-8 -# Copyright 2020, The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Convert Bort checkpoint.""" - - -import argparse -import os - -import gluonnlp as nlp -import mxnet as mx -import numpy as np -import torch -from gluonnlp.base import get_home_dir -from gluonnlp.model.bert import BERTEncoder -from gluonnlp.model.utils import _load_vocab -from gluonnlp.vocab import Vocab -from packaging import version -from torch import nn - -from transformers import BertConfig, BertForMaskedLM, BertModel, RobertaTokenizer -from transformers.models.bert.modeling_bert import ( - BertIntermediate, - BertLayer, - BertOutput, - BertSelfAttention, - BertSelfOutput, -) -from transformers.utils import logging - - -if version.parse(nlp.__version__) != version.parse("0.8.3"): - raise Exception("requires gluonnlp == 0.8.3") - -if version.parse(mx.__version__) != version.parse("1.5.0"): - raise Exception("requires mxnet == 1.5.0") - -logging.set_verbosity_info() -logger = logging.get_logger(__name__) - -SAMPLE_TEXT = "The Nymphenburg Palace is a beautiful palace in Munich!" 
- - -def convert_bort_checkpoint_to_pytorch(bort_checkpoint_path: str, pytorch_dump_folder_path: str): - """ - Convert the original Bort checkpoint (based on MXNET and Gluonnlp) to our BERT structure- - """ - - # Original Bort configuration - bort_4_8_768_1024_hparams = { - "attention_cell": "multi_head", - "num_layers": 4, - "units": 1024, - "hidden_size": 768, - "max_length": 512, - "num_heads": 8, - "scaled": True, - "dropout": 0.1, - "use_residual": True, - "embed_size": 1024, - "embed_dropout": 0.1, - "word_embed": None, - "layer_norm_eps": 1e-5, - "token_type_vocab_size": 2, - } - - predefined_args = bort_4_8_768_1024_hparams - - # Let's construct the original Bort model here - # Taken from official BERT implementation, see: - # https://github.com/alexa/bort/blob/master/bort/bort.py - encoder = BERTEncoder( - attention_cell=predefined_args["attention_cell"], - num_layers=predefined_args["num_layers"], - units=predefined_args["units"], - hidden_size=predefined_args["hidden_size"], - max_length=predefined_args["max_length"], - num_heads=predefined_args["num_heads"], - scaled=predefined_args["scaled"], - dropout=predefined_args["dropout"], - output_attention=False, - output_all_encodings=False, - use_residual=predefined_args["use_residual"], - activation=predefined_args.get("activation", "gelu"), - layer_norm_eps=predefined_args.get("layer_norm_eps", None), - ) - - # Vocab information needs to be fetched first - # It's the same as RoBERTa, so RobertaTokenizer can be used later - vocab_name = "openwebtext_ccnews_stories_books_cased" - - # Specify download folder to Gluonnlp's vocab - gluon_cache_dir = os.path.join(get_home_dir(), "models") - bort_vocab = _load_vocab(vocab_name, None, gluon_cache_dir, cls=Vocab) - - original_bort = nlp.model.BERTModel( - encoder, - len(bort_vocab), - units=predefined_args["units"], - embed_size=predefined_args["embed_size"], - embed_dropout=predefined_args["embed_dropout"], - word_embed=predefined_args["word_embed"], - 
use_pooler=False, - use_token_type_embed=False, - token_type_vocab_size=predefined_args["token_type_vocab_size"], - use_classifier=False, - use_decoder=False, - ) - - original_bort.load_parameters(bort_checkpoint_path, cast_dtype=True, ignore_extra=True) - params = original_bort._collect_params_with_prefix() - - # Build our config 🤗 - hf_bort_config_json = { - "architectures": ["BertForMaskedLM"], - "attention_probs_dropout_prob": predefined_args["dropout"], - "hidden_act": "gelu", - "hidden_dropout_prob": predefined_args["dropout"], - "hidden_size": predefined_args["embed_size"], - "initializer_range": 0.02, - "intermediate_size": predefined_args["hidden_size"], - "layer_norm_eps": predefined_args["layer_norm_eps"], - "max_position_embeddings": predefined_args["max_length"], - "model_type": "bort", - "num_attention_heads": predefined_args["num_heads"], - "num_hidden_layers": predefined_args["num_layers"], - "pad_token_id": 1, # 2 = BERT, 1 = RoBERTa - "type_vocab_size": 1, # 2 = BERT, 1 = RoBERTa - "vocab_size": len(bort_vocab), - } - - hf_bort_config = BertConfig.from_dict(hf_bort_config_json) - hf_bort_model = BertForMaskedLM(hf_bort_config) - hf_bort_model.eval() - - # Parameter mapping table (Gluonnlp to Transformers) - # * denotes layer index - # - # | Gluon Parameter | Transformers Parameter - # | -------------------------------------------------------------- | ---------------------- - # | `encoder.layer_norm.beta` | `bert.embeddings.LayerNorm.bias` - # | `encoder.layer_norm.gamma` | `bert.embeddings.LayerNorm.weight` - # | `encoder.position_weight` | `bert.embeddings.position_embeddings.weight` - # | `word_embed.0.weight` | `bert.embeddings.word_embeddings.weight` - # | `encoder.transformer_cells.*.attention_cell.proj_key.bias` | `bert.encoder.layer.*.attention.self.key.bias` - # | `encoder.transformer_cells.*.attention_cell.proj_key.weight` | `bert.encoder.layer.*.attention.self.key.weight` - # | `encoder.transformer_cells.*.attention_cell.proj_query.bias` 
| `bert.encoder.layer.*.attention.self.query.bias` - # | `encoder.transformer_cells.*.attention_cell.proj_query.weight` | `bert.encoder.layer.*.attention.self.query.weight` - # | `encoder.transformer_cells.*.attention_cell.proj_value.bias` | `bert.encoder.layer.*.attention.self.value.bias` - # | `encoder.transformer_cells.*.attention_cell.proj_value.weight` | `bert.encoder.layer.*.attention.self.value.weight` - # | `encoder.transformer_cells.*.ffn.ffn_2.bias` | `bert.encoder.layer.*.attention.output.dense.bias` - # | `encoder.transformer_cells.*.ffn.ffn_2.weight` | `bert.encoder.layer.*.attention.output.dense.weight` - # | `encoder.transformer_cells.*.layer_norm.beta` | `bert.encoder.layer.*.attention.output.LayerNorm.bias` - # | `encoder.transformer_cells.*.layer_norm.gamma` | `bert.encoder.layer.*.attention.output.LayerNorm.weight` - # | `encoder.transformer_cells.*.ffn.ffn_1.bias` | `bert.encoder.layer.*.intermediate.dense.bias` - # | `encoder.transformer_cells.*.ffn.ffn_1.weight` | `bert.encoder.layer.*.intermediate.dense.weight` - # | `encoder.transformer_cells.*.ffn.layer_norm.beta` | `bert.encoder.layer.*.output.LayerNorm.bias` - # | `encoder.transformer_cells.*.ffn.layer_norm.gamma` | `bert.encoder.layer.*.output.LayerNorm.weight` - # | `encoder.transformer_cells.*.proj.bias` | `bert.encoder.layer.*.output.dense.bias` - # | `encoder.transformer_cells.*.proj.weight` | `bert.encoder.layer.*.output.dense.weight` - - # Helper function to convert MXNET Arrays to PyTorch - def to_torch(mx_array) -> nn.Parameter: - return nn.Parameter(torch.FloatTensor(mx_array.data().asnumpy())) - - # Check param shapes and map new HF param back - def check_and_map_params(hf_param, gluon_param): - shape_hf = hf_param.shape - - gluon_param = to_torch(params[gluon_param]) - shape_gluon = gluon_param.shape - - assert ( - shape_hf == shape_gluon - ), f"The gluon parameter {gluon_param} has shape {shape_gluon}, but expects shape {shape_hf} for Transformers" - - return gluon_param - - 
hf_bort_model.bert.embeddings.word_embeddings.weight = check_and_map_params( - hf_bort_model.bert.embeddings.word_embeddings.weight, "word_embed.0.weight" - ) - hf_bort_model.bert.embeddings.position_embeddings.weight = check_and_map_params( - hf_bort_model.bert.embeddings.position_embeddings.weight, "encoder.position_weight" - ) - hf_bort_model.bert.embeddings.LayerNorm.bias = check_and_map_params( - hf_bort_model.bert.embeddings.LayerNorm.bias, "encoder.layer_norm.beta" - ) - hf_bort_model.bert.embeddings.LayerNorm.weight = check_and_map_params( - hf_bort_model.bert.embeddings.LayerNorm.weight, "encoder.layer_norm.gamma" - ) - - # Inspired by RoBERTa conversion script, we just zero them out (Bort does not use them) - hf_bort_model.bert.embeddings.token_type_embeddings.weight.data = torch.zeros_like( - hf_bort_model.bert.embeddings.token_type_embeddings.weight.data - ) - - for i in range(hf_bort_config.num_hidden_layers): - layer: BertLayer = hf_bort_model.bert.encoder.layer[i] - - # self attention - self_attn: BertSelfAttention = layer.attention.self - - self_attn.key.bias.data = check_and_map_params( - self_attn.key.bias.data, f"encoder.transformer_cells.{i}.attention_cell.proj_key.bias" - ) - - self_attn.key.weight.data = check_and_map_params( - self_attn.key.weight.data, f"encoder.transformer_cells.{i}.attention_cell.proj_key.weight" - ) - self_attn.query.bias.data = check_and_map_params( - self_attn.query.bias.data, f"encoder.transformer_cells.{i}.attention_cell.proj_query.bias" - ) - self_attn.query.weight.data = check_and_map_params( - self_attn.query.weight.data, f"encoder.transformer_cells.{i}.attention_cell.proj_query.weight" - ) - self_attn.value.bias.data = check_and_map_params( - self_attn.value.bias.data, f"encoder.transformer_cells.{i}.attention_cell.proj_value.bias" - ) - self_attn.value.weight.data = check_and_map_params( - self_attn.value.weight.data, f"encoder.transformer_cells.{i}.attention_cell.proj_value.weight" - ) - - # self attention 
output - self_output: BertSelfOutput = layer.attention.output - - self_output.dense.bias = check_and_map_params( - self_output.dense.bias, f"encoder.transformer_cells.{i}.proj.bias" - ) - self_output.dense.weight = check_and_map_params( - self_output.dense.weight, f"encoder.transformer_cells.{i}.proj.weight" - ) - self_output.LayerNorm.bias = check_and_map_params( - self_output.LayerNorm.bias, f"encoder.transformer_cells.{i}.layer_norm.beta" - ) - self_output.LayerNorm.weight = check_and_map_params( - self_output.LayerNorm.weight, f"encoder.transformer_cells.{i}.layer_norm.gamma" - ) - - # intermediate - intermediate: BertIntermediate = layer.intermediate - - intermediate.dense.bias = check_and_map_params( - intermediate.dense.bias, f"encoder.transformer_cells.{i}.ffn.ffn_1.bias" - ) - intermediate.dense.weight = check_and_map_params( - intermediate.dense.weight, f"encoder.transformer_cells.{i}.ffn.ffn_1.weight" - ) - - # output - bert_output: BertOutput = layer.output - - bert_output.dense.bias = check_and_map_params( - bert_output.dense.bias, f"encoder.transformer_cells.{i}.ffn.ffn_2.bias" - ) - bert_output.dense.weight = check_and_map_params( - bert_output.dense.weight, f"encoder.transformer_cells.{i}.ffn.ffn_2.weight" - ) - bert_output.LayerNorm.bias = check_and_map_params( - bert_output.LayerNorm.bias, f"encoder.transformer_cells.{i}.ffn.layer_norm.beta" - ) - bert_output.LayerNorm.weight = check_and_map_params( - bert_output.LayerNorm.weight, f"encoder.transformer_cells.{i}.ffn.layer_norm.gamma" - ) - - # Save space and energy 🎄 - hf_bort_model.half() - - # Compare output of both models - tokenizer = RobertaTokenizer.from_pretrained("roberta-base") - - input_ids = tokenizer.encode_plus(SAMPLE_TEXT)["input_ids"] - - # Get gluon output - gluon_input_ids = mx.nd.array([input_ids]) - output_gluon = original_bort(inputs=gluon_input_ids, token_types=[]) - - # Get Transformer output (save and reload model again) - 
hf_bort_model.save_pretrained(pytorch_dump_folder_path)
-    hf_bort_model = BertModel.from_pretrained(pytorch_dump_folder_path)
-    hf_bort_model.eval()
-
-    input_ids = tokenizer.encode_plus(SAMPLE_TEXT, return_tensors="pt")
-    output_hf = hf_bort_model(**input_ids)[0]
-
-    gluon_layer = output_gluon[0].asnumpy()
-    hf_layer = output_hf[0].detach().numpy()
-
-    max_absolute_diff = np.max(np.abs(hf_layer - gluon_layer)).item()
-    success = np.allclose(gluon_layer, hf_layer, atol=1e-3)
-
-    if success:
-        print("✔️ Both models output the same tensors")
-    else:
-        print("❌ Both models do **NOT** output the same tensors")
-        print("Absolute difference is:", max_absolute_diff)
-
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser()
-    # Required parameters
-    parser.add_argument(
-        "--bort_checkpoint_path", default=None, type=str, required=True, help="Path to the official Bort params file."
-    )
-    parser.add_argument(
-        "--pytorch_dump_folder_path", default=None, type=str, required=True, help="Path to the output PyTorch model."
- ) - args = parser.parse_args() - convert_bort_checkpoint_to_pytorch(args.bort_checkpoint_path, args.pytorch_dump_folder_path) diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/diffusion/unit2mel.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/diffusion/unit2mel.py deleted file mode 100644 index 52293b13da8e1afeef6fa5586aeaf01cbcc27fb7..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/diffusion/unit2mel.py +++ /dev/null @@ -1,147 +0,0 @@ -import os -import yaml -import torch -import torch.nn as nn -import numpy as np -from .diffusion import GaussianDiffusion -from .wavenet import WaveNet -from .vocoder import Vocoder - -class DotDict(dict): - def __getattr__(*args): - val = dict.get(*args) - return DotDict(val) if type(val) is dict else val - - __setattr__ = dict.__setitem__ - __delattr__ = dict.__delitem__ - - -def load_model_vocoder( - model_path, - device='cpu', - config_path = None - ): - if config_path is None: config_file = os.path.join(os.path.split(model_path)[0], 'config.yaml') - else: config_file = config_path - - with open(config_file, "r") as config: - args = yaml.safe_load(config) - args = DotDict(args) - - # load vocoder - vocoder = Vocoder(args.vocoder.type, args.vocoder.ckpt, device=device) - - # load model - model = Unit2Mel( - args.data.encoder_out_channels, - args.model.n_spk, - args.model.use_pitch_aug, - vocoder.dimension, - args.model.n_layers, - args.model.n_chans, - args.model.n_hidden) - - print(' [Loading] ' + model_path) - ckpt = torch.load(model_path, map_location=torch.device(device)) - model.to(device) - model.load_state_dict(ckpt['model']) - model.eval() - return model, vocoder, args - - -class Unit2Mel(nn.Module): - def __init__( - self, - input_channel, - n_spk, - use_pitch_aug=False, - out_dims=128, - n_layers=20, - n_chans=384, - n_hidden=256): - super().__init__() - self.unit_embed = nn.Linear(input_channel, n_hidden) - self.f0_embed = nn.Linear(1, n_hidden) - 
self.volume_embed = nn.Linear(1, n_hidden)
-        if use_pitch_aug:
-            self.aug_shift_embed = nn.Linear(1, n_hidden, bias=False)
-        else:
-            self.aug_shift_embed = None
-        self.n_spk = n_spk
-        if n_spk is not None and n_spk > 1:
-            self.spk_embed = nn.Embedding(n_spk, n_hidden)
-
-        self.n_hidden = n_hidden
-        # diffusion
-        self.decoder = GaussianDiffusion(WaveNet(out_dims, n_layers, n_chans, n_hidden), out_dims=out_dims)
-        self.input_channel = input_channel
-
-    def init_spkembed(self, units, f0, volume, spk_id = None, spk_mix_dict = None, aug_shift = None,
-                gt_spec=None, infer=True, infer_speedup=10, method='dpm-solver', k_step=300, use_tqdm=True):
-
-        '''
-        input: 
-            B x n_frames x n_unit
-        return: 
-            dict of B x n_frames x feat
-        '''
-        x = self.unit_embed(units) + self.f0_embed((1+ f0 / 700).log()) + self.volume_embed(volume)
-        if self.n_spk is not None and self.n_spk > 1:
-            if spk_mix_dict is not None:
-                spk_embed_mix = torch.zeros((1, 1, self.n_hidden))
-                for k, v in spk_mix_dict.items():
-                    spk_id_torch = torch.LongTensor(np.array([[k]])).to(units.device)
-                    spk_embeddd = self.spk_embed(spk_id_torch)
-                    self.speaker_map[k] = spk_embeddd
-                    spk_embed_mix = spk_embed_mix + v * spk_embeddd
-                x = x + spk_embed_mix
-            else:
-                x = x + self.spk_embed(spk_id - 1)
-        self.speaker_map = self.speaker_map.unsqueeze(0)
-        self.speaker_map = self.speaker_map.detach()
-        return x.transpose(1, 2)
-
-    def init_spkmix(self, n_spk):
-        self.speaker_map = torch.zeros((n_spk,1,1,self.n_hidden))
-        hubert_hidden_size = self.input_channel
-        n_frames = 10
-        hubert = torch.randn((1, n_frames, hubert_hidden_size))
-        mel2ph = torch.arange(end=n_frames).unsqueeze(0).long()
-        f0 = torch.randn((1, n_frames))
-        volume = torch.randn((1, n_frames))
-        spks = {}
-        for i in range(n_spk):
-            spks.update({i:1.0/float(self.n_spk)})
-        orgouttt = self.init_spkembed(hubert, f0.unsqueeze(-1), volume.unsqueeze(-1), spk_mix_dict=spks)
-
-    def forward(self, units, f0, volume, spk_id = None, spk_mix_dict = None, aug_shift = 
None, - gt_spec=None, infer=True, infer_speedup=10, method='dpm-solver', k_step=300, use_tqdm=True): - - ''' - input: - B x n_frames x n_unit - return: - dict of B x n_frames x feat - ''' - - x = self.unit_embed(units) + self.f0_embed((1+ f0 / 700).log()) + self.volume_embed(volume) - if self.n_spk is not None and self.n_spk > 1: - if spk_mix_dict is not None: - for k, v in spk_mix_dict.items(): - spk_id_torch = torch.LongTensor(np.array([[k]])).to(units.device) - x = x + v * self.spk_embed(spk_id_torch) - else: - if spk_id.shape[1] > 1: - g = spk_id.reshape((spk_id.shape[0], spk_id.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - x = x + g - else: - x = x + self.spk_embed(spk_id) - if self.aug_shift_embed is not None and aug_shift is not None: - x = x + self.aug_shift_embed(aug_shift / 5) - x = self.decoder(x, gt_spec=gt_spec, infer=infer, infer_speedup=infer_speedup, method=method, k_step=k_step, use_tqdm=use_tqdm) - - return x - diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vdecoder/nsf_hifigan/env.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vdecoder/nsf_hifigan/env.py deleted file mode 100644 index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vdecoder/nsf_hifigan/env.py +++ /dev/null @@ -1,15 +0,0 @@ -import os -import shutil - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - - -def build_env(config, config_name, path): - t_path = os.path.join(path, config_name) - if config != t_path: - os.makedirs(path, exist_ok=True) - shutil.copyfile(config, os.path.join(path, config_name)) diff --git a/spaces/ysharma/visual_chatgpt_dummy/visual_foundation_models.py b/spaces/ysharma/visual_chatgpt_dummy/visual_foundation_models.py 
deleted file mode 100644 index 0359cde9f83e1933abc195883ed22e3ee64e4e59..0000000000000000000000000000000000000000 --- a/spaces/ysharma/visual_chatgpt_dummy/visual_foundation_models.py +++ /dev/null @@ -1,702 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline, StableDiffusionInstructPix2PixPipeline -from diffusers import EulerAncestralDiscreteScheduler -from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler -from controlnet_aux import OpenposeDetector, MLSDdetector, HEDdetector - -from transformers import AutoModelForCausalLM, AutoTokenizer, CLIPSegProcessor, CLIPSegForImageSegmentation -from transformers import pipeline, BlipProcessor, BlipForConditionalGeneration, BlipForQuestionAnswering -from transformers import AutoImageProcessor, UperNetForSemanticSegmentation - -import os -import random -import torch -import cv2 -import uuid -from PIL import Image -import numpy as np -from pytorch_lightning import seed_everything - -def prompts(name, description): - def decorator(func): - func.name = name - func.description = description - return func - - return decorator - -def get_new_image_name(org_img_name, func_name="update"): - head_tail = os.path.split(org_img_name) - head = head_tail[0] - tail = head_tail[1] - name_split = tail.split('.')[0].split('_') - this_new_uuid = str(uuid.uuid4())[0:4] - if len(name_split) == 1: - most_org_file_name = name_split[0] - recent_prev_file_name = name_split[0] - new_file_name = '{}_{}_{}_{}.png'.format(this_new_uuid, func_name, recent_prev_file_name, most_org_file_name) - else: - assert len(name_split) == 4 - most_org_file_name = name_split[3] - recent_prev_file_name = name_split[0] - new_file_name = '{}_{}_{}_{}.png'.format(this_new_uuid, func_name, recent_prev_file_name, most_org_file_name) - return os.path.join(head, new_file_name) - - -class MaskFormer: - def __init__(self, device): - print("Initializing MaskFormer to %s" % device) - self.device = 
device
-        self.processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
-        self.model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined").to(device)
-
-    def inference(self, image_path, text):
-        threshold = 0.5
-        min_area = 0.02
-        padding = 20
-        original_image = Image.open(image_path)
-        image = original_image.resize((512, 512))
-        inputs = self.processor(text=text, images=image, padding="max_length", return_tensors="pt").to(self.device)
-        with torch.no_grad():
-            outputs = self.model(**inputs)
-        mask = torch.sigmoid(outputs[0]).squeeze().cpu().numpy() > threshold
-        area_ratio = len(np.argwhere(mask)) / (mask.shape[0] * mask.shape[1])
-        if area_ratio < min_area:
-            return None
-        true_indices = np.argwhere(mask)
-        mask_array = np.zeros_like(mask, dtype=bool)
-        for idx in true_indices:
-            padded_slice = tuple(slice(max(0, i - padding), i + padding + 1) for i in idx)
-            mask_array[padded_slice] = True
-        visual_mask = (mask_array * 255).astype(np.uint8)
-        image_mask = Image.fromarray(visual_mask)
-        return image_mask.resize(original_image.size)
-
-
-class ImageEditing:
-    def __init__(self, device):
-        print("Initializing ImageEditing to %s" % device)
-        self.device = device
-        self.mask_former = MaskFormer(device=self.device)
-        self.inpaint = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", revision="fp16", torch_dtype=torch.float16).to(device)
-
-    @prompts(name="Remove Something From The Photo",
-             description="useful when you want to remove an object or something from the photo "
-                         "from its description or location. "
-                         "The input to this tool should be a comma separated string of two, "
-                         "representing the image_path and the object to be removed. ")
-    def inference_remove(self, inputs):
-        image_path, to_be_removed_txt = inputs.split(",")
-        return self.inference_replace(f"{image_path},{to_be_removed_txt},background")
-
-    @prompts(name="Replace Something From The Photo",
-             description="useful when you want to replace an object from the object description or "
-                         "location with another object from its description. "
-                         "The input to this tool should be a comma separated string of three, "
-                         "representing the image_path, the object to be replaced, the object to be replaced with ")
-    def inference_replace(self, inputs):
-        image_path, to_be_replaced_txt, replace_with_txt = inputs.split(",")
-        original_image = Image.open(image_path)
-        original_size = original_image.size
-        mask_image = self.mask_former.inference(image_path, to_be_replaced_txt)
-        updated_image = self.inpaint(prompt=replace_with_txt, image=original_image.resize((512, 512)),
-                                     mask_image=mask_image.resize((512, 512))).images[0]
-        updated_image_path = get_new_image_name(image_path, func_name="replace-something")
-        updated_image = updated_image.resize(original_size)
-        updated_image.save(updated_image_path)
-        print(
-            f"\nProcessed ImageEditing, Input Image: {image_path}, Replace {to_be_replaced_txt} to {replace_with_txt}, "
-            f"Output Image: {updated_image_path}")
-        return updated_image_path
-
-
-class InstructPix2Pix:
-    def __init__(self, device):
-        print("Initializing InstructPix2Pix to %s" % device)
-        self.device = device
-        self.pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained("timbrooks/instruct-pix2pix", torch_dtype=torch.float16,
-                                                                           safety_checker=None).to(device)
-        self.pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(self.pipe.scheduler.config)
-
-    @prompts(name="Instruct Image Using Text",
-             description="useful when you want the style of the image to be like the text. "
-                         "like: make it look like a painting. or make it like a robot. "
-                         "The input to this tool should be a comma separated string of two, "
-                         "representing the image_path and the text. ")
-    def inference(self, inputs):
-        """Change style of image."""
-        print("===>Starting InstructPix2Pix Inference")
-        image_path, text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
-        original_image = Image.open(image_path)
-        image = self.pipe(text, image=original_image, num_inference_steps=40, image_guidance_scale=1.2).images[0]
-        updated_image_path = get_new_image_name(image_path, func_name="pix2pix")
-        image.save(updated_image_path)
-        print(f"\nProcessed InstructPix2Pix, Input Image: {image_path}, Instruct Text: {text}, "
-              f"Output Image: {updated_image_path}")
-        return updated_image_path
-
-
-class Text2Image:
-    def __init__(self, device):
-        print("Initializing Text2Image to %s" % device)
-        self.device = device
-        self.pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5",torch_dtype=torch.float16)
-        self.text_refine_tokenizer = AutoTokenizer.from_pretrained("Gustavosta/MagicPrompt-Stable-Diffusion")
-        self.text_refine_model = AutoModelForCausalLM.from_pretrained("Gustavosta/MagicPrompt-Stable-Diffusion")
-        self.text_refine_gpt2_pipe = pipeline("text-generation", model=self.text_refine_model,
-                                              tokenizer=self.text_refine_tokenizer, device=self.device)
-        self.pipe.to(device)
-
-    @prompts(name="Generate Image From User Input Text",
-             description="useful when you want to generate an image from a user input text and save it to a file. "
-                         "like: generate an image of an object or something, or generate an image that includes some objects. "
-                         "The input to this tool should be a string, representing the text used to generate image. 
") - def inference(self, text): - image_filename = os.path.join('image', str(uuid.uuid4())[0:8] + ".png") - refined_text = self.text_refine_gpt2_pipe(text)[0]["generated_text"] - image = self.pipe(refined_text).images[0] - image.save(image_filename) - print( - f"\nProcessed Text2Image, Input Text: {text}, Refined Text: {refined_text}, Output Image: {image_filename}") - return image_filename - - -class ImageCaptioning: - def __init__(self, device): - print("Initializing ImageCaptioning to %s" % device) - self.device = device - self.processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") - self.model = BlipForConditionalGeneration.from_pretrained( - "Salesforce/blip-image-captioning-base", torch_dtype=torch.float16).to(self.device) - - @prompts(name="Get Photo Description", - description="useful when you want to know what is inside the photo. receives image_path as input. " - "The input to this tool should be a string, representing the image_path. ") - def inference(self, image_path): - inputs = self.processor(Image.open(image_path), return_tensors="pt").to(self.device, torch.float16) - out = self.model.generate(**inputs) - captions = self.processor.decode(out[0], skip_special_tokens=True) - print(f"\nProcessed ImageCaptioning, Input Image: {image_path}, Output Text: {captions}") - return captions - - -class Image2Canny: - def __init__(self, device): - print("Initializing Image2Canny") - self.low_threshold = 100 - self.high_threshold = 200 - - @prompts(name="Edge Detection On Image", - description="useful when you want to detect the edge of the image. " - "like: detect the edges of this image, or canny detection on image, " - "or perform edge detection on this image, or detect the canny image of this image. 
"
-                         "The input to this tool should be a string, representing the image_path")
-    def inference(self, inputs):
-        image = Image.open(inputs)
-        image = np.array(image)
-        canny = cv2.Canny(image, self.low_threshold, self.high_threshold)
-        canny = canny[:, :, None]
-        canny = np.concatenate([canny, canny, canny], axis=2)
-        canny = Image.fromarray(canny)
-        updated_image_path = get_new_image_name(inputs, func_name="edge")
-        canny.save(updated_image_path)
-        print(f"\nProcessed Image2Canny, Input Image: {inputs}, Output Text: {updated_image_path}")
-        return updated_image_path
-
-
-class CannyText2Image:
-    def __init__(self, device):
-        print("Initializing CannyText2Image to %s" % device)
-        self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-canny", torch_dtype=torch.float16)
-        self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
-            "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None, torch_dtype=torch.float16)
-        self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
-        self.pipe.to(device)
-        self.seed = -1
-        self.a_prompt = 'best quality, extremely detailed'
-        self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
-                        'fewer digits, cropped, worst quality, low quality'
-
-    @prompts(name="Generate Image Condition On Canny Image",
-             description="useful when you want to generate a new real image from both the user description and a canny image."
-                         " like: generate a real image of an object or something from this canny image,"
-                         " or generate a new real image of an object or something from this edge image. "
-                         "The input to this tool should be a comma separated string of two, "
-                         "representing the image_path and the user description. ")
-    def inference(self, inputs):
-        image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
-        image = Image.open(image_path)
-        self.seed = random.randint(0, 65535)
-        seed_everything(self.seed)
-        prompt = instruct_text + ', ' + self.a_prompt
-        image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
-                          guidance_scale=9.0).images[0]
-        updated_image_path = get_new_image_name(image_path, func_name="canny2image")
-        image.save(updated_image_path)
-        print(f"\nProcessed CannyText2Image, Input Canny: {image_path}, Input Text: {instruct_text}, "
-              f"Output Text: {updated_image_path}")
-        return updated_image_path
-
-
-class Image2Line:
-    def __init__(self, device):
-        print("Initializing Image2Line")
-        self.detector = MLSDdetector.from_pretrained('lllyasviel/ControlNet')
-
-    @prompts(name="Line Detection On Image",
-             description="useful when you want to detect the straight line of the image. "
-                         "like: detect the straight lines of this image, or straight line detection on image, "
-                         "or perform straight line detection on this image, or detect the straight line image of this image. "
-                         "The input to this tool should be a string, representing the image_path")
-    def inference(self, inputs):
-        image = Image.open(inputs)
-        mlsd = self.detector(image)
-        updated_image_path = get_new_image_name(inputs, func_name="line-of")
-        mlsd.save(updated_image_path)
-        print(f"\nProcessed Image2Line, Input Image: {inputs}, Output Line: {updated_image_path}")
-        return updated_image_path
-
-
-class LineText2Image:
-    def __init__(self, device):
-        print("Initializing LineText2Image to %s" % device)
-        self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-mlsd")
-        self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
-            "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None
-        )
-        self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
-        self.pipe.to(device)
-        self.seed = -1
-        self.a_prompt = 'best quality, extremely detailed'
-        self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
-                        'fewer digits, cropped, worst quality, low quality'
-
-    @prompts(name="Generate Image Condition On Line Image",
-             description="useful when you want to generate a new real image from both the user description "
-                         "and a straight line image. "
-                         "like: generate a real image of an object or something from this straight line image, "
-                         "or generate a new real image of an object or something from these straight lines. "
-                         "The input to this tool should be a comma separated string of two, "
-                         "representing the image_path and the user description. 
")
-    def inference(self, inputs):
-        image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
-        image = Image.open(image_path)
-        self.seed = random.randint(0, 65535)
-        seed_everything(self.seed)
-        prompt = instruct_text + ', ' + self.a_prompt
-        image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
-                          guidance_scale=9.0).images[0]
-        updated_image_path = get_new_image_name(image_path, func_name="line2image")
-        image.save(updated_image_path)
-        print(f"\nProcessed LineText2Image, Input Line: {image_path}, Input Text: {instruct_text}, "
-              f"Output Text: {updated_image_path}")
-        return updated_image_path
-
-
-class Image2Hed:
-    def __init__(self, device):
-        print("Initializing Image2Hed")
-        self.detector = HEDdetector.from_pretrained('lllyasviel/ControlNet')
-
-    @prompts(name="Hed Detection On Image",
-             description="useful when you want to detect the soft hed boundary of the image. "
-                         "like: detect the soft hed boundary of this image, or hed boundary detection on image, "
-                         "or perform hed boundary detection on this image, or detect soft hed boundary image of this image. "
-                         "The input to this tool should be a string, representing the image_path")
-    def inference(self, inputs):
-        image = Image.open(inputs)
-        hed = self.detector(image)
-        updated_image_path = get_new_image_name(inputs, func_name="hed-boundary")
-        hed.save(updated_image_path)
-        print(f"\nProcessed Image2Hed, Input Image: {inputs}, Output Hed: {updated_image_path}")
-        return updated_image_path
-
-
-class HedText2Image:
-    def __init__(self, device):
-        print("Initializing HedText2Image to %s" % device)
-        self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-hed")
-        self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
-            "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None
-        )
-        self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
-        self.pipe.to(device)
-        self.seed = -1
-        self.a_prompt = 'best quality, extremely detailed'
-        self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
-                        'fewer digits, cropped, worst quality, low quality'
-
-    @prompts(name="Generate Image Condition On Soft Hed Boundary Image",
-             description="useful when you want to generate a new real image from both the user description "
-                         "and a soft hed boundary image. "
-                         "like: generate a real image of an object or something from this soft hed boundary image, "
-                         "or generate a new real image of an object or something from this hed boundary. "
-                         "The input to this tool should be a comma separated string of two, "
-                         "representing the image_path and the user description")
-    def inference(self, inputs):
-        image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
-        image = Image.open(image_path)
-        self.seed = random.randint(0, 65535)
-        seed_everything(self.seed)
-        prompt = instruct_text + ', ' + self.a_prompt
-        image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
-                          guidance_scale=9.0).images[0]
-        updated_image_path = get_new_image_name(image_path, func_name="hed2image")
-        image.save(updated_image_path)
-        print(f"\nProcessed HedText2Image, Input Hed: {image_path}, Input Text: {instruct_text}, "
-              f"Output Image: {updated_image_path}")
-        return updated_image_path
-
-
-class Image2Scribble:
-    def __init__(self, device):
-        print("Initializing Image2Scribble")
-        self.detector = HEDdetector.from_pretrained('lllyasviel/ControlNet')
-
-    @prompts(name="Sketch Detection On Image",
-             description="useful when you want to generate a scribble of the image. "
-                         "like: generate a scribble of this image, or generate a sketch from this image, "
-                         "detect the sketch from this image. 
"
-                         "The input to this tool should be a string, representing the image_path")
-    def inference(self, inputs):
-        image = Image.open(inputs)
-        scribble = self.detector(image, scribble=True)
-        updated_image_path = get_new_image_name(inputs, func_name="scribble")
-        scribble.save(updated_image_path)
-        print(f"\nProcessed Image2Scribble, Input Image: {inputs}, Output Scribble: {updated_image_path}")
-        return updated_image_path
-
-
-class ScribbleText2Image:
-    def __init__(self, device):
-        print("Initializing ScribbleText2Image to %s" % device)
-        self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-scribble")
-        self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
-            "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None
-        )
-        self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
-        self.pipe.to(device)
-        self.seed = -1
-        self.a_prompt = 'best quality, extremely detailed'
-        self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
-                        'fewer digits, cropped, worst quality, low quality'
-
-    @prompts(name="Generate Image Condition On Sketch Image",
-             description="useful when you want to generate a new real image from both the user description and "
-                         "a scribble image or a sketch image. "
-                         "The input to this tool should be a comma separated string of two, "
-                         "representing the image_path and the user description")
-    def inference(self, inputs):
-        image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
-        image = Image.open(image_path)
-        self.seed = random.randint(0, 65535)
-        seed_everything(self.seed)
-        prompt = instruct_text + ', ' + self.a_prompt
-        image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
-                          guidance_scale=9.0).images[0]
-        updated_image_path = get_new_image_name(image_path, func_name="scribble2image")
-        image.save(updated_image_path)
-        print(f"\nProcessed ScribbleText2Image, Input Scribble: {image_path}, Input Text: {instruct_text}, "
-              f"Output Image: {updated_image_path}")
-        return updated_image_path
-
-
-class Image2Pose:
-    def __init__(self, device):
-        print("Initializing Image2Pose")
-        self.detector = OpenposeDetector.from_pretrained('lllyasviel/ControlNet')
-
-    @prompts(name="Pose Detection On Image",
-             description="useful when you want to detect the human pose of the image. "
-                         "like: generate human poses of this image, or generate a pose image from this image. "
-                         "The input to this tool should be a string, representing the image_path")
-    def inference(self, inputs):
-        image = Image.open(inputs)
-        pose = self.detector(image)
-        updated_image_path = get_new_image_name(inputs, func_name="human-pose")
-        pose.save(updated_image_path)
-        print(f"\nProcessed Image2Pose, Input Image: {inputs}, Output Pose: {updated_image_path}")
-        return updated_image_path
-
-
-class PoseText2Image:
-    def __init__(self, device):
-        print("Initializing PoseText2Image to %s" % device)
-        self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-openpose")
-        self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
-            "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None)
-        self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
-        self.pipe.to(device)
-        self.num_inference_steps = 20
-        self.seed = -1
-        self.unconditional_guidance_scale = 9.0
-        self.a_prompt = 'best quality, extremely detailed'
-        self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \
-                        ' fewer digits, cropped, worst quality, low quality'
-
-    @prompts(name="Generate Image Condition On Pose Image",
-             description="useful when you want to generate a new real image from both the user description "
-                         "and a human pose image. "
-                         "like: generate a real image of a human from this human pose image, "
-                         "or generate a new real image of a human from this pose. "
-                         "The input to this tool should be a comma separated string of two, "
-                         "representing the image_path and the user description")
-    def inference(self, inputs):
-        image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
-        image = Image.open(image_path)
-        self.seed = random.randint(0, 65535)
-        seed_everything(self.seed)
-        prompt = instruct_text + ', ' + self.a_prompt
-        image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
-                          guidance_scale=9.0).images[0]
-        updated_image_path = get_new_image_name(image_path, func_name="pose2image")
-        image.save(updated_image_path)
-        print(f"\nProcessed PoseText2Image, Input Pose: {image_path}, Input Text: {instruct_text}, "
-              f"Output Image: {updated_image_path}")
-        return updated_image_path
-
-
-class Image2Seg:
-    def __init__(self, device):
-        print("Initializing Image2Seg")
-        self.image_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-small")
-        self.image_segmentor = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-small")
-        self.ade_palette = [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50],
-                            [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255],
-                            [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7],
-                            [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82],
-                            [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3],
-                            [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255],
-                            [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220],
-                            [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224],
-                            [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255],
-                            [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7],
-                            [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153],
-                            [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255],
-                            [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0],
-                            [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255],
-                            [255, 71, 0], [0, 235, 
255], [0, 173, 255], [31, 0, 255], - [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], - [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], - [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], - [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], - [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], - [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], - [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], - [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255], - [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], - [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0], - [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], - [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], - [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], - [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], - [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], - [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], - [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], - [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], - [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], - [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], - [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], - [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], - [102, 255, 0], [92, 0, 255]] - - @prompts(name="Segmentation On Image", - description="useful when you want to detect segmentations of the image. " - "like: segment this image, or generate segmentations on this image, " - "or perform segmentation on this image. 
" - "The input to this tool should be a string, representing the image_path") - def inference(self, inputs): - image = Image.open(inputs) - pixel_values = self.image_processor(image, return_tensors="pt").pixel_values - with torch.no_grad(): - outputs = self.image_segmentor(pixel_values) - seg = self.image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] - color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) # height, width, 3 - palette = np.array(self.ade_palette) - for label, color in enumerate(palette): - color_seg[seg == label, :] = color - color_seg = color_seg.astype(np.uint8) - segmentation = Image.fromarray(color_seg) - updated_image_path = get_new_image_name(inputs, func_name="segmentation") - segmentation.save(updated_image_path) - print(f"\nProcessed Image2Pose, Input Image: {inputs}, Output Pose: {updated_image_path}") - return updated_image_path - - -class SegText2Image: - def __init__(self, device): - print("Initializing SegText2Image to %s" % device) - self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-seg") - self.pipe = StableDiffusionControlNetPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None) - self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config) - self.pipe.to(device) - self.seed = -1 - self.a_prompt = 'best quality, extremely detailed' - self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \ - ' fewer digits, cropped, worst quality, low quality' - - @prompts(name="Generate Image Condition On Segmentations", - description="useful when you want to generate a new real image from both the user desciption and segmentations. " - "like: generate a real image of a object or something from this segmentation image, " - "or generate a new real image of a object or something from these segmentations. 
" - "The input to this tool should be a comma seperated string of two, " - "representing the image_path and the user description") - def inference(self, inputs): - image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:]) - image = Image.open(image_path) - self.seed = random.randint(0, 65535) - seed_everything(self.seed) - prompt = instruct_text + ', ' + self.a_prompt - image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt, - guidance_scale=9.0).images[0] - updated_image_path = get_new_image_name(image_path, func_name="segment2image") - image.save(updated_image_path) - print(f"\nProcessed SegText2Image, Input Seg: {image_path}, Input Text: {instruct_text}, " - f"Output Image: {updated_image_path}") - return updated_image_path - - -class Image2Depth: - def __init__(self, device): - print("Initializing Image2Depth") - self.depth_estimator = pipeline('depth-estimation') - - @prompts(name="Predict Depth On Image", - description="useful when you want to detect depth of the image. like: generate the depth from this image, " - "or detect the depth map on this image, or predict the depth for this image. 
" - "The input to this tool should be a string, representing the image_path") - def inference(self, inputs): - image = Image.open(inputs) - depth = self.depth_estimator(image)['depth'] - depth = np.array(depth) - depth = depth[:, :, None] - depth = np.concatenate([depth, depth, depth], axis=2) - depth = Image.fromarray(depth) - updated_image_path = get_new_image_name(inputs, func_name="depth") - depth.save(updated_image_path) - print(f"\nProcessed Image2Depth, Input Image: {inputs}, Output Depth: {updated_image_path}") - return updated_image_path - - -class DepthText2Image: - def __init__(self, device): - print("Initializing DepthText2Image to %s" % device) - self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-depth") - self.pipe = StableDiffusionControlNetPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None) - self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config) - self.pipe.to(device) - self.seed = -1 - self.a_prompt = 'best quality, extremely detailed' - self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \ - ' fewer digits, cropped, worst quality, low quality' - - @prompts(name="Generate Image Condition On Depth", - description="useful when you want to generate a new real image from both the user desciption and depth image. " - "like: generate a real image of a object or something from this depth image, " - "or generate a new real image of a object or something from the depth map. 
" - "The input to this tool should be a comma seperated string of two, " - "representing the image_path and the user description") - def inference(self, inputs): - image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:]) - image = Image.open(image_path) - self.seed = random.randint(0, 65535) - seed_everything(self.seed) - prompt = instruct_text + ', ' + self.a_prompt - image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt, - guidance_scale=9.0).images[0] - updated_image_path = get_new_image_name(image_path, func_name="depth2image") - image.save(updated_image_path) - print(f"\nProcessed DepthText2Image, Input Depth: {image_path}, Input Text: {instruct_text}, " - f"Output Image: {updated_image_path}") - return updated_image_path - - -class Image2Normal: - def __init__(self, device): - print("Initializing Image2Normal") - self.depth_estimator = pipeline("depth-estimation", model="Intel/dpt-hybrid-midas") - self.bg_threhold = 0.4 - - @prompts(name="Predict Normal Map On Image", - description="useful when you want to detect norm map of the image. " - "like: generate normal map from this image, or predict normal map of this image. 
" - "The input to this tool should be a string, representing the image_path") - def inference(self, inputs): - image = Image.open(inputs) - original_size = image.size - image = self.depth_estimator(image)['predicted_depth'][0] - image = image.numpy() - image_depth = image.copy() - image_depth -= np.min(image_depth) - image_depth /= np.max(image_depth) - x = cv2.Sobel(image, cv2.CV_32F, 1, 0, ksize=3) - x[image_depth < self.bg_threhold] = 0 - y = cv2.Sobel(image, cv2.CV_32F, 0, 1, ksize=3) - y[image_depth < self.bg_threhold] = 0 - z = np.ones_like(x) * np.pi * 2.0 - image = np.stack([x, y, z], axis=2) - image /= np.sum(image ** 2.0, axis=2, keepdims=True) ** 0.5 - image = (image * 127.5 + 127.5).clip(0, 255).astype(np.uint8) - image = Image.fromarray(image) - image = image.resize(original_size) - updated_image_path = get_new_image_name(inputs, func_name="normal-map") - image.save(updated_image_path) - print(f"\nProcessed Image2Normal, Input Image: {inputs}, Output Depth: {updated_image_path}") - return updated_image_path - - -class NormalText2Image: - def __init__(self, device): - print("Initializing NormalText2Image to %s" % device) - self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-normal") - self.pipe = StableDiffusionControlNetPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None) - self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config) - self.pipe.to(device) - self.seed = -1 - self.a_prompt = 'best quality, extremely detailed' - self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \ - ' fewer digits, cropped, worst quality, low quality' - - @prompts(name="Generate Image Condition On Normal Map", - description="useful when you want to generate a new real image from both the user desciption and normal map. 
" - "like: generate a real image of a object or something from this normal map, " - "or generate a new real image of a object or something from the normal map. " - "The input to this tool should be a comma seperated string of two, " - "representing the image_path and the user description") - def inference(self, inputs): - image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:]) - image = Image.open(image_path) - self.seed = random.randint(0, 65535) - seed_everything(self.seed) - prompt = instruct_text + ', ' + self.a_prompt - image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt, - guidance_scale=9.0).images[0] - updated_image_path = get_new_image_name(image_path, func_name="normal2image") - image.save(updated_image_path) - print(f"\nProcessed NormalText2Image, Input Normal: {image_path}, Input Text: {instruct_text}, " - f"Output Image: {updated_image_path}") - return updated_image_path - - -class VisualQuestionAnswering: - def __init__(self, device): - print("Initializing VisualQuestionAnswering to %s" % device) - self.device = device - self.processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base") - self.model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base", torch_dtype=torch.float16).to(self.device) - - @prompts(name="Answer Question About The Image", - description="useful when you need an answer for a question based on an image. " - "like: what is the background color of the last image, how many cats in this figure, what is in this figure. 
" - "The input to this tool should be a comma seperated string of two, representing the image_path and the question") - def inference(self, inputs): - image_path, question = inputs.split(",") - raw_image = Image.open(image_path).convert('RGB') - inputs = self.processor(raw_image, question, return_tensors="pt").to(self.device, torch.float16) - out = self.model.generate(**inputs) - answer = self.processor.decode(out[0], skip_special_tokens=True) - print(f"\nProcessed VisualQuestionAnswering, Input Image: {image_path}, Input Question: {question}, " - f"Output Answer: {answer}") - return answer \ No newline at end of file diff --git a/spaces/yuhanbo/chat-gpt/app/constant.ts b/spaces/yuhanbo/chat-gpt/app/constant.ts deleted file mode 100644 index 818ef1fbe983cb2bfc4ec911592933d8cbe05f3a..0000000000000000000000000000000000000000 --- a/spaces/yuhanbo/chat-gpt/app/constant.ts +++ /dev/null @@ -1,5 +0,0 @@ -export const OWNER = "Yidadaa"; -export const REPO = "ChatGPT-Next-Web"; -export const REPO_URL = `https://github.com/${OWNER}/${REPO}`; -export const UPDATE_URL = `${REPO_URL}#%E4%BF%9D%E6%8C%81%E6%9B%B4%E6%96%B0-keep-updated`; -export const FETCH_COMMIT_URL = `https://api.github.com/repos/${OWNER}/${REPO}/commits?per_page=1`; diff --git a/spaces/zhang-wei-jian/docker/node_modules/cache-content-type/index.js b/spaces/zhang-wei-jian/docker/node_modules/cache-content-type/index.js deleted file mode 100644 index 60e66671ad3b0a8dde685e88ef4c792d3325b524..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/cache-content-type/index.js +++ /dev/null @@ -1,15 +0,0 @@ -'use strict'; - -const mimeTypes = require('mime-types'); -const LRU = require('ylru'); - -const typeLRUCache = new LRU(100); - -module.exports = type => { - let mimeType = typeLRUCache.get(type); - if (!mimeType) { - mimeType = mimeTypes.contentType(type); - typeLRUCache.set(type, mimeType); - } - return mimeType; -}; diff --git 
a/spaces/zhangyd/bingo/src/components/markdown.tsx b/spaces/zhangyd/bingo/src/components/markdown.tsx deleted file mode 100644 index d4491467a1f14d1d72e535caac9c40636054e5df..0000000000000000000000000000000000000000 --- a/spaces/zhangyd/bingo/src/components/markdown.tsx +++ /dev/null @@ -1,9 +0,0 @@ -import { FC, memo } from 'react' -import ReactMarkdown, { Options } from 'react-markdown' - -export const MemoizedReactMarkdown: FC = memo( - ReactMarkdown, - (prevProps, nextProps) => - prevProps.children === nextProps.children && - prevProps.className === nextProps.className -) diff --git a/spaces/zht1/test2/utils/test5.py b/spaces/zht1/test2/utils/test5.py deleted file mode 100644 index 0a843d2da9fe590d8b4aa47e568e3266ec8ec719..0000000000000000000000000000000000000000 --- a/spaces/zht1/test2/utils/test5.py +++ /dev/null @@ -1,105 +0,0 @@ -import os -# import matplotlib -# matplotlib.use('Qt5Agg') -import matplotlib.pyplot as plt -import gradio as gr -import cv2 -import numpy as np -import torch -from mobile_sam import SamAutomaticMaskGenerator, SamPredictor, sam_model_registry -from PIL import ImageDraw,Image -from utils.tools import box_prompt, format_results, point_prompt -from utils.tools_gradio import fast_process - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -# Load the pre-trained model -sam_checkpoint = r"F:\zht\code\MobileSAM-master\weights\mobile_sam.pt" -model_type = "vit_t" -mobile_sam = sam_model_registry[model_type](checkpoint=sam_checkpoint) -mobile_sam = mobile_sam.to(device=device) -mobile_sam.eval() - -mask_generator = SamAutomaticMaskGenerator(mobile_sam) -predictor = SamPredictor(mobile_sam) - -# default_example = examples[0] - -@torch.no_grad() -def segment_with_boxs( - image, - input_size=1024, - better_quality=False, - withContours=True, - use_retina=True, - mask_random_color=True, -): - global global_points - global global_point_label - - input_size = int(input_size) - w, h = image.size - scale = input_size / max(w, 
h) - new_w = int(w * scale) - new_h = int(h * scale) - - image = image.resize((new_w, new_h)) - ################# - scaled_points = np.array( - [[int(x * scale) for x in point] for point in global_points] - ) - print("scaled_points:", scaled_points) - scaled_point_label = np.array(global_point_label) - - nd_image = np.array(image) - print("nd_image shape:", nd_image.shape) # (685, 1024, 3) - predictor.set_image(nd_image) # change the shape - masks, scores, logits = predictor.predict( - point_coords=scaled_points, - point_labels=scaled_point_label, - multimask_output=True, - ) - - results = format_results(masks, scores, logits, 0) - print("number of results:", len(results)) # [530 437] - annotations, _ = point_prompt( - results, scaled_points, scaled_point_label, new_h, new_w - ) - annotations = np.array([annotations]) - # display the image - plt.imshow(annotations[0], cmap='viridis') # use the 'viridis' colormap - plt.colorbar() # show the colorbar - plt.savefig(r'F:\zht\code\2.png') - plt.show() - - fig = fast_process( - annotations=annotations, - image=image, - device=device, - scale=(1024 // input_size), - better_quality=better_quality, - mask_random_color=mask_random_color, - bbox=None, - use_retina=use_retina, - withContours=withContours, - ) - global_points = [] - global_point_label = [] - return fig, image - -################################################# -if __name__ == "__main__": - path = r"F:\zht\code\MobileSAM-master\app\assets\05.jpg" - image1 = Image.open(path) - # image = cv2.imread(path) - print(image1.size) - # global_points = [[1069,928]] - global_points = [[324,740,1448,1192]] - global_point_label = [1] - segment_with_boxs( - image1, - input_size=1024, - better_quality=False, - withContours=True, - use_retina=True, - mask_random_color=True, - ) \ No newline at end of file diff --git a/spaces/zxy666/bingo-chatai666/src/lib/isomorphic/index.ts b/spaces/zxy666/bingo-chatai666/src/lib/isomorphic/index.ts deleted file mode 100644 index 
738dc92f74079ab762d584fb7422a8c8c3b61547..0000000000000000000000000000000000000000 --- a/spaces/zxy666/bingo-chatai666/src/lib/isomorphic/index.ts +++ /dev/null @@ -1,17 +0,0 @@ -'use client' - -import Default from './browser' - -let exportsModel: any = {} - -if (process.browser) { - Object.assign(exportsModel, require('./browser').default) -} else { - Object.assign(exportsModel, require('./node').default) -} - -export default exportsModel! as typeof Default - -export const fetch: typeof Default.fetch = exportsModel!.fetch -export const WebSocket: typeof Default.WebSocket = exportsModel!.WebSocket -export const debug: typeof Default.debug = exportsModel!.debug